Sparse Spatial Shading in Augmented Reality

Rikard Olajos
Lund University

Michael Doggett
Lund University

GRAPP, February 2024

Abstract

In this work, we present a method for acquiring, storing, and using scene data to enable realistic shading of virtual objects in an augmented reality application. Our method allows for sparse sampling of the environment's lighting conditions while still delivering convincing shading of the rendered objects. We use common camera parameters, provided by a head-mounted camera, to capture lighting information from the scene and store it in a tree structure that preserves both the locality and the directionality of the data. This makes our approach suitable for augmented reality applications, where the sparse and unpredictable nature of the samples captured from a head-mounted device can be problematic. The construction of the data structure and the shading of virtual objects happen in real time, without requiring high-performance hardware. Our model is designed for augmented reality devices with optical see-through displays, and in this work we used Microsoft's HoloLens 2.

Downloads

Paper
BibTeX entry

@conference{grapp24,
author={Rikard Olajos and Michael Doggett},
title={Sparse Spatial Shading in Augmented Reality},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP},
year={2024},
pages={293-299},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012429300003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}
