Trigonometric Moments for Editable Structured Light Range Finding

In: Vision, Modeling, and Visualization 2019 (Oct. 2019)
 

Abstract

Structured-light methods remain one of the leading technologies in high-quality 3D scanning, specifically for the acquisition of single objects and simple scenes. For more complex scene geometries, however, non-local light transport (e.g., interreflections, sub-surface scattering) comes into play, which leads to errors in the depth estimation. Probing the light transport tensor, which describes the global mapping between illumination and observed intensity under the influence of the scene, can help to understand and correct these errors, but requires extensive scanning. We aim to recover a 3D subset of the full 4D light transport tensor, which represents the scene as illuminated by line patterns, rendering the approach especially useful for triangulation methods. To this end, we propose a frequency-domain approach based on spectral estimation to reduce the number of required input images. Our method can be applied independently to each pixel of the observing camera, making it perfectly parallelizable with respect to the camera pixels. The result is a closed-form representation of the scene reflection recorded under line illumination, which, if necessary, masks pixels with complex global light transport contributions and, if possible, enables the correction of such measurements via data-driven semi-automatic editing.
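
The following NumPy sketch is a purely illustrative, self-contained toy: it applies a classical maximum-entropy spectral estimate to a handful of trigonometric (Fourier) moments of a synthetic per-pixel transport row, to convey the flavor of the per-pixel, closed-form frequency-domain reconstruction described above. It is not the authors' implementation; the column count, peak positions, moment order, and the helper name mese_density are assumptions made for illustration only.

import numpy as np

def mese_density(moments, num_samples=512):
    """Maximum-entropy spectral estimate from trigonometric moments.

    moments: complex array (c_0, ..., c_n) with c_k = sum_phi p(phi) * exp(-1j*k*phi),
             here interpreted as moments of one camera pixel's response over projector columns.
    Returns sample angles phi and the reconstructed non-negative density p(phi).
    """
    n = len(moments) - 1
    idx = np.arange(n + 1)
    # Hermitian Toeplitz moment matrix B[j, k] = c_{j-k}, with c_{-k} = conj(c_k).
    c_ext = np.concatenate([np.conj(moments[:0:-1]), moments])  # c_{-n}, ..., c_n
    B = c_ext[(idx[:, None] - idx[None, :]) + n]
    # Solve B q = e_1; the estimate is Re(q_0) / (2*pi * |sum_k q_k exp(1j*k*phi)|^2).
    e1 = np.zeros(n + 1, dtype=complex)
    e1[0] = 1.0
    q = np.linalg.solve(B, e1)
    phi = np.linspace(0.0, 2.0 * np.pi, num_samples, endpoint=False)
    denom = np.abs(np.exp(1j * np.outer(phi, idx)) @ q) ** 2
    return phi, np.real(q[0]) / (2.0 * np.pi * denom)

# Toy per-pixel transport row over C projector columns: a strong direct peak
# plus a weaker interreflection peak (all numbers are made up).
C = 1024
cols = np.arange(C)
phi_c = 2.0 * np.pi * cols / C
transport = (np.exp(-0.5 * ((cols - 300) / 2.0) ** 2)
             + 0.2 * np.exp(-0.5 * ((cols - 700) / 3.0) ** 2))

# Trigonometric moments c_0, ..., c_8 of that row; in a capture setting such
# moments would be obtained from a small number of sinusoidal line patterns.
c = np.array([np.sum(transport * np.exp(-1j * k * phi_c)) for k in range(9)])

phi, density = mese_density(c)
peak_col = phi[np.argmax(density)] * C / (2.0 * np.pi)
print(f"strongest response near projector column {peak_col:.0f}")  # close to 300

In this toy setting, the tallest lobe of the recovered density marks the projector column of the direct reflection, while secondary lobes indicate global-transport contributions that could be masked or corrected as described in the paper.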

Images

Download paper

Supplementary material

BibTeX

@INPROCEEDINGS{Werner2019MBSL,
    author = {Werner, Sebastian and Iseringhausen, Julian and Callenberg, Clara and Hullin, Matthias B.},
     title = {Trigonometric Moments for Editable Structured Light Range Finding},
 booktitle = {Vision, Modeling, and Visualization 2019},
      year = {2019},
     month = oct,
  abstract = {Structured-light methods remain one of the leading technologies in high-quality 3D scanning,
              specifically for the acquisition of single objects and simple scenes. For more complex scene
              geometries, however, non-local light transport (e.g., interreflections, sub-surface scattering) comes
              into play, which leads to errors in the depth estimation. Probing the light transport tensor, which
              describes the global mapping between illumination and observed intensity under the influence of the
              scene, can help to understand and correct these errors, but requires extensive scanning. We aim to
              recover a 3D subset of the full 4D light transport tensor, which represents the scene as illuminated
              by line patterns, rendering the approach especially useful for triangulation methods. To this end, we
              propose a frequency-domain approach based on spectral estimation to reduce the number of required
              input images. Our method can be applied independently to each pixel of the observing camera, making
              it perfectly parallelizable with respect to the camera pixels. The result is a closed-form
              representation of the scene reflection recorded under line illumination, which, if necessary, masks
              pixels with complex global light transport contributions and, if possible, enables the correction of
              such measurements via data-driven semi-automatic editing.},
       url = {https://doi.org/10.2312/vmv.20191315},
       doi = {10.2312/vmv.20191315}
}