Multipath Inversion for Time-of-Flight Data

The goal of this project is to develop range imaging setups and reconstruction techniques that are not only robust to multi-path scattering, but also exploit multi-path contributions as an additional source of information. This requires advanced image formation models, as well as methods for solving the corresponding inverse problems.

Using echoes of light to turn walls into mirrors

To demonstrate the power of multi-path analysis on time-of-flight data, we developed an imaging setup consisting of a laser source and a standard time-of-flight camera, both modulated at high frequencies (tens of MHz). Such sensors are only sensitive to light that is modulated at the same frequency, and are virtually insensitive to ambient light.
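To make the measurement principle concrete, here is a minimal NumPy sketch of a continuous-wave correlation pixel. All constants and the single-return assumption are illustrative, not taken from our setup: the pixel multiplies the incoming signal with phase-shifted copies of its own reference, and the round-trip phase falls out of the standard four-bucket arctangent estimator.

```python
import numpy as np

C = 3e8        # speed of light, m/s
F_MOD = 30e6   # modulation frequency, tens of MHz (illustrative value)

def tof_correlations(distance, n_phases=4):
    """Correlation samples a continuous-wave ToF pixel records for a
    single return from `distance`: the incoming signal is multiplied
    with phase-shifted copies of the sensor's own reference."""
    phase = 4 * np.pi * F_MOD * distance / C      # round-trip phase shift
    shifts = 2 * np.pi * np.arange(n_phases) / n_phases
    return np.cos(phase - shifts)                 # one sample per shift

def depth_from_correlations(c):
    """Four-bucket arctangent estimator: recover the round-trip phase,
    then convert back to metric distance."""
    phase = np.arctan2(c[1] - c[3], c[0] - c[2]) % (2 * np.pi)
    return phase * C / (4 * np.pi * F_MOD)

c = tof_correlations(2.0)
print(depth_from_correlations(c))        # ≈ 2.0 m
# Unmodulated (ambient) light adds the same offset to every bucket and
# cancels in the differences, hence the insensitivity to ambient light:
print(depth_from_correlations(c + 5.0))  # still ≈ 2.0 m
```

The last two lines show why such sensors reject ambient illumination: any constant offset drops out of the bucket differences.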

In earlier work, we showed that such cameras capture what can be considered a coded version of an optical impulse response, i.e., a video of light in flight, or transient image. Prior to this discovery, acquiring such data required expensive and extremely sensitive equipment (a streak camera and a femtosecond laser). It is well known that impulse responses contain valuable information about the geometry of a scene. For instance, researchers at the MIT Media Lab were able to use streak camera measurements to recover detailed geometry of objects outside the line of sight of both light source and camera.
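To illustrate the "coded impulse response" idea, the following sketch (with invented amplitudes and delays) shows that each correlation sample is simply the transient pixel projected onto a phase-shifted sinusoid at the modulation frequency, i.e., one Fourier coefficient of the light-in-flight signal.

```python
import numpy as np

# Transient pixel: optical impulse response i(t), here a direct return
# plus a weaker multi-path echo (hypothetical amplitudes and delays).
dt = 50e-12                  # 50 ps time bins
t = np.arange(2000) * dt     # time axis, 0-100 ns
i_t = np.zeros_like(t)
i_t[400] = 1.0               # direct path arriving at 20 ns
i_t[900] = 0.3               # indirect echo arriving at 45 ns

def correlation_sample(i_t, f_mod, psi):
    """A CW-ToF measurement is the impulse response projected onto a
    phase-shifted sinusoid at the modulation frequency: one Fourier
    coefficient of the transient, i.e. a 'coded' transient image."""
    ref = np.cos(2 * np.pi * f_mod * t - psi)
    return np.sum(i_t * ref) * dt

# Sweeping modulation frequency and phase collects many such
# projections, from which the transient image can be inverted.
for f in (10e6, 30e6, 90e6):
    print(f, correlation_sample(i_t, f, 0.0))
```

Each print line is one coded measurement; recovering `i_t` from many of them is the inverse problem solved in the transient imaging work.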

In our work on Diffuse Mirrors [Heide et al. 2014], we demonstrate that comparable results are possible even when no explicit time-resolved data is available. Our reconstruction method combines an elaborate forward model of light transport with our existing sensor model to infer scene geometry from light that has been diffusely reflected three times. The result is a density estimate for a hidden volume, obtained directly from correlation measurements taken by the time-of-flight camera, without ever reconstructing the transient image.
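The inversion step can be caricatured as a regularized linear solve. In the sketch below, a random matrix stands in for the actual three-bounce light-transport operator, and plain Tikhonov regularization replaces the more elaborate priors used in the paper; it only illustrates recovering a volumetric density directly from correlation-style measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in forward model: each row maps the hidden-volume density x to
# one correlation measurement. In the actual method this operator
# encodes three diffuse bounces plus the sensor's modulation; here it
# is a random matrix purely to illustrate the inversion.
n_meas, n_voxels = 120, 80
A = rng.random((n_meas, n_voxels))

x_true = np.zeros(n_voxels)
x_true[[10, 11, 40]] = [1.0, 0.8, 0.5]   # sparse hidden occupancy
b = A @ x_true                            # simulated measurements

# Tikhonov-regularized least squares:
#   x = argmin ||A x - b||^2 + lam * ||x||^2
lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ b)
x = np.clip(x, 0.0, None)                 # densities are nonnegative

print(np.abs(x - x_true).max())           # small reconstruction error
```

With enough well-conditioned measurements, the density estimate is recovered without ever forming a time-resolved intermediate, mirroring the structure (though not the sophistication) of the Diffuse Mirrors reconstruction.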


Figure: (left) 3D drawing of the imaging setup, in which camera (blue) and laser (red) both stare at a diffuse wall; from indirect reflections, we reconstruct objects in the unknown volume (green). (middle) Photo of objects placed in the reconstruction volume. (right) Reconstructed objects.


Publications

Lei Xiao, Felix Heide, Matthew O'Toole, Andreas Kolb, Matthias B. Hullin, Kiriakos N. Kutulakos, and Wolfgang Heidrich
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015

Mohit Gupta, Shree K. Nayar, Matthias B. Hullin, and Jaime Martín
In: ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 34(6), November 2015

Matthew O'Toole, Felix Heide, Lei Xiao, Matthias B. Hullin, Wolfgang Heidrich, and Kiriakos N. Kutulakos
In: ACM Transactions on Graphics (Proc. SIGGRAPH) 33(4), August 2014

Felix Heide, Lei Xiao, Wolfgang Heidrich, and Matthias B. Hullin
Diffuse Mirrors
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014