Rapid material capture through sparse and multiplexed measurements

In: Computers & Graphics (June 2018), 73, 26–36

Abstract

Among the many models for material appearance, data-driven representations like bidirectional texture functions (BTFs) play an important role as they provide accurate real-time reproduction of complex light transport effects such as interreflections. However, their acquisition involves time-consuming capturing of many thousands of bidirectional samples in order to avoid interpolation artifacts. Furthermore, high dynamic range imaging including many and long exposure steps is necessary in the presence of low albedo or self-shadowing. So far, these problems have been dealt with separately by means of sparse reconstruction and multiplexed illumination techniques, respectively. Existing methods rely on data-driven models learned on data that has been range-reduced in a way that made their simultaneous application impossible. In this paper, we address both problems at once through a novel method for learning data-driven appearance models, based on moving the dynamic range reduction from the data to the metric. Specifically, we learn models by minimizing the relative L2 error on the original data instead of the absolute L2 error on range-reduced data. We demonstrate that the models thus obtained allow for faithful reconstruction of material appearance from sparse and illumination-multiplexed measurements, greatly reducing both the number of images and the shutter times required. As a result, we are able to reduce acquisition times down to the order of minutes from what used to be the order of hours.
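
The core idea stated in the abstract, moving the dynamic range reduction from the data to the metric, can be illustrated with a small sketch. The Python snippet below is not the authors' implementation: it uses synthetic positive data in place of real BTF measurements, hypothetical function names (fit_relative_l2, fit_log_l2), and a simple alternating-least-squares solver. It only demonstrates the contrast between fitting a low-rank model under the relative L2 error on the original data versus the absolute L2 error on log-range-reduced data.

import numpy as np

# Synthetic high-dynamic-range stand-in for BTF data:
# rows = bidirectional samples, columns = texels.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=2.0, size=(200, 50))

def fit_relative_l2(X, rank, iters=30):
    # Minimize the relative L2 error sum_ij ((X_ij - Xhat_ij) / X_ij)^2
    # for a rank-`rank` model Xhat = U @ V. This is equivalent to a
    # weighted least-squares problem with per-entry weights 1/X_ij,
    # solved here by alternating least squares (a sketch, not the
    # paper's exact optimization).
    W = 1.0 / X
    U = rng.standard_normal((X.shape[0], rank))
    for _ in range(iters):
        V = np.column_stack([
            np.linalg.lstsq(U * W[:, j, None], X[:, j] * W[:, j], rcond=None)[0]
            for j in range(X.shape[1])])
        U = np.vstack([
            np.linalg.lstsq(V.T * W[i, :, None], X[i, :] * W[i, :], rcond=None)[0]
            for i in range(X.shape[0])])
    return U, V

def fit_log_l2(X, rank):
    # Baseline: absolute L2 on range-reduced (log) data. The resulting
    # model lives in log space, so it no longer combines linearly with
    # sums of images as produced by multiplexed illumination.
    U, s, Vt = np.linalg.svd(np.log(X), full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

U, V = fit_relative_l2(X, rank=5)
rel_err = np.mean(((X - U @ V) / X) ** 2)  # relative reconstruction error

The practical consequence, per the abstract, is that the relative-L2 model stays linear in radiance, so measurements taken under multiplexed illumination (which are sums of single-light images) remain consistent with the model, whereas a model fitted to log-compressed data does not admit such superposition.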


BibTeX

@ARTICLE{DENBROK201826,
    author = {den Brok, Dennis and Weinmann, Michael and Klein, Reinhard},
     pages = {26--36},
     title = {Rapid material capture through sparse and multiplexed measurements},
   journal = {Computers {\&} Graphics},
    volume = {73},
      year = {2018},
     month = jun,
  abstract = {Among the many models for material appearance, data-driven representations like bidirectional
              texture functions (BTFs) play an important role as they provide accurate real-time reproduction of
              complex light transport effects such as interreflections. However, their acquisition involves
              time-consuming capturing of many thousands of bidirectional samples in order to avoid interpolation
              artifacts. Furthermore, high dynamic range imaging including many and long exposure steps is
              necessary in the presence of low albedo or self-shadowing. So far, these problems have been dealt
              with separately by means of sparse reconstruction and multiplexed illumination techniques,
              respectively. Existing methods rely on data-driven models learned on data that has been
              range-reduced in a way that made their simultaneous application impossible. In this paper, we
              address both problems at once through a novel method for learning data-driven appearance models,
              based on moving the dynamic range reduction from the data to the metric. Specifically, we learn
              models by minimizing the relative L2 error on the original data instead of the absolute L2 error on
              range-reduced data. We demonstrate that the models thus obtained allow for faithful reconstruction
              of material appearance from sparse and illumination-multiplexed measurements, greatly reducing both
              the number of images and the shutter times required. As a result, we are able to reduce acquisition
              times down to the order of minutes from what used to be the order of hours.},
      issn = {0097-8493},
       url = {http://www.sciencedirect.com/science/article/pii/S0097849318300360},
       doi = {10.1016/j.cag.2018.03.003}
}