Per-Image Super-Resolution for Material BTFs

In Proceedings of the IEEE International Conference on Computational Photography (ICCP), IEEE, 2020
 

Abstract

Image-based appearance measurements are fundamentally limited in spatial resolution by the acquisition hardware. Due to the ever-increasing resolution of display hardware, high-resolution representations of digital material appearance are desirable for authentic renderings. In the present paper, we demonstrate that high-resolution bidirectional texture functions (BTFs) for materials can be obtained from low-resolution measurements using single-image convolutional neural network (CNN) architectures for image super-resolution. In particular, we show that this approach works for high-dynamic-range data and produces consistent BTFs, even though it operates on an image-by-image basis. Moreover, the CNN can be trained on down-sampled measured data, so no high-resolution ground-truth data, which would be difficult to obtain, is required. We train and test our method on a large-scale BTF database and evaluate against the current state of the art in BTF super-resolution, finding superior performance.
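A key idea in the abstract is that training pairs can be produced from the measured data itself: the measured BTF images act as high-resolution targets, and their down-sampled versions serve as network inputs. The sketch below illustrates this pair construction with NumPy; the box-averaging kernel and the `factor` parameter are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def box_downsample(img, factor):
    """Downsample an HDR image (H, W, C) by an integer factor via box averaging.

    Box filtering is an assumption for illustration; the paper's actual
    downsampling kernel may differ.
    """
    h, w, c = img.shape
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]  # crop to a multiple of factor
    return img.reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

def make_training_pairs(btf_images, factor=4):
    """Create (low-res input, high-res target) pairs from measured BTF images.

    The measured images themselves are the targets, so no separately
    captured high-resolution ground truth is needed.
    """
    return [(box_downsample(img, factor), img) for img in btf_images]
```

Because the down-sampling is applied per image, the same construction works for every view/light combination in the BTF without any cross-image bookkeeping.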

Result renderings
The videos below show path-traced renderings of the ground-truth (left) and our super-resolved (middle) BTFs (both uncompressed), as well as absolute difference images scaled up by 2 (right). Bright flashing spots in the difference images are due to firefly removal.

Images

Download Paper

Additional Material

Bibtex

@INPROCEEDINGS{denbrok2020iccp,
     author = {den Brok, Dennis and Merzbach, Sebastian and Weinmann, Michael and Klein, Reinhard},
      title = {Per-Image Super-Resolution for Material BTFs},
  booktitle = {IEEE International Conference on Computational Photography (ICCP)},
       year = {2020},
  publisher = {IEEE},
   abstract = {Image-based appearance measurements are fundamentally limited in spatial resolution by the
               acquisition hardware. Due to the ever-increasing resolution of display hardware, high-resolution
               representations of digital material appearance are desirable for authentic renderings. In the
               present paper, we demonstrate that high-resolution bidirectional texture functions (BTFs) for
               materials can be obtained from low-resolution measurements using single-image convolutional neural
               network (CNN) architectures for image super-resolution. In particular, we show that this approach
               works for high-dynamic-range data and produces consistent BTFs, even though it operates on an
               image-by-image basis. Moreover, the CNN can be trained on down-sampled measured data, so no
               high-resolution ground-truth data, which would be difficult to obtain, is required. We train and
               test our method on a large-scale BTF database and evaluate against the current state of the art
               in BTF super-resolution, finding superior performance.}
}