Per-Image Super-Resolution for Material BTFs
Abstract
Image-based appearance measurements are fundamentally limited in spatial resolution by the acquisition hardware. Due to the ever-increasing resolution of display hardware, high-resolution representations of digital material appearance are desirable for authentic renderings. In this paper, we demonstrate that high-resolution bidirectional texture functions (BTFs) for materials can be obtained from low-resolution measurements using single-image convolutional neural network (CNN) architectures for image super-resolution. In particular, we show that this approach works for high-dynamic-range data and produces consistent BTFs even though it operates on an image-by-image basis. Moreover, the CNN can be trained on down-sampled measured data, so no high-resolution ground-truth data, which would be difficult to obtain, is necessary. We train and test our method on a large-scale BTF database and evaluate it against the current state of the art in BTF super-resolution, finding superior performance.
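The training scheme described above — using down-sampled measured data as network input and the original measurements as targets, so that no high-resolution ground truth is needed — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2×2 box-filter downsampling operator, the image sizes, and the per-image pair construction are assumptions made for the sketch.

```python
import numpy as np

def downsample2x(img):
    """2x2 box-filter downsampling (an illustrative stand-in for the
    downsampling operator; the paper's exact filter may differ)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def make_training_pairs(btf_images):
    """Build one (low-res input, target) pair per BTF image: each
    measured image serves as the training target, its down-sampled
    copy as the corresponding network input."""
    return [(downsample2x(img), img) for img in btf_images]

# Toy BTF: 4 angular samples of single-channel HDR texture data
rng = np.random.default_rng(0)
btf = [rng.random((64, 64)).astype(np.float32) for _ in range(4)]
pairs = make_training_pairs(btf)
```

At inference time, the trained network would then be applied to the original (undegraded) measurements, one image at a time, to produce the super-resolved BTF.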
Result renderings
The videos below show path-traced renderings of the ground-truth (left) and our super-resolved (middle) BTFs, both uncompressed, alongside absolute difference images scaled by a factor of 2 (right). Bright flashing spots in the difference images are due to firefly removal.
Download Paper
Additional Material
- Fabric05 (left: GT, middle: OURS, right: difference x 2) (MPEG-4 video, 2.3 MB)
- Fabric09 (left: GT, middle: OURS, right: difference x 2) (MPEG-4 video, 2.2 MB)
- Leather08 (left: GT, middle: OURS, right: difference x 2) (MPEG-4 video, 2.0 MB)
- Leather09 (left: GT, middle: OURS, right: difference x 2) (MPEG-4 video, 1.7 MB)
Bibtex
@INPROCEEDINGS{denbrok2020iccp,
  author    = {den Brok, Dennis and Merzbach, Sebastian and Weinmann, Michael and Klein, Reinhard},
  title     = {Per-Image Super-Resolution for Material BTFs},
  booktitle = {IEEE International Conference on Computational Photography (ICCP)},
  year      = {2020},
  publisher = {IEEE},
  abstract  = {Image-based appearance measurements are fundamentally limited in spatial resolution by the acquisition hardware. Due to the ever-increasing resolution of display hardware, high-resolution representations of digital material appearance are desirable for authentic renderings. In this paper, we demonstrate that high-resolution bidirectional texture functions (BTFs) for materials can be obtained from low-resolution measurements using single-image convolutional neural network (CNN) architectures for image super-resolution. In particular, we show that this approach works for high-dynamic-range data and produces consistent BTFs even though it operates on an image-by-image basis. Moreover, the CNN can be trained on down-sampled measured data, so no high-resolution ground-truth data, which would be difficult to obtain, is necessary. We train and test our method on a large-scale BTF database and evaluate it against the current state of the art in BTF super-resolution, finding superior performance.}
}