Here you can find a database of high-quality measured materials collected by our group. In contrast to our BTF datasets, these materials are processed into more compact and efficient representations, namely spatially varying BRDFs (SVBRDFs). We use this dataset to improve the costly post-processing of the raw measurements, in particular the time-consuming SVBRDF fitting, which we successfully accelerated from hours to seconds using a deep learning approach (PDF), trained on our UBOFAB19 dataset.
SVBRDFs are obtained by fitting the parameters of analytical BRDF models (e.g. the Ward BRDF) to the individual pixels of image-based measurements. The fitting is conventionally performed by non-linear optimization, which relies heavily on heuristics and is therefore not universally applicable. More importantly, the optimization requires long processing times on the order of several hours per material, sometimes up to half a day. To put this in numbers: the cumulative processing time for the presented UBOFAB19 dataset was 56 days on a single workstation.
We develop alternative approaches to the SVBRDF fitting problem with the help of deep learning. In a first work we were able to reduce the processing times from several hours to a few minutes per material.
To spark further research, we make the training data for our methods publicly available. The individual datasets can be found below:
Bonn Appearance Benchmark. Sebastian Merzbach and Reinhard Klein. The Eurographics Association, 2020. [Details] [Paper]
Fabric Appearance Benchmark. Sebastian Merzbach and Reinhard Klein. In proceedings of Eurographics 2020 - Posters, The Eurographics Association, May 2020. [Details] [Paper]
Real-time Multi-material Reflectance Reconstruction for Large-scale Scenes under Uncontrolled Illumination from RGB-D Image Sequences. Lukas Bode, Sebastian Merzbach, Patrick Stotko, Michael Weinmann, and Reinhard Klein. In proceedings of International Conference on 3D Vision (3DV), IEEE, pages 709-718, Sept. 2019. [Details] [Paper]
Learned Fitting of Spatially Varying BRDFs. Sebastian Merzbach, Max Hermann, Martin Rump, and Reinhard Klein. In: Computer Graphics Forum (July 2019), 38:4. [Details] [Paper]
Neural Appearance Synthesis and Transfer. Ilya Mazlov, Sebastian Merzbach, Elena Trunz, and Reinhard Klein. In proceedings of Workshop on Material Appearance Modeling (Holly Rushmeier and Reinhard Klein, eds.), The Eurographics Association, July 2019. [Details] [Paper]
Using Moments to Represent Bounded Signals for Spectral Rendering. Christoph Peters, Sebastian Merzbach, Johannes Hanika, and Carsten Dachsbacher. In: ACM Transactions on Graphics (July 2019), 38:4. [Details]
This is the fabric (release version 1) of the Bonn SVBRDFs dataset.
Update 2020/07/02: we have released new AxF SVBRDF fits with spatially varying Fresnel, which can be downloaded with dedicated scripts; for your experiments, make sure to use the mat????_svfresnel.axf files.
Update 2020/01/29: updated download links for Linux / Windows
Below you can download SVBRDFs obtained with X-Rite's Pantora fitting software, along with the underlying calibrated TAC7 measurements. All materials, both as calibrated measurements and as SVBRDF fits, are freely available for research purposes.
The TAC7 is an appearance scanner equipped with four panchromatic cameras and, as illumination, a hemisphere covered with 29 white point-like LED light sources, as well as a strip-like light source (called linear light source, LLS) that can be rotated to arbitrary inclination angles above a freely turnable sample holder. The latter optionally holds a diffuser plate that can be illuminated from below to capture translucency. To capture the surface geometry, there is a projector for structured light measurements. As the panchromatic cameras feature no color filters themselves and are thus sensitive to photons from the entire visible spectrum, color information is captured by placing filters in front of five of the LED light sources. During acquisition, the TAC7 captures exposure series for each individual LED, both unfiltered and color-filtered, as well as for the linear light source, which is rotated to various inclination angles above the sample. This is repeated for multiple turntable orientations. We classify the captured images into different modalities according to their color and light-geometry types, i.e. panchromatic vs. polychromatic and point- vs. linear-light-source-lit images, where only some of the point-lit images are captured as polychromatic images. This measurement strategy allows capturing various reflectance properties, including anisotropy, the Fresnel effect, and (partial) transparency.
All materials are captured using the anisotropic fabric preset in Pantora, which determines the measurement strategy. The granularity of the linear light source sampling additionally depends on a glossiness preset.
Due to space limitations, we can for now only provide the subsets that correspond to low glossiness for all materials (i.e. 14 individual LLS steps; note that this also corresponds to the subset of the data that we used in our "Learned Fitting" paper).
The total numbers of resulting images per modality are:

panchromatic (pan): 388 = 4 * 3 * 24 + 4 * 5 * 5
polychromatic (poly): 100 = 4 * 5 * 5
linear light source (LLS): 280 = 4 * 5 * 14
All images are radiometrically calibrated, i.e. all illumination spectra are calibrated out of the data. The same holds for any other illumination or observation effects like light falloff or lens vignetting. Those have been removed through white-frame calibration. Furthermore, the measured exposure series have been carefully combined into high dynamic range (HDR) images, which should contain no artifacts due to under- or overexposure. Polychromatic images are provided in linear sRGB color space under equal energy illuminant E. A conversion matrix between this RGB and the CIE XYZ color space is contained in each AxF file. Note that for each polychromatic image, there is also a corresponding panchromatic image, illuminated by an unfiltered white LED.
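As a small illustration, converting a polychromatic image to CIE XYZ is a single per-pixel matrix product once the 3 x 3 matrix has been read from the AxF file. Below is a minimal sketch; M_rgb2xyz and rgb are placeholders, not part of the dataset files:

import numpy as np

# placeholders: in practice, read the matrix from the AxF file (e.g. via the
# AxF SDK) and load a polychromatic image as shown further below
M_rgb2xyz = np.eye(3, dtype=np.float32)
rgb = np.ones((4, 4, 3), dtype=np.float32)  # height x width x 3, linear RGB

# apply the 3 x 3 conversion matrix to every pixel
xyz_img = np.einsum('ij,hwj->hwi', M_rgb2xyz, rgb)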
All post-processed TAC7 images contain undistorted pixels, i.e. the lens distortions of the optical systems are already calibrated out of the data. This has the great advantage that each pixel can be directly associated with a 3D point in world coordinates, and light sources and cameras can be assumed to be point-like (except for the linear light source, which is a simple elongated area light source), allowing for straightforward computation of the light and view sampling directions for each pixel. Furthermore, the pixels are reprojected from all cameras onto the 3D surface as viewed from the top camera. This guarantees that pixels are in correspondence when viewed from all four cameras under the different turntable rotations.
Please note:
World coordinates are always specified in millimeters.
In some of the images, some pixels might be occluded due to the TAC7 device geometry.
Such occlusions can be caused by the sample holder at very shallow viewing angles, or by the linear light source when it is near the zenith and in front of the top camera.
Occluded pixels are marked by setting them to -1.
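A minimal masking sketch (img stands in for any loaded measurement image):

import numpy as np

# stand-in for a loaded measurement image (see the loading code below)
img = np.array([[0.5, -1.0], [0.2, 0.7]], dtype=np.float32)

valid = img != -1               # True wherever the pixel was observed
mean_valid = img[valid].mean()  # example: statistics over observed pixels only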
Due to the large space requirements, the TAC7 measurement images are stored as compressed half-precision floating point images using the OpenEXR format.
They can be read using the OpenEXR library, which provides C/C++ and Python bindings, or with a simplified wrapper like pyexr.
Python implementations (see below) can be rather slow. We therefore highly recommend using the header-only tinyexr library by Syoyo Fujita, which can easily be used to speed up I/O dramatically, see e.g. this Matlab MEX solution.
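For quick experiments, loading one of the measurement files with pyexr is a one-liner (pyexr.read is also used in the examples below):

import pyexr

# returns a height x width x channels NumPy array
img = pyexr.read('mat0003_pan.exr')
print(img.shape)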
The SVBRDFs on this website are provided in the Appearance Exchange Format (AxF), which can be viewed using the Pantora AxF Viewer. There is also an SDK, available for download on request, which allows accessing the individual parameter maps and metadata, as well as evaluating the SVBRDFs. The materials are represented with the Geisler-Moroder variant of the anisotropic Ward BRDF model, extended with a Fresnel reflection term. A detailed description can be found in the AxF Whitepaper, as well as in the supplementary material of our paper below.
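For intuition, here is a minimal sketch of the classic anisotropic Ward BRDF. Note that this is not the exact model used in the AxF files: those use the Geisler-Moroder variant with a different normalization and an additional Fresnel term, for which we refer to the AxF Whitepaper and our supplementary material.

import numpy as np

def ward_brdf(i, o, n, t, b, rho_d, rho_s, alpha_x, alpha_y):
    """Classic anisotropic Ward BRDF (not the Geisler-Moroder variant).
    i: light direction, o: view direction, n: normal, t/b: tangent frame;
    all unit 3-vectors. rho_d/rho_s: diffuse/specular albedo."""
    h = i + o
    h = h / np.linalg.norm(h)  # half vector
    cos_i, cos_o, cos_h = n @ i, n @ o, n @ h
    # anisotropic Gaussian-like specular lobe in the tangent frame
    expo = -(((t @ h) / alpha_x) ** 2 + ((b @ h) / alpha_y) ** 2) / cos_h ** 2
    spec = rho_s * np.exp(expo) / (4 * np.pi * alpha_x * alpha_y
                                   * np.sqrt(cos_i * cos_o))
    return rho_d / np.pi + spec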
Please note that the originally provided SVBRDFs only have a spatially uniform Fresnel F0 coefficient.
For the more homogeneous fabrics this is a valid simplification, but for some materials it is certainly an oversimplification.
We have since released all materials with spatially varying F0 (see the update above).
SVBRDF AxF files named like mat0378.axf were fit with a spatially homogeneous Fresnel F0.
Those are the original SVBRDFs used in our work "Learned Fitting of Spatially Varying BRDFs" and are part of the original UBOFAB19 dataset.
Newer, improved fits with a spatially varying Fresnel F0 (mat0378_svfresnel.axf) can be downloaded using the scripts below that are marked with "(SV Fresnel)".
Those fits are released to match the SVBRDFs in the newer APPBENCH release and should be preferred over the spatially uniform F0 fits.
Here we provide Python code for loading all relevant data.
The external dependencies and a few auxiliary functions are:
import numpy as np
import pyexr
import scipy.io as spio

# for visualizations only
from matplotlib import pyplot as plt


def loadmat(filename):
    """wrapper around scipy.io.loadmat that avoids conversion of nested
    matlab structs to np.arrays"""
    mat = spio.loadmat(filename, struct_as_record=False, squeeze_me=True)
    for key in mat:
        if isinstance(mat[key], spio.matlab.mio5_params.mat_struct):
            mat[key] = toDict(mat[key])
    return mat


def toDict(matobj):
    """construct python dictionary from matobject"""
    newDict = {}
    for fn in matobj._fieldnames:
        val = matobj.__dict__[fn]
        if isinstance(val, spio.matlab.mio5_params.mat_struct):
            newDict[fn] = toDict(val)
        else:
            newDict[fn] = val
    return newDict


def normalize(dirs):
    """normalize a height x width x 3 array of direction vectors"""
    return dirs / np.linalg.norm(dirs, axis=2)[:, :, None]
In the following, we describe the metadata and calibration data, as well as the actual calibrated images, contained in the different files, using material mat0003 as an example.
The material metadata comprises the material type, the glossiness level (the preselection that influences the measurement strategy, essentially defining the LLS granularity), an averaged Fresnel F0 coefficient, and the pixel and physical dimensions, stored in mat0003.txt:
name: material 0003
fabric type: lycra
glossiness: medium
Fresnel F0: 0.1026
pixel dimensions: 1199 px x 1306 px
physical dimensions: 8.20 cm x 8.90 cm
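A minimal parser sketch for this "key: value" layout:

def load_metadata(filename):
    """Parse the simple 'key: value' material metadata text file."""
    meta = {}
    with open(filename) as f:
        for line in f:
            if ':' in line:
                key, value = line.split(':', 1)
                meta[key.strip()] = value.strip()
    return meta

meta = load_metadata('mat0003.txt')
print(meta['fabric type'])  # 'lycra'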
During the measurements, the sample is rotated under the cameras and light sources using the turntable. However, for evaluating the SVBRDFs, we use a different convention: We specify a turntable rotation of 0 degrees as a reference orientation, for which we provide the pixel world coordinates, and calculate light and view directions with respect to these pixel coordinates by rotating the light and camera positions in the reverse turntable direction.
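Purely for illustration (the calibration file described below already provides pre-rotated positions), rotating a position into the reference frame could look as follows, assuming the turntable rotates about the world z-axis:

import numpy as np

def rotate_z(p, degrees):
    """Rotate a 3D position about the z-axis (assumed turntable axis)."""
    rad = np.deg2rad(degrees)
    c, s = np.cos(rad), np.sin(rad)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]]) @ p

# a light position measured at turntable rotation theta, expressed in the
# reference (rotation 0) pixel frame by rotating in the reverse direction:
# L_ref = rotate_z(L_pos, -theta)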
Pixel world coordinates and the directional sampling are contained in:
mat0003_xyz_rot000.exr
mat0003_calibration.mat
The pixel world coordinates are stored in the three channels of the mat0003_xyz_rot000.exr file.
All other geometric calibration information is provided in a dictionary in the file mat0003_calibration.mat, where the keys indicate the turntable rotation in degrees (e.g. 'rot045').
Nested under these keys are dictionaries with keys for the individual LED or camera IDs (e.g. 'il09' and 'cv01'), as well as for the linear light source.
The linear light source emits from a rectangular diffuser, which is about 30 cm long and a few millimeters wide, and thus cannot be approximated by a single point-like sample like the other LEDs; instead, it requires sampling multiple light directions along the line, or, to be more accurate, across the quad. We specify the used linear light source inclination angles in an array under the key "llsAnglesDegrees" and the corresponding corner positions in arrays under each turntable rotation stage with keys "llsCorners".
In Python, MAT files can be loaded using the scipy.io module in combination with the above auxiliary functions. Below is example code showing this for a combination of camera 1 and LED 9 under 45 degrees turntable rotation, as well as how to query the LLS corner coordinates (the sampling of directions towards the spanned LLS quad is sketched after this example):
# load geometric calibration data
calib = loadmat('mat0003_calibration.mat')
xyz = pyexr.read('mat0003_xyz_rot000.exr')

# camera 1 position for 45 degrees turntable rotation:
V01 = calib['rot045']['cv01']

# LED 9 position for 45 degrees turntable rotation:
L09 = calib['rot045']['il09']

# linear light source corner positions for 22 degrees LLS angle
# and 45 degrees turntable rotation:
ind = np.nonzero(calib['llsAnglesDegrees'] == 22)[0][0]
llsCorners022Degrees = calib['rot045']['llsCorners'][:, :, ind]  # 3 x 4 array

# compute normalized light and view directions for all pixels:
L = normalize(L09[None, None, :] - xyz)  # height x width x 3 array
V = normalize(V01[None, None, :] - xyz)  # height x width x 3 array

# LLS directions should be sampled in the quad spanned by the 4 corners
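As announced above, here is a minimal sketch of sampling directions towards the LLS quad. The bilinear ordering of the four corner columns is an assumption, so verify it against your data:

def sample_lls_directions(xyz, corners, n_u=8, n_v=2):
    """Sample normalized per-pixel directions towards points on the LLS quad.
    corners: 3 x 4 array of corner positions (columns); the corner ordering
    assumed below is hypothetical -- verify it for your data."""
    c = corners.T  # 4 x 3 array of corner positions
    dirs = []
    for u in np.linspace(0, 1, n_u):
        for v in np.linspace(0, 1, n_v):
            # bilinear interpolation across the quad
            p = ((1 - u) * (1 - v) * c[0] + u * (1 - v) * c[1]
                 + u * v * c[2] + (1 - u) * v * c[3])
            dirs.append(normalize(p[None, None, :] - xyz))
    return dirs  # list of height x width x 3 direction arrays

lls_dirs = sample_lls_directions(xyz, llsCorners022Degrees)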
Note that the XYZ coordinates and light/camera positions are provided redundantly: we also provide the world coordinates of each pixel under each turntable rotation stage (rotation 0 in mat0003_xyz_rot000.exr; rotations 45, 90, 135 and 180 in mat0003_xyz_others.exr). Instead of the convention above, one can therefore also load the light and camera positions once for the reference orientation (i.e. for turntable rotation 0) and compute the per-pixel light and view directions from the rotated pixel coordinates. Contrary to our convention, this requires loading a larger coordinate array for each of the five rotation stages, instead of only one.
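A sketch of this alternative; the channel layout assumed for mat0003_xyz_others.exr is hypothetical, so inspect the file's channel names to find the channels of a given rotation:

# light and camera positions once, for the reference orientation (rotation 0)
V01_ref = calib['rot000']['cv01']
L09_ref = calib['rot000']['il09']

# per-pixel world coordinates under 45 degrees turntable rotation; selecting
# the first three channels is an assumption -- check the channel names
xyz_others = pyexr.open('mat0003_xyz_others.exr')
xyz_rot045 = xyz_others.get(group='all')[:, :, :3]

L_alt = normalize(L09_ref[None, None, :] - xyz_rot045)
V_alt = normalize(V01_ref[None, None, :] - xyz_rot045)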
Along with the above pixel coordinates, which are obtained by structured light measurements in the TAC7, we also specify coordinates for each pixel that have been refined during the Pantora fitting process.
These coordinates should not be used as input for any kind of fitting procedure, as they result from the Pantora fits; they should instead be used for evaluation purposes.
They are provided in the same format as the unrefined pixel coordinates in the following files:
mat0003_xyz_refined_rot000.exr
mat0003_xyz_refined_others.exr
Polychromatic images are captured under the LEDs with indices 26, 27, 28, 31 and 32.
We provide the mapping from polychromatic images to their corresponding panchromatic images as an array of 100 integers, where the i-th entry specifies the index of the panchromatic image corresponding to the i-th polychromatic image.
Please note that we use 1-based indexing here!
Panchromatic images can be obtained from polychromatic ones via weighted averaging of the three RGB channels.
The weights depend both on the camera and the LED index due to slight variations in spectral responses and emission spectra, and are thus stored in a dictionary with keys corresponding to combinations of camera and light indices (e.g. "cv01_il026").
The mapping and conversion weights are stored under the keys "poly2panIndices" and "poly2panWeights" in the file:
mat0003_calibration.mat
The undistorted and radiometrically calibrated HDR images are stored individually for the three modalities in the following files:
mat0003_pan.exr
mat0003_poly.exr
mat0003_lls.exr
These EXR files provide channel names that encode the camera and light indices (or the LLS inclination angle in degrees), as well as the turntable rotation in degrees. Together with the calibration data explained above, these indices can be used to reconstruct the directional sampling for each pixel in each image.
Below we provide example code for reading panchromatic and polychromatic images, as well as converting a polychromatic to a panchromatic one for comparison purposes:
# read polychromatic -> panchromatic calibration data
poly2pan = loadmat('mat0003_calibration.mat')
poly2panIndices = poly2pan['poly2panIndices']
poly2panWeights = poly2pan['poly2panWeights']

# read all polychromatic channels in IEEE 754 half precision float format
poly = pyexr.open('mat0003_poly.exr')
polyChannels = poly.channel_map['all']
poly = poly.get(group='all', precision=pyexr.HALF)

# read all panchromatic channels in IEEE 754 half precision float format
pan = pyexr.open('mat0003_pan.exr')
panChannels = pan.channel_map['all']
pan = pan.get(group='all', precision=pyexr.HALF)

# extract the first polychromatic image
print(polyChannels[:3])
poly_cv01_il026_rot000 = poly[:, :, :3]

# get the panchromatic image corresponding to the first polychromatic image
# (mind the 1-based indexing of poly2panIndices)
panInd = poly2panIndices[0]
pan_cv01_il026_rot000 = pan[:, :, panInd - 1]

# convert the polychromatic to a panchromatic image using the camera- and
# light-specific conversion weights
pan_cv01_il026_rot000_converted = np.einsum(
    'c,xyc->xy',
    poly2panWeights['cv01_il026'].astype(np.float16),
    poly_cv01_il026_rot000)

# show the first polychromatic image next to the corresponding measured
# and converted panchromatic images
scale = 10
plt.figure()
plt.subplot(1, 3, 1)
plt.imshow(scale * poly_cv01_il026_rot000.astype(np.float32))
plt.title(polyChannels[0][:-2])
plt.subplot(1, 3, 2)
plt.imshow(scale * pan_cv01_il026_rot000[:, :, None].repeat(3, axis=2).astype(np.float32),
           cmap=plt.get_cmap('gray'))
plt.title(panChannels[panInd - 1])
plt.subplot(1, 3, 3)
plt.imshow(scale * pan_cv01_il026_rot000_converted[:, :, None].repeat(3, axis=2).astype(np.float32),
           cmap=plt.get_cmap('gray'))
plt.title('poly -> pan')
If you want to download individual SVBRDFs and/or measurements, you can browse the entire dataset below; download links can be found on the detail pages. Alternatively, we provide scripts for Linux / Windows that automatically download all files of the training and validation splits. The scripts rely on wget, which should already be installed on Linux systems and can be downloaded for Windows. The materials are sorted by fabric type in each split; the different categories and their occurrences are listed for both splits and can be highlighted interactively.
If you make use of this dataset, please cite the following paper:
@article{merzbach2019learned,
  author  = {Merzbach, Sebastian and Hermann, Max and Rump, Martin and Klein, Reinhard},
  title   = {Learned Fitting of Spatially Varying BRDFs},
  journal = {Computer Graphics Forum},
  volume  = {38},
  number  = {4},
  year    = {2019},
  month   = jul,
  url     = {https://cg.cs.uni-bonn.de/svbrdfs/}
}
This is the fabric appearance benchmark of the Bonn SVBRDFs dataset.
It is being used in the Fabric Appearance Challenge.
Update 2020/06/03: fixed an issue with the polychromatic (matXXXX_poly.exr) images; please re-download if your files are older than this date!
Most details of the APPBENCH dataset are very similar to the UBOFAB19 dataset, so please read its description first.
In contrast to the original UBOFAB19 fits, the APPBENCH SVBRDFs are fit with a spatially varying Fresnel F0; likewise, we release such improved SVBRDFs for UBOFAB19 as a new revision of that dataset. This improves usability for methods trained to directly regress the SVBRDF parameters, as uniform Fresnel coefficients could lead to ambiguous training samples.
Below you can download SVBRDFs obtained with X-Rite's Pantora Material Hub fitting software, along with the underlying calibrated TAC7 measurements. All materials, both as calibrated measurements and as SVBRDF fits, are freely available for research purposes.
Downloading works the same way as for UBOFAB19: browse the entire dataset below and find download links on the detail pages, or use the provided Linux / Windows scripts (relying on wget) to automatically download all files of the training and validation splits. The materials are again sorted by fabric type in each split; the different categories and their occurrences are listed for both splits and can be highlighted interactively.
If you make use of this dataset, please cite the following paper:
@inproceedings{merzbach2020appbench,
  author    = {Merzbach, Sebastian and Klein, Reinhard},
  title     = {Fabric Appearance Benchmark},
  booktitle = {Eurographics 2020 - Posters},
  year      = {2020},
  month     = may,
  publisher = {The Eurographics Association},
  url       = {https://cg.cs.uni-bonn.de/appbench/}
}