Professor Dr. Reinhard Klein
Head of Computer Graphics Group

Friedrich-Ebert-Allee 144, Room I.55
D-53113 Bonn
Germany
Phone: +49 (0) 228 73-4201
Fax: +49 (0) 228 73-4212
Email: rk@REMOVETHISPART.cs.uni-bonn.de

Reinhard Klein studied Mathematics and Physics at the University of Tübingen, Germany, where he received his MS in Mathematics (Dipl.-Math.) in 1989 and his PhD in computer science in 1995. In 1999 he completed his Habilitation in computer science, also at the University of Tübingen, with a thesis in computer graphics. In September 1999 he became an Associate Professor at the University of Darmstadt, Germany, and head of the research group Animation and Image Communication at the Fraunhofer Institute for Computer Graphics. Since October 2000 he has been a professor at the University of Bonn and director of the Institute of Computer Science II.

Courses

Ongoing Projects

In this project we work on the analysis, synthesis and resynthesis of the optical material properties of cloth. By estimating domain-specific parameters such as the weaving pattern and yarn reflection properties from images, we obtain a cloth model which can both be visually resynthesized and intuitively edited. We develop new techniques in the context of physically based rendering and image analysis of cloth.
The image-based acquisition of complex optical material properties is one of the major research topics in our group. The goal of this project is the development of novel techniques for the efficient and high-fidelity capture of high-dimensional material representations such as the bidirectional texture function (BTF). Example data is publicly available in the BTF database Bonn.
In this project we strive to derive a statistical model of the space spanned by a database of measured BTFs. This way, we intend to develop a dramatically more general representation of materials than is currently available. The goal is to reparameterize the high-dimensional material space to allow perceptually meaningful interpolations between the acquired samples, i.e., to generate new materials that blend qualities of samples from the dataset.
Instead of a goal-driven acquisition that determines the devices and sensors, we let the sensors and the resulting available data determine the acquisition process. Data acquisition might become incidental to other tasks that the devices or people carrying the sensors perform. A variety of challenging problems need to be solved to exploit this huge amount of data, including: dealing with continuous streams of time-dependent data, finding means of integrating data from different sensors and modalities, detecting changes in data sets to create 4D models, harvesting data to go beyond simple 3D geometry, and researching new paradigms for interactive inspection capabilities with 4D data sets. In this project, we envision solutions to these challenges, paving the way for affordable and innovative uses of information technology in an evolving world sampled by ubiquitous visual sensors.
Our group deals with the efficient representation, management and visualization of 3D surface data that gets captured incrementally by an autonomously flying drone. This data will be integrated into a global 3D map.
3D acquisition devices usually produce unstructured point clouds as their primary output. A challenge in this context is the decomposition of the point-cloud data into known parts in order to introduce abstractions of the originally unorganized data. This information can be used for compression, recognition and reconstruction.
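A classic technique for this kind of decomposition is RANSAC-based primitive detection: repeatedly fit a candidate shape to a minimal random sample and keep the candidate supported by the most points. The sketch below is only a toy single-plane version with synthetic data and hypothetical parameters, not the group's actual pipeline:

```python
# Illustrative sketch: detecting one plane in an unstructured point cloud
# with a minimal RANSAC loop (toy example, synthetic data).
import random
import numpy as np

def ransac_plane(points, iters=200, threshold=0.05, seed=0):
    """Return ((normal, d), inlier_mask) of the best plane n·x = d found."""
    rnd = random.Random(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        # Sample three distinct points and derive a candidate plane.
        i, j, k = rnd.sample(range(len(points)), 3)
        p, q, r = points[i], points[j], points[k]
        n = np.cross(q - p, r - p)
        norm = np.linalg.norm(n)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        n = n / norm
        d = np.dot(n, p)
        # Score: points within `threshold` of the plane are inliers.
        inliers = np.abs(points @ n - d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Synthetic cloud: a noisy plane z = 0 plus scattered outliers.
gen = np.random.default_rng(1)
plane_pts = np.column_stack([gen.uniform(-1, 1, (200, 2)),
                             gen.normal(0, 0.01, 200)])
outliers = gen.uniform(-1, 1, (50, 3))
cloud = np.vstack([plane_pts, outliers])

plane, inliers = ransac_plane(cloud)
```

Running the detector on the synthetic cloud recovers the dominant plane (normal close to the z-axis) and marks most of the 200 planar samples as inliers; the remaining points would be fed back into the loop to detect further primitives.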
In this project we take an interactive visual approach to the shape analysis of 3D structures. The concrete application is the analysis of the skull morphology of European mice and rats based on high-resolution 3D scans.
To correctly simulate materials under arbitrary illumination, the light simulation in a virtual scene must be computed on a purely spectral basis. Modern rendering systems already do this. For a few classes of materials, spectral reflectance data has been acquired for a small number of light and view directions using spectrometer and gonioreflectometer setups; this is sometimes enough to fit analytical models to the measured data. But for anisotropic materials, or for materials with strong variations in the angular or spatial domain, there are currently no measurement setups at hand. Setups analogous to those based on RGB CCD cameras are impractical for spectral measurements because of the high cost of the cameras and light sources required. In this project we plan to combine RGB and spectral measurement methods to arrive at an efficient and practical measurement setup for spectral BTFs. Furthermore, algorithms for the analysis, compression and efficient rendering of such combined RGB-spectral data will be investigated.
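The need for a spectral basis can be illustrated with a small numerical experiment: multiplying illuminant and reflectance per wavelength and only then projecting to RGB gives a different answer than multiplying the two pre-projected RGB triples. The spectra and sensor curves below are synthetic Gaussian stand-ins, not real CIE data:

```python
# Toy illustration (synthetic spectra, Gaussian sensor curves): spectral
# light transport multiplies spectra per wavelength and projects to RGB
# only at the end; multiplying in RGB space loses spectral information.
import numpy as np

wl = np.linspace(400, 700, 61)                       # wavelengths in nm

def gauss(center, width=40):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera sensitivities (stand-ins for real sensor curves).
sens = np.stack([gauss(600), gauss(550), gauss(450)])  # R, G, B rows

def to_rgb(spectrum):
    # Sensitivity-weighted average of the spectrum per channel.
    return sens @ spectrum / sens.sum(axis=1)

illuminant = gauss(500, 120)      # broadband light source
reflectance = gauss(650, 20)      # narrow-band reddish reflectance

# Correct: multiply per wavelength, then project to RGB.
rgb_spectral = to_rgb(illuminant * reflectance)
# Approximation: project first, then multiply channel-wise.
rgb_naive = to_rgb(illuminant) * to_rgb(reflectance)

err = np.abs(rgb_spectral - rgb_naive).max()
```

With these spectra the two results differ noticeably in the red channel, and the discrepancy grows the spikier the spectra become, which is exactly why narrow-band or strongly varying materials demand spectral measurement and simulation.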

Completed Projects