Harvest4D - Harvesting Dynamic 3D Worlds from Commodity Sensor Clouds



The current acquisition pipeline for visual models of 3D worlds follows the paradigm of planning a goal-oriented acquisition, sampling on site, and post-processing. The digital model of an artifact (an object, a building, up to an entire city) is produced by planning a specific scanning campaign, carefully selecting the (often costly) acquisition devices, performing the on-site acquisition at the required resolution, and then post-processing the acquired data to produce a cleaned-up, triangulated, and textured model. In the future, however, we will be faced with the ubiquitous availability of sensing devices — smartphones, commodity stereo cameras, cheap aerial acquisition platforms, and more — that deliver diverse data streams which need to be processed and displayed in a new way.

We therefore propose a radical paradigm change in acquisition and processing technology: instead of a goal-driven acquisition that determines the devices and sensors, we let the sensors and the resulting available data determine the acquisition process. Data acquisition may become incidental to the other tasks that the devices, or the people carrying them, perform. A variety of challenging problems must be solved to exploit this huge amount of data, including: dealing with continuous streams of time-dependent data, integrating data from different sensors and modalities, detecting changes in data sets to create 4D models, harvesting data to go beyond simple 3D geometry, and researching new paradigms for interactive inspection of 4D data sets. In this project, we envision solutions to these challenges, paving the way for affordable and innovative uses of information technology in an evolving world sampled by ubiquitous visual sensors.

Our approach is high-risk and an enabling factor for future visual applications. The focus is clearly on basic research questions, laying the foundation for the new paradigm of incidental 4D data capture.

