Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities

Universität Bonn, 2016

Abstract

Interactive, realistic rendering of landscapes and cities differs substantially from classical terrain rendering. Due to the sheer size and detail of the data that need to be processed, realtime rendering (i.e., more than 25 images per second) is only feasible with level-of-detail (LOD) models. Even the design and implementation of efficient, automatic LOD generation is ambitious for such out-of-core datasets, considering the large number of scales covered in a single view and the necessity to maintain screen-space accuracy for a realistic representation. Moreover, users want to interact with the model based on semantic information, which needs to be linked to the LOD model. In this thesis I present LOD schemes for the efficient rendering of 2.5d digital surface models (DSMs) and 3d point clouds, a method for the automatic derivation of city models from raw DSMs, and an approach allowing semantic interaction with complex LOD models.

The hierarchical LOD model for digital surface models is based on a quadtree of precomputed, simplified triangle-mesh approximations. The proposed model is shown to support real-time rendering of very large and complex models with pixel-accurate details. Moreover, the necessary preprocessing is scalable and fast.

For 3d point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon representations. For each LOD, the algorithm detects planar regions in an adequately subsampled point cloud and models them as textured rectangles. Rendering the resulting hybrid model is an order of magnitude faster than comparable point-based LOD schemes.

To automatically derive a city model from a DSM, I propose a constrained mesh simplification. Apart from the geometric distance between the simplified and the original model, it evaluates constraints based on detected planar structures and their mutual topological relations. The resulting models are much less complex than the original DSM but still represent the characteristic building structures faithfully.

Finally, I present a method to combine semantic information with complex geometric models. My approach links the semantic entities to the geometric entities on the fly via coarser proxy geometries which carry the semantic information. Thus, semantic information can be layered on top of complex LOD models without an explicit attribution step.

All findings are supported by experimental results which demonstrate the practical applicability and efficiency of the methods.
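To illustrate the core idea behind the quadtree LOD model, the following is a minimal sketch of a screen-space-error-driven refinement: each node holds a precomputed simplified mesh tile with a known geometric error, and the traversal refines only where that error, projected onto the screen via a simple pinhole model, would exceed one pixel. All names, the data layout, and the precomputed `center_distance` field are illustrative assumptions, not the thesis' actual implementation.

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class QuadtreeNode:
    """One precomputed LOD tile: a simplified mesh approximation with a
    known world-space approximation error (hypothetical layout)."""
    geometric_error: float                      # max deviation from the original DSM
    center_distance: float                      # viewer-to-tile distance, assumed precomputed
    children: List["QuadtreeNode"] = field(default_factory=list)

def screen_space_error(node: QuadtreeNode, fov_y_rad: float,
                       viewport_height_px: float) -> float:
    """Project the node's world-space error to pixels (pinhole camera model)."""
    pixels_per_world_unit = viewport_height_px / (
        2.0 * node.center_distance * math.tan(fov_y_rad / 2.0))
    return node.geometric_error * pixels_per_world_unit

def select_lod(node: QuadtreeNode, fov_y_rad: float,
               viewport_height_px: float, tolerance_px: float = 1.0):
    """Collect the coarsest set of tiles whose on-screen error stays
    below the pixel tolerance; leaves are rendered as-is."""
    if (screen_space_error(node, fov_y_rad, viewport_height_px) <= tolerance_px
            or not node.children):
        return [node]
    selected = []
    for child in node.children:
        selected.extend(select_lod(child, fov_y_rad, viewport_height_px, tolerance_px))
    return selected
```

With this criterion, a distant tile with a large world-space error is still rendered at a coarse level, while the same error close to the viewer forces a descent into the children, which is what keeps the rendered result pixel-accurate without loading full detail everywhere.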

Download: http://hss.ulb.uni-bonn.de/2016/4475/4475.htm

BibTeX

@PHDTHESIS{wahl-2016-phd,
    author = {Wahl, Roland},
     title = {Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities},
      year = {2016},
    school = {Universit{\"a}t Bonn},
  abstract = {Interactive, realistic rendering of landscapes and cities differs substantially from classical
              terrain rendering. Due to the sheer size and detail of the data which need to be processed, realtime
              rendering (i.e. more than 25 images per second) is only feasible with level of detail (LOD) models.
              Even the design and implementation of efficient, automatic LOD generation is ambitious for such
              out-of-core datasets considering the large number of scales that are covered in a single view and
              the necessity to maintain screen-space accuracy for realistic representation. Moreover, users want
              to interact with the model based on semantic information which needs to be linked to the LOD model.
              In this thesis I present LOD schemes for the efficient rendering of 2.5d digital surface models
              (DSMs) and 3d point-clouds, a method for the automatic derivation of city models from raw DSMs, and
              an approach allowing semantic interaction with complex LOD models.
              The hierarchical LOD model for digital surface models is based on a quadtree of precomputed,
              simplified triangle mesh approximations. The proposed model is shown to support real-time
              rendering of very large and complex models with pixel-accurate details. Moreover, the
              necessary preprocessing is scalable and fast.
              For 3d point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon
              representations. For each LOD, the algorithm detects planar regions in an adequately subsampled
              point cloud and models them as textured rectangles. The rendering of the resulting hybrid model is
              an order of magnitude faster than comparable point-based LOD schemes.
              To automatically derive a city model from a DSM, I propose a constrained mesh simplification. Apart
              from the geometric distance between simplified and original model, it evaluates constraints based on
              detected planar structures and their mutual topological relations. The resulting models are much
              less complex than the original DSM but still represent the characteristic building structures
              faithfully.
              Finally, I present a method to combine semantic information with complex geometric models. My
              approach links the semantic entities to the geometric entities on-the-fly via coarser proxy
              geometries which carry the semantic information. Thus, semantic information can be layered on top of
              complex LOD models without an explicit attribution step.
              All findings are supported by experimental results which demonstrate the practical applicability and
              efficiency of the methods.},
       url = {http://hss.ulb.uni-bonn.de/2016/4475/4475.htm}
}