View-dependent Far-Field Level of Detail Rendering for Urban Models

In: Computer Graphics & Geometry (2008), 10:3
 

Abstract

Rendering of terrain meshes has been an active field of research for several years. Real-time, out-of-core terrain rendering systems based on a hierarchy of level-of-detail representations, in which each node contains a simplified version of the terrain geometry of its sub-nodes, are available today. Although they provide good results at close view distances, these techniques are not well suited to rendering distant views of urban models: once the error threshold exceeds the building size, the buildings are removed by the simplifier and are represented only in the terrain texture. This results in a loss of viewing-direction-dependent information and thus a change in the appearance of the city. To overcome this problem, we propose the use of surface light fields as a view-dependent, image-based level-of-detail representation for far distances. We compare two compression techniques, the Per Cluster Factorization and the Linear Mode-3 Tensor Approximation, and we show that the surface light fields are more compact, can be rendered at higher frame rates, and exhibit fewer aliasing artifacts than the geometry they represent.

The paper is available at http://www.cgg-journal.com/2008-3/index.htm
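The per-cluster factorization mentioned in the abstract can be illustrated with a small sketch: a sampled surface light field is treated as a (surface sample x view direction) radiance matrix, the surface samples are partitioned into clusters, and each cluster's sub-matrix is compressed with a truncated SVD. This is only a minimal toy illustration under assumed data sizes, not the paper's actual implementation; all names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy surface light field: one radiance value per
# (surface sample, view direction) pair. Sizes are illustrative only.
n_samples, n_views, n_clusters, rank = 120, 64, 4, 4
slf = rng.random((n_samples, n_views))

# Per-cluster factorization sketch: partition the surface samples into
# clusters and keep a rank-truncated SVD of each cluster's sub-matrix.
clusters = np.array_split(np.arange(n_samples), n_clusters)
compressed = []
for idx in clusters:
    U, s, Vt = np.linalg.svd(slf[idx], full_matrices=False)
    # Store only the leading `rank` factors for this cluster.
    compressed.append((U[:, :rank] * s[:rank], Vt[:rank]))

# Reconstruct the low-rank approximation and measure its relative error.
recon = np.vstack([Us @ Vt for Us, Vt in compressed])
rel_err = np.linalg.norm(slf - recon) / np.linalg.norm(slf)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The storage cost per cluster drops from samples x views values to rank x (samples + views); at render time each cluster's radiance can be reconstructed from its small factor matrices.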

Bibtex

@ARTICLE{ruiters-2008-view-dependent,
    author = {Ruiters, Roland},
     title = {View-dependent Far-Field Level of Detail Rendering for Urban Models},
   journal = {Computer Graphics {\&} Geometry},
    volume = {10},
    number = {3},
      year = {2008},
  abstract = {Rendering of terrain meshes has been an active field of research for several years. Real-time,
              out-of-core terrain rendering systems based on a hierarchy of level-of-detail representations,
              in which each node contains a simplified version of the terrain geometry of its sub-nodes, are
              available today. Although they provide good results at close view distances, these techniques
              are not well suited to rendering distant views of urban models: once the error threshold
              exceeds the building size, the buildings are removed by the simplifier and are represented
              only in the terrain texture. This results in a loss of viewing-direction-dependent information
              and thus a change in the appearance of the city. To overcome this problem, we propose the use
              of surface light fields as a view-dependent, image-based level-of-detail representation for
              far distances. We compare two compression techniques, the Per Cluster Factorization and the
              Linear Mode-3 Tensor Approximation, and we show that the surface light fields are more
              compact, can be rendered at higher frame rates, and exhibit fewer aliasing artifacts than the
              geometry they represent.}
}