View-dependent Far-Field Level of Detail Rendering for Urban Models

In: Central European Seminar on Computer Graphics for Students (CESCG 2008) (Apr. 2008)
 

Abstract

Rendering of terrain meshes has been an active field of research for several years. Real-time, out-of-core terrain rendering systems based on hierarchical level of detail representations, created by continuously simplifying the terrain geometry, are available today. Although they provide good results at close view distances, these techniques are not well suited to rendering distant views of urban models: once the error threshold exceeds the building size, the buildings are removed by the simplifier and remain represented only in the terrain texture. This results in a loss of viewing-direction-dependent information and thus a change in the appearance of the city. To overcome this problem, we propose the use of surface light fields as a view-dependent, image-based level of detail representation for far distances. We compare two compression techniques, the Per Cluster Factorization and the Linear Mode-3 Tensor Approximation, and show that the surface light fields are more compact, can be rendered at higher frame rates, and exhibit fewer aliasing artifacts than the geometry they represent.
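A linear mode-3 tensor approximation, as named in the abstract, can be sketched generically as a truncated SVD of the tensor's mode-3 unfolding. The following is a hypothetical illustration only: the tensor layout (texel u × texel v × view direction), rank choice, and all names are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def mode3_unfold(T):
    """Unfold a 3-way tensor T (I x J x K) into a (K, I*J) matrix,
    with rows indexed by the third (view-direction) mode."""
    I, J, K = T.shape
    return T.reshape(I * J, K).T

def mode3_fold(M, shape):
    """Inverse of mode3_unfold: refold a (K, I*J) matrix into (I, J, K)."""
    I, J, K = shape
    return M.T.reshape(I, J, K)

def mode3_rank_r_approx(T, r):
    """Keep only the r strongest SVD components along the third mode."""
    M = mode3_unfold(T)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return mode3_fold(M_r, T.shape)

# Synthetic "light field" whose view dependence has mode-3 rank 2
rng = np.random.default_rng(0)
T = (rng.standard_normal((64, 2)) @ rng.standard_normal((2, 16))).reshape(8, 8, 16)

T2 = mode3_rank_r_approx(T, 2)  # matches the data's true mode-3 rank
T1 = mode3_rank_r_approx(T, 1)  # rank too low: lossy reconstruction
```

Because the synthetic tensor has mode-3 rank 2, the rank-2 approximation reconstructs it up to numerical precision, while the rank-1 approximation incurs a larger error; the rank trades reconstruction quality against storage, which is the compactness comparison the abstract refers to.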

Keywords: city rendering, level of detail, out-of-core rendering, surface light fields


Download Paper


Bibtex

@INPROCEEDINGS{ruiters-2008-lod-rendering,
    author = {Ruiters, Roland},
     title = {View-dependent Far-Field Level of Detail Rendering for Urban Models},
 booktitle = {Central European Seminar on Computer Graphics for Students (CESCG 2008)},
      year = {2008},
     month = apr,
  keywords = {city rendering, level of detail, out-of-core rendering, surface light fields},
  abstract = {Rendering of terrain meshes has been an active field of research for several years. Real-time,
              out-of-core terrain rendering systems based on hierarchical level of detail representations,
              created by continuously simplifying the terrain geometry, are available today. Although they
              provide good results at close view distances, these techniques are not well suited to rendering
              distant views of urban models: once the error threshold exceeds the building size, the buildings
              are removed by the simplifier and remain represented only in the terrain texture. This results in
              a loss of viewing-direction-dependent information and thus a change in the appearance of the city.
              To overcome this problem, we propose the use of surface light fields as a view-dependent,
              image-based level of detail representation for far distances. We compare two compression
              techniques, the Per Cluster Factorization and the Linear Mode-3 Tensor Approximation, and show
              that the surface light fields are more compact, can be rendered at higher frame rates, and exhibit
              fewer aliasing artifacts than the geometry they represent.}
}