StreetGAN: Towards Road Network Synthesis with Generative Adversarial Networks

In Proceedings of the International Conference on Computer Graphics, Visualization and Computer Vision, 2017
 

Abstract

We propose a novel example-based approach for road network synthesis relying on Generative Adversarial Networks (GANs), a recently introduced deep learning technique. In a pre-processing step, we first convert a given representation of a road network patch into a binary image where pixel intensities encode the presence or absence of streets. We then train a GAN that is able to automatically synthesize a multitude of arbitrarily sized street networks that faithfully reproduce the style of the original patch. In a post-processing step, we extract a graph-based representation from the generated images. In contrast to other methods, our approach neither requires domain-specific expert knowledge, nor is it restricted to a limited number of street network templates. We demonstrate the general feasibility of our approach by synthesizing street networks of largely varying style and evaluate the results in terms of visual similarity as well as statistical similarity based on road network similarity measures.


Download Paper

Bibtex

@INPROCEEDINGS{hartmann-2017-StreetGAN,
     author = {Hartmann, Stefan and Weinmann, Michael and Wessel, Raoul and Klein, Reinhard},
      title = {StreetGAN: Towards Road Network Synthesis with Generative Adversarial Networks},
  booktitle = {International Conference on Computer Graphics, Visualization and Computer Vision},
       year = {2017},
   abstract = {We propose a novel example-based approach for road network synthesis relying on Generative
               Adversarial Networks (GANs), a recently introduced deep learning technique. In a pre-processing
               step, we first convert a given representation of a road network patch into a binary image where
               pixel intensities encode the presence or absence of streets. We then train a GAN that is able to
                automatically synthesize a multitude of arbitrarily sized street networks that faithfully reproduce
                the style of the original patch. In a post-processing step, we extract a graph-based representation
                from the generated images. In contrast to other methods, our approach neither requires
                domain-specific expert knowledge, nor is it restricted to a limited number of street network
               templates. We demonstrate the general feasibility of our approach by synthesizing street networks of
               largely varying style and evaluate the results in terms of visual similarity as well as statistical
               similarity based on road network similarity measures.}
}