Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications

Technical Report, Linköping University, 2017
Download the publication: ProceduralModelingandPhysicallyBasedRenderingforSyntheticDataGenerationinAutomotiveApplications.pdf [29.8 MB]
We present an overview and evaluation of a new, systematic approach for generating highly realistic, annotated synthetic data for training deep neural networks in computer vision tasks. The main contribution is a procedural world-modeling approach that enables high variability coupled with physically accurate image synthesis, a departure from the hand-modeled virtual worlds and approximate image-synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate, and scalable image synthesis; implicit wide coverage of classes and features; and complete data introspection for annotations, all of which contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine-tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural networks' performance and that even modest implementation efforts produce state-of-the-art results.
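To illustrate one of the benefits named above, complete data introspection, here is a minimal toy sketch (not the report's actual pipeline) of why procedural generation yields pixel-perfect annotations for free: the generator knows every object it draws, so the semantic label map is produced alongside the image. The class names and scene layout below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical class ids, for illustration only (not the report's taxonomy).
CLASSES = {"background": 0, "road": 1, "vehicle": 2}

def render_synthetic_scene(height=64, width=96, n_vehicles=3):
    """Produce a toy 'image' and its pixel-perfect label map together.

    In a procedural pipeline the generator places every object itself,
    so the per-pixel semantic annotation comes for free -- no manual
    labeling step is needed.
    """
    image = np.zeros((height, width, 3), dtype=np.float32)
    labels = np.full((height, width), CLASSES["background"], dtype=np.uint8)

    # A horizontal band of 'road' in the lower half of the frame.
    road_top = height // 2
    image[road_top:, :] = (0.3, 0.3, 0.3)  # grey road surface
    labels[road_top:, :] = CLASSES["road"]

    # Randomly placed rectangles standing in for vehicles on the road.
    for _ in range(n_vehicles):
        h = int(rng.integers(6, 12))
        w = int(rng.integers(10, 20))
        y = int(rng.integers(road_top, height - h))
        x = int(rng.integers(0, width - w))
        image[y:y + h, x:x + w] = rng.uniform(0.2, 1.0, size=3)  # random color
        labels[y:y + h, x:x + w] = CLASSES["vehicle"]

    return image, labels

image, labels = render_synthetic_scene()
print(image.shape, labels.shape, sorted(np.unique(labels)))
```

Randomizing the procedural parameters (object counts, sizes, placements, colors) is what gives the high variability the abstract refers to; a physically based renderer would replace the toy rasterization here.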

See also

See also our publication on arXiv.

BibTex references

@techreport{Tsirikoglou2017,
  author       = "Tsirikoglou, Apostolia and Kronander, Joel and Wrenninge, Magnus and Unger, Jonas",
  title        = "Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications",
  institution  = "Linköping University",
  year         = "2017"
}
