• Gabriel Eilertsen defends PhD thesis
    07.06.2018: On Friday, June 8 at 09.15, Gabriel Eilertsen will defend his PhD thesis titled: The High Dynamic Range Imaging Pipeline - Tone Mapping, Distribution, and Single Exposure Reconstruction. The defense takes place in the dome theater at Norrköping Visualization Center.

  • Paper accepted to EUSIPCO 2018
    21.05.2018: Our paper titled Multi-Shot Single Sensor Light Field Camera Using a Color Coded Mask, authored by Ehsan Miandji, Jonas Unger, and Christine Guillemot, has been accepted to EUSIPCO 2018. The project was carried out in collaboration with INRIA, Rennes, France.

  • Paper accepted for Elsevier Digital Signal Processing
    21.05.2018: Our paper titled OMP-Based DOA Estimation Performance Analysis, authored by Ehsan Miandji, Mohammad Emadi, and Jonas Unger, has been accepted for publication in Elsevier Digital Signal Processing. In the paper, we present a theoretical performance analysis for direction-of-arrival (DOA) estimation using the orthogonal matching pursuit (OMP) algorithm.

  • Paper published in IEEE Signal Processing Letters
    21.05.2018: Our paper titled On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence is now published in IEEE Signal Processing Letters. The paper is authored by Ehsan Miandji, Mohammad Emadi, Jonas Unger, and Ehsan Afshari, as a collaboration between VCL, University of Michigan, and Qualcomm Inc.

  • VRST 2017
    03.11.2017: Jonas Unger and Oliver Staadt, Rostock University, are papers chairs for VRST 2017, the 23rd ACM Symposium on Virtual Reality Software and Technology, held in Gothenburg, Sweden, November 8-10. VRST is an international forum for the exchange of experience and knowledge among researchers and developers in virtual and augmented reality (VR/AR) technology. VRST provides an opportunity for VR/AR researchers to interact, share new results, show live demonstrations of their work, and discuss emerging directions for the field. The event is sponsored by ACM SIGCHI and SIGGRAPH.

  • Paper accepted to SIGGRAPH Asia 2017
    25.10.2017: Our paper HDR image reconstruction from a single exposure using deep CNNs authored by Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafal K. Mantiuk, and Jonas Unger has been accepted for publication at SIGGRAPH Asia 2017 in Bangkok, Thailand. 

  • BxDF4CV ICCV workshop
    22.10.2017: We are co-organizing the BxDF4CV workshop at ICCV in Venice, Italy, together with the Technical University of Denmark and Eberhard Karls Universität Tübingen. The workshop is held October 23rd and features both paper presentations and two distinguished invited speakers: Paul Debevec, who is a senior staff researcher at Google VR and adjunct research professor at the University of Southern California’s Institute for Creative Technologies, and Manmohan Chandraker from NEC Labs America in Cupertino.

  • Vinnova funds new project
    17.10.2017: Vinnova funds our new project, in which we are developing new methods for generating synthetic training data for machine learning and deep learning. In the project, we use procedural world modeling and physically based image synthesis techniques to generate visual data with corresponding pixel-accurate ground truth annotations for automotive applications. The project is a collaboration with 7DLabs (USA).

  • Materials and Technology for a Digital Future
    31.08.2017: Jonas Unger is invited speaker at the jubilee symposium Materials and Technology for a Digital Future for the Knut and Alice Wallenberg Foundation's 100 years anniversary. The symposium is held September 13 at 09.00 - 17.00 in lecture hall K4 in Kåkenhus at Campus Norrköping, Linköping University.

  • Paper accepted to ICCV workshop
    30.08.2017: Our paper Efficient BRDF Sampling Using Projected Deviation Vector Parameterization by Tanaboon Tongbuasirilai, Murat Kurt, and Jonas Unger has been accepted to the ICCV workshop BxDF4CV.

  • Paper accepted to SCIA 2017
    29.04.2017: Our paper BriefMatch: Dense binary feature matching for real-time optical flow estimation by Gabriel Eilertsen, Per-Erik Forssén, and Jonas Unger has been accepted to SCIA 2017, held in Tromsø, Norway.

  • Presentation SSBA 2017
    15.03.2017: Jonas Unger is giving a talk on our project on image synthesis for machine learning at SSBA 2017.

  • Our STAR on HDR-video tone mapping has been accepted to Eurographics 2017
    11.02.2017: Our state-of-the-art-report titled A comparative review of tone-mapping algorithms for high dynamic range video by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger has been accepted to Eurographics 2017.

  • Three papers at ICIP 2016
    27.09.2016: We are presenting three papers at ICIP 2016 in Phoenix. The first paper is: On nonlocal image completion using an ensemble of dictionaries by Ehsan Miandji and Jonas Unger. The second paper is: Real-time noise-aware tone-mapping and its use in luminance retargeting by Gabriel Eilertsen, Rafal K. Mantiuk and Jonas Unger. The third paper is Luma HDRv: an open source high dynamic range video codec optimized by large-scale testing by Gabriel Eilertsen, Rafal K. Mantiuk and Jonas Unger.

  • Two research overview presentations at SIGRAD 2016

    We have two presentations at SIGRAD 2016 in Visby, Sweden.

    Ehsan Miandji will give an overview of our work on sparse basis representations for image and light field reconstruction and compressive signal reconstruction. Focus will be on the EG 2015 paper Compressive Image Reconstruction in Reduced Union of Subspaces by Ehsan Miandji, Joel Kronander, and Jonas Unger.

    Gabriel Eilertsen will give an overview of our projects on tone mapping of high dynamic range (HDR) video, and our recent SIGGRAPH Asia 2015 paper: Real-time noise-aware tone mapping by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger.

  • Paper accepted to CVPR workshop on Computational Cameras and Displays 2016
    17.05.2016: Our paper Time-offset Conversations on a Life-Sized Automultiscopic Projector Array by Andrew Jones, Jonas Unger, Koki Nagano, Jay Busch, Xueming Yu, Hsuan-Yueh Peng, Joseph Barreto, Oleg Alexander, Mark Bolas, and Paul Debevec has been accepted for publication at the CVPR workshop on Computational Cameras and Displays 2016. The project was carried out in collaboration with the Institute for Creative Technologies at the University of Southern California.

  • Two talks accepted to SIGGRAPH 2016
    02.05.2016: Two talks have been accepted to SIGGRAPH 2016 in Anaheim. The first talk, Differential appearance editing of measured BRDFs, by Apostolia Tsirikoglou, Joel Kronander, Per Larsson, Tanaboon Tongbuasirilai, Andrew Gardner, and Jonas Unger, describes a method for intuitive editing of measured BRDFs. The second talk, Luma HDRv: an open source high dynamic range video codec optimized by large-scale testing, by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger, describes the design choices and implementation of our recently released open source HDR video codec Luma HDRv.

  • Funding from Norrköpings fond för forskning och utveckling
    02.05.2016: Norrköpings fond för forskning och utveckling will fund our new project Digitala verktyg för en levande historia (Digital tools for a living history), which will develop new algorithms and tools for appearance capture and modelling. The project will run 2016-2017 and is a collaboration with NVAB, Norrköpings Stadsmuseum, Livrustkammaren, Skokloster, and Hallwylska museet.

  • New book on HDR-video

    Our new book High Dynamic Range Video - From acquisition to display and applications is now out. The computer graphics and image processing group has contributed two chapters. Chapter 2: Unified reconstruction of Raw HDR Video Data, by Jonas Unger, Joel Kronander, and Saghi Hajisharif. Chapter 7: Evaluation of Tone Mapping Operators for HDR-video, by Gabriel Eilertsen, Rafał Mantiuk, and Jonas Unger.

  • Eurographics 2016 tutorial on High Dynamic Range Video

    Our Eurographics 2016 tutorial "The HDR-video pipeline", by Jonas Unger, Francesco Banterle, Gabriel Eilertsen, and Rafal Mantiuk has been accepted.

    Abstract: High dynamic range (HDR) video technology has gone through remarkable developments over the last few years; HDR-video cameras are being commercialized, new algorithms for color grading and tone mapping specifically designed for HDR-video have recently been proposed, and the first open source compression algorithms for HDR-video are becoming available. HDR-video represents a paradigm shift in imaging and computer graphics, which has generated, and will continue to generate, a range of both new research challenges and applications. This intermediate-level tutorial will give an in-depth overview of the full HDR-video pipeline and present several examples of state-of-the-art algorithms and technology in HDR-video capture, tone mapping, compression, and specific applications in computer graphics.