News:
  • Paper accepted to SCIA 2017
    29.04.2017: Our paper BriefMatch: Dense binary feature matching for real-time optical flow estimation by Gabriel Eilertsen, Per-Erik Forssén, and Jonas Unger has been accepted to SCIA 2017, held in Tromsø, Norway.


  • Presentation SSBA 2017
    15.03.2017: Jonas Unger is giving a talk on our project on image synthesis for machine learning at SSBA 2017.


  • Our STAR on HDR-video tone mapping has been accepted to Eurographics 2017
    11.02.2017: Our state-of-the-art-report titled A comparative review of tone-mapping algorithms for high dynamic range video by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger has been accepted to Eurographics 2017.


  • Three papers at ICIP 2016
    27.09.2016: We are presenting three papers at ICIP 2016 in Phoenix. The first paper is: On nonlocal image completion using an ensemble of dictionaries by Ehsan Miandji and Jonas Unger. The second paper is: Real-time noise-aware tone-mapping and its use in luminance retargeting by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger. The third paper is: Luma HDRv: an open source high dynamic range video codec optimized by large-scale testing by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger.


  • Two research overview presentations at SIGRAD 2016
    23.05.2016:

    We have two presentations at SIGRAD 2016 in Visby, Sweden.

    Ehsan Miandji will give an overview of our work on sparse basis representations for image and light field reconstruction and compressive signal reconstruction. Focus will be on the EG 2015 paper Compressive Image Reconstruction in Reduced Union of Subspaces by Ehsan Miandji, Joel Kronander, and Jonas Unger.

    Gabriel Eilertsen will give an overview of our projects in high dynamic range (HDR) video and tone mapping of HDR video, including our recent SIGGRAPH Asia 2015 paper: Real-time noise-aware tone mapping by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger.



  • Paper accepted to CVPR workshop on Computational Cameras and Displays 2016
    17.05.2016: Our paper Time-offset Conversations on a Life-Sized Automultiscopic Projector Array by Andrew Jones, Jonas Unger, Koki Nagano, Jay Busch, Xueming Yu, Hsuan-Yueh Peng, Joseph Barreto, Oleg Alexander, Mark Bolas, and Paul Debevec has been accepted for publication at the CVPR workshop Computational Cameras and Displays 2016. The project was carried out in collaboration with the Institute for Creative Technologies at the University of Southern California. Read more about the project here.


  • Two talks accepted to SIGGRAPH 2016
    02.05.2016: Two talks have been accepted to SIGGRAPH 2016 in Anaheim. The first talk: Differential appearance editing of measured BRDFs, by Apostolia Tsirikoglou, Joel Kronander, Per Larsson, Tanaboon Tongbuasirilai, Andrew Gardner, and Jonas Unger, describes a method for intuitive editing of measured BRDFs. The second talk: Luma HDRv: an open source high dynamic range video codec optimized by large scale testing, by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger, describes the design choices and implementation of our recently released open source HDR video codec Luma HDRv.


  • Funding from Norrköpings fond för forskning och utveckling
    02.05.2016: Norrköpings fond för forskning och utveckling will fund our new project Digitala verktyg för en levande historia, which will develop new algorithms and tools for appearance capture and modelling. The project will run 2016 - 2017 and is a collaboration with NVAB, Norrköpings Stadsmuseum, Livrustkammaren, Skokloster and Hallwylska museet.


  • New book on HDR-video
    16.04.2016:

    Our new book High Dynamic Range Video - From acquisition to display and applications is now out. The computer graphics and image processing group has contributed two chapters. Chapter 2: Unified Reconstruction of Raw HDR Video Data, by Jonas Unger, Joel Kronander, and Saghi Hajisharif. Chapter 7: Evaluation of Tone Mapping Operators for HDR-video, by Gabriel Eilertsen, Rafał Mantiuk, and Jonas Unger.



  • Eurographics 2016 tutorial on High Dynamic Range Video
    14.03.2016:

    Our Eurographics 2016 tutorial "The HDR-video pipeline", by Jonas Unger, Francesco Banterle, Gabriel Eilertsen, and Rafal Mantiuk has been accepted.

    Abstract: High dynamic range (HDR) video technology has gone through remarkable developments over the last few years; HDR-video cameras are being commercialized, new algorithms for color grading and tone mapping specifically designed for HDR-video have recently been proposed, and the first open source compression algorithms for HDR-video are becoming available. HDR-video represents a paradigm shift in imaging and computer graphics, which has and will continue to generate a range of both new research challenges and applications. This intermediate-level tutorial will give an in-depth overview of the full HDR-video pipeline and present several examples of state-of-the-art algorithms and technology in HDR-video capture, tone mapping, compression, and specific applications in computer graphics.




  • Our paper on Adaptive dualISO HDR-reconstruction has been published
    10.12.2015:
    Our paper: Adaptive dualISO HDR-reconstruction authored by Saghi Hajisharif, Joel Kronander, and Jonas Unger is now published and available online at the EURASIP Journal of Imaging and Video Processing web page.


  • PhD defense - Joel Kronander - December 4th
    26.11.2015: Joel Kronander will defend his PhD thesis: Physically based rendering of synthetic objects in real environments. The thesis defense will take place in the dome theater at Norrköping Visualization Center, December 4th at 09.15. Jonas Unger has been the thesis supervisor, and Gregory Ward will be the opponent during the defense.


  • Research grant from the Swedish Research Council (VR)
    20.11.2015: Our proposal on Monte Carlo image synthesis will receive funding from the Swedish Research Council (VR). The project is led by Jonas Unger and will be carried out in collaboration with Prof. Thomas Schön at Uppsala University.


  • Our open source HDR-video codec Luma HDRv is now online
    31.10.2015: The Luma HDRv codec uses a perceptually motivated method to store high dynamic range (HDR) video with a limited number of bits. The method, as originally described in the HDR Extension for MPEG-4, ensures that the quantized video stream will be visually indistinguishable from the input HDR. The stream is then compressed using Google's VP9 video codec, which provides a 12-bit encoding depth. Read more and download the open source libraries at the project web page.
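    As a rough illustration of the idea behind such a codec, the sketch below maps HDR luminance to 12-bit integer codes through a logarithmic curve, so that quantization steps roughly follow relative contrast rather than absolute luminance. This is a hypothetical simplification for illustration only, not the actual perceptual transfer function used in Luma HDRv; the luminance range (L_MIN, L_MAX) is an assumed example.

```python
import math

# Assumed luminance range in cd/m^2 (illustrative, not the codec's values)
L_MIN, L_MAX = 1e-2, 1e4
BITS = 12                 # encoding depth provided by VP9
MAX_CODE = 2**BITS - 1    # 4095

def encode(lum):
    """Map a luminance value to an integer code in [0, MAX_CODE]."""
    l = min(max(lum, L_MIN), L_MAX)
    t = (math.log10(l) - math.log10(L_MIN)) / (math.log10(L_MAX) - math.log10(L_MIN))
    return round(t * MAX_CODE)

def decode(code):
    """Invert the mapping, recovering an approximate luminance."""
    t = code / MAX_CODE
    return 10 ** (t * (math.log10(L_MAX) - math.log10(L_MIN)) + math.log10(L_MIN))
```

    In the real codec, the quantized frames are then handed to VP9 for compression; the perceptual quantization step is what guarantees the 12-bit stream stays visually indistinguishable from the HDR input.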


  • Capturing reality for computer graphics applications
    10.11.2015: The project web page for our SIGGRAPH Asia 2015 course Capturing reality for computer graphics applications is now online. There you can find the course notes, along with all lidar scans, HDR light probes, and HDR backdrop images used in the examples in the course.


  • Paper accepted to Siggraph Asia 2015
    18.09.2015: Our paper: Real-time noise-aware tone mapping by Gabriel Eilertsen, Rafal K. Mantiuk, and Jonas Unger has been accepted for publication at SIGGRAPH Asia 2015. Read more at the project web page.


  • Technical brief accepted to Siggraph Asia 2015
    18.09.2015: Our technical brief Pseudo-marginal Metropolis Light Transport, by Joel Kronander, Thomas B. Schön, and Jonas Unger has been accepted to SIGGRAPH Asia 2015. 


  • Siggraph 2015 E-Tech
    12.08.2015:

    An Auto-Multiscopic Projector Array for Interactive Digital Humans developed by Andrew Jones, Jonas Unger, Koki Nagano, Jay Busch, Xueming Yu, Hsuan-Yueh Peng, Oleg Alexander, and Paul Debevec, is on display at Siggraph 2015 Emerging Technologies in Los Angeles.



  • LiU and ITN in billion-SEK investment in autonomous systems
    30.05.2015: Linköping University coordinates a 1.8 billion SEK grant from the Knut and Alice Wallenberg (KAW) foundation within the area of autonomous systems. This 10-year strategic effort is carried out in collaboration between LiU, KTH, Chalmers, and Lund University. Read more here.


  • MIST Satellite imaging pipeline
    07.05.2015: The Computer Graphics and Image Processing group and the Visual Computing Laboratory are building the imaging pipeline for the MIST satellite, which will be launched in 2017. In this project we employ our statistical image framework for accurate image reconstruction and sparse image representations for compression. For more information about the MIST satellite, please see the MIST cubesat satellite blog.