Research Projects


This page presents the main research projects that we are involved in. Our research topics include image capture and analysis, photo-realistic rendering, image-based lighting, HDR imaging and video, statistical methods for image retrieval, and data-driven estimation and modeling of scene features such as geometry, materials, and HDR lighting environments.

 

Main Projects:


The Virtual Photo Set

 

 

Traditional photography is only one modality for representing reality. In this project, we develop new methods that enable a richer documentation of scenes, and a tool set that makes it possible to modify and augment the captured reality. The Virtual Photo Set (VPS) allows users to blend real and virtual objects and to generate photo-realistic images in which real and virtual objects are indistinguishable. The VPS is the next step in the ongoing paradigm shift in how we capture and document reality.

This project is motivated by the rapid growth of image, sensor, and meta-data available on the internet, in databases, and from capture and measurement. These huge data sets and input sources make it possible to analyze the world around us in new ways. The information (pixel data) in these images is, in its raw form, unstructured and contains large redundancies and ambiguities. In the VPS project, we are developing new methods for structuring these large data sets, and algorithms for extracting semantic information from visually rich scenes.

Read more.


Depends - Workflow management for computer graphics and computational imaging

 


Many applications in graphics and imaging involve complex processing of input data from a variety of sources. For example, in the 3D reconstruction of a scene we use data from laser scanners, high-resolution images from DSLR cameras, and High Dynamic Range (HDR) video to record the scene. To recover the final model, we need to: align the laser-scan point clouds, compute and align sparse 3D points from the images, estimate a mesh model from the registered point clouds, and calibrate and project the HDR data onto the recovered model. This sequence of operations comprises a large number of different algorithms and optimization procedures operating on very large input data sets.

In this project, we are developing a workflow management system called Depends. Depends is a modular, node-based system that organizes the processing into a directed acyclic graph. Each node is a small software component responsible for, e.g., controlling a capture device, data I/O, or a specific data processing or visualization task. The generality of the nodes and the system design enables the user to rapidly build a node network customized to the input data and the task at hand. The workflow manager is built as a lightweight layer that controls the overall data flow, keeps track of assets in the system, and manages the execution of specific tasks. The system is designed so that a software component can be turned into a node in the workflow management system with little effort from the developer. An important aspect of the system is that it communicates with existing 3D modelling software packages such as Autodesk's Maya or 3D Studio Max.
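As a rough illustration of the idea, the reconstruction pipeline above can be expressed as a tiny dependency graph executed in topological order. The node names and processing functions here are illustrative stand-ins, not the actual Depends API:

```python
# Minimal sketch of a node graph executed as a DAG. Each node declares its
# predecessors; execution proceeds in topological order so that every node
# sees the results of its inputs.
from graphlib import TopologicalSorter

def run_graph(nodes, deps):
    """nodes: name -> callable(dict of input results); deps: name -> [input names]."""
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        inputs = {d: results[d] for d in deps.get(name, [])}
        results[name] = nodes[name](inputs)
    return results

# Toy pipeline mirroring the 3D-reconstruction example in the text.
nodes = {
    "laser_scan":  lambda _: "point cloud",
    "dslr_images": lambda _: "sparse 3D points",
    "mesh":        lambda i: f"mesh from {i['laser_scan']} + {i['dslr_images']}",
    "hdr_project": lambda i: f"HDR projected onto {i['mesh']}",
}
deps = {
    "mesh": ["laser_scan", "dslr_images"],
    "hdr_project": ["mesh"],
}
results = run_graph(nodes, deps)
```

The topological sort guarantees that, e.g., the mesh node never runs before both the laser scan and the image-based points are available, which is exactly the scheduling problem a workflow manager solves at scale.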

Depends is an open-source project released under the BSD license. Read more or download the code at the project web page: http://www.dependsworkflow.net


HDR Video - Capture, processing and display

 


In this project, we are developing hardware and algorithms for High Dynamic Range (HDR) image capture and processing. A main theme within the project is HDR video. Recent results include a global-shutter 4-Mpixel HDR video camera capable of capturing images with a dynamic range on the order of 10,000,000:1 at 30 frames per second, as well as low-level signal processing of the high-bandwidth HDR data. The project also investigates methods for displaying HDR video, and the use of HDR video for image-based lighting purposes.
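To put the numbers in perspective, a contrast ratio can be converted to log10 units and photographic stops; the camera figure is taken from the text above, the conversions themselves are standard:

```python
import math

def dynamic_range_log10(l_min, l_max):
    """Contrast ratio between darkest and brightest measurable
    luminance, expressed in log10 units."""
    return math.log10(l_max / l_min)

# A 10,000,000:1 ratio corresponds to 7 log10 units, or roughly
# 23 photographic stops (factors of two).
ratio = 10_000_000
log_units = dynamic_range_log10(1.0, ratio)
stops = math.log2(ratio)
```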

More information can be found at: www.hdrv.org

and at our CENIIT project web page: High Dynamic Range Video with Applications


Evaluation of tone mapping for HDR-video

 

One of the main challenges in HDR imaging and video is to map the dynamic range of the HDR image, and of the real world, to the usually much smaller dynamic range of the display device. While an HDR image captured in a high-contrast real scene often exhibits a dynamic range on the order of 5 to 10 log10 units, a conventional display system is limited to a dynamic range on the order of 2 to 4 log10 units. The mapping of pixel values from an HDR image or video sequence to the display system is called tone mapping, and is carried out using a tone mapping operator (TMO). Most TMOs rely on models that simulate the human visual system. Over the last two decades, extensive research has led to the development of a large number of TMOs. However, only a handful of them handle the temporal domain, i.e. HDR-video content, in a consistent and robust way. This lack of HDR-video TMOs is likely due to the (very) limited availability of HDR-video footage; until recently, artificial and static scenes were the only material available. Recent developments in HDR-video capture, however, open up new possibilities for advancing techniques in the area.

In order to display our HDR-video sequences on ordinary monitors, projectors, and TV sets, we have carried out a thorough evaluation of existing techniques. Our study shows that the existing TMOs perform very differently, and that new display algorithms need to be developed. The video below shows an example comparison between six different TMOs.
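As an illustration of what a tone mapping operator does, here is a minimal global logarithmic TMO in the textbook style. It is a sketch of the general idea only, not one of the six operators compared in the study:

```python
import numpy as np

def log_tmo(hdr, eps=1e-6):
    """Simple global logarithmic TMO: compresses HDR luminance into
    the display range [0, 1]. Illustrative textbook operator."""
    l_max = hdr.max()
    return np.log1p(hdr) / np.log1p(l_max + eps)

# Synthetic HDR frame spanning ~5 log10 units of luminance.
hdr = np.logspace(-2, 3, 16).reshape(4, 4)
ldr = log_tmo(hdr)
```

A global operator like this applies the same curve to every pixel; the local and temporally adaptive TMOs discussed above differ precisely in making the curve vary over space and time.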

Read more.


Material measurement and modelling

 

Material properties such as reflectance, micro-structures and textures play an important role in the visual appearance of objects. In this project, we are developing methods and equipment for measuring these properties on everyday surfaces as well as mathematical models that can describe these properties in the context of photo-realistic image synthesis (computer graphics rendering).
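As a sketch of the kind of analytic reflectance model referred to here, the classic Lambertian-plus-Blinn-Phong BRDF can be evaluated as follows. The parameter values are illustrative, not measurements from the project:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf_blinn_phong(n, l, v, kd=0.8, ks=0.2, shininess=32.0):
    """Lambertian diffuse term plus a Blinn-Phong specular lobe: a
    classic analytic reflectance model of the kind that can be fitted
    to measured material data. Parameters are illustrative."""
    h = normalize(l + v)                        # half vector
    diffuse = kd / np.pi                        # ideal diffuse reflectance
    specular = ks * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

# Evaluate for a surface facing +z, lit and viewed at mirrored angles.
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.0, 0.5, 1.0]))
v = normalize(np.array([0.0, -0.5, 1.0]))
f = brdf_blinn_phong(n, l, v)
```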


Real-time Procedural Texturing

 

 

Procedural textures have been a standard tool in software rendering for decades, but they have only recently become practical for real-time GPU rendering. Efforts in our group have resulted in GPU-friendly, fast implementations of Perlin noise and Worley noise, published as open-source software. We have also investigated methods for representing and rendering 2D shapes by distance fields, improving on existing methods and inventing a new algorithm for the anti-aliased Euclidean distance transform. Real-time procedural texturing is useful on a wide range of GPU hardware, from mobile phones to high-end desktop systems, and it is now ready for widespread adoption. Our efforts in this field continue, and we teach both undergraduate and graduate courses on the subject. Download the real-time demo!
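The distance-field rendering idea can be sketched as follows: a signed distance field is turned into anti-aliased pixel coverage with a smoothstep-style ramp, as one would do in a fragment shader. This illustrates the general technique, not our anti-aliased Euclidean distance transform algorithm:

```python
import numpy as np

def circle_sdf(x, y, cx=0.0, cy=0.0, r=0.5):
    """Signed distance to a circle: negative inside, positive outside."""
    return np.hypot(x - cx, y - cy) - r

def render_aa(sdf, width=0.01):
    """Anti-alias an edge by a smooth ramp over the distance field,
    the same idea as smoothstep-based edge filtering on the GPU."""
    t = np.clip(sdf / width * 0.5 + 0.5, 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)        # coverage in [0, 1]

# Rasterize a circle on a 256x256 grid over [-1, 1] x [-1, 1].
ys, xs = np.mgrid[-1:1:256j, -1:1:256j]
image = render_aa(circle_sdf(xs, ys))
```

Because the distance field varies smoothly across the edge, the coverage ramp gives sub-pixel anti-aliasing at any magnification, which is what makes distance fields attractive for resolution-independent 2D shapes.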


Incident Light Fields

 

Images rendered using spatially varying real world illumination 

 

Within this project, we have extended Image Based Lighting (IBL) to allow capture and rendering with spatially varying illumination. For this, we capture sequences (thousands) of High Dynamic Range (HDR) light probe images, which are used to create HDR light fields that efficiently describe the illumination in the scene. We call such 4D illumination data sets Incident Light Fields (ILFs).
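A minimal sketch of the 4D (position x direction) lookup that an ILF enables: spatial bilinear interpolation between light probes on a regular grid, with each probe holding directional radiance samples. The grid layout and probe format here are simplifying assumptions, not the project's actual data structure:

```python
import numpy as np

def sample_ilf(probes, x, y):
    """Bilinearly interpolate directional radiance between the four
    light probes surrounding grid position (x, y)."""
    h, w = probes.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * probes[y0, x0] + fx * probes[y0, x1]
    bot = (1 - fx) * probes[y1, x0] + fx * probes[y1, x1]
    return (1 - fy) * top + fy * bot

# 3x3 grid of probes, each with 8 directional radiance samples
# (HDR-range values, synthetic for illustration).
rng = np.random.default_rng(0)
probes = rng.uniform(0.0, 1000.0, size=(3, 3, 8))
radiance = sample_ilf(probes, 0.5, 0.5)
```

A renderer queries such a structure once per shading point, so the illumination an object receives changes as it moves through the scene, which is exactly what a single light probe cannot express.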


Learning and Image Retrieval

 

In signal processing, filter systems are usually designed by hand. Biological vision systems, on the other hand, are the result of evolutionary processes. This raises the question of how technical systems can be constructed by training them on empirical data. Methods based on fourth-order statistics were investigated in (NIPS94) and (Josaa96), where it was shown that filter systems trained on multispectral reflectance spectra from a color atlas have properties similar to the color matching functions used in color science. Systems based on harmonic analysis have the property that they provide minimum mean-squared-error approximations, and they have natural invariance properties. This makes them ideal candidates for retrieval systems where a large number of images must be indexed and retrieval time should be very short.
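The minimum mean-squared-error property mentioned above can be illustrated with a small experiment: a linear basis computed from the data (here via SVD, i.e. PCA, on synthetic smooth spectra) yields compact descriptors and near-exact reconstruction when the basis size matches the data's intrinsic dimensionality. The spectra below are synthetic stand-ins, not color-atlas measurements:

```python
import numpy as np

def pca_basis(spectra, k):
    """The top-k right singular vectors of the centered data give the
    k-dimensional linear basis with minimum mean-squared
    reconstruction error over the training set."""
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, vt[:k]

def project(spectra, mean, basis):
    """Compact descriptor: coordinates in the learned basis,
    usable as a short index vector for retrieval."""
    return (spectra - mean) @ basis.T

# Synthetic smooth "reflectance spectra": mixtures of three broad bumps
# sampled at 31 wavelengths between 400 and 700 nm.
rng = np.random.default_rng(1)
wl = np.linspace(400, 700, 31)
bumps = np.exp(-0.5 * ((wl[None, :] - np.array([[450], [550], [650]])) / 40.0) ** 2)
weights = rng.uniform(0, 1, size=(100, 3))
spectra = weights @ bumps

mean, basis = pca_basis(spectra, k=3)
codes = project(spectra, mean, basis)       # 3 numbers per spectrum
recon = codes @ basis + mean
err = np.mean((spectra - recon) ** 2)
```

Each 31-sample spectrum is summarized by three coefficients, and because the synthetic data has rank three, the reconstruction error is essentially zero; on real spectra the same construction gives the best possible trade-off between descriptor length and mean-squared error.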