Description: Since the launch of the first Microsoft Kinect in 2010, building networks of multimodal image sensors has become extremely popular. The novelty of these devices lies in providing not only color information but also infrared and depth information about a scene, at a price affordable to non-experts. Combining multiple sensors and image modalities offers many advantages, such as simultaneous coverage of large environments, increased resolution, redundancy, multimodal scene information, and robustness against occlusion. However, exploiting these benefits requires addressing several challenges: synchronization, calibration, registration, multi-sensor fusion, the handling of large amounts of data, and, last but not least, sensor-specific stochastic and set-valued uncertainties. This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of multimodal image processing and fusion.
Organizers: Antonio Zea, Florian Faion, Uwe D. Hanebeck