The Eleventh IASTED International Conference on
Computer Graphics and Imaging
CGIM 2010

February 17 – 19, 2010
Innsbruck, Austria

TUTORIAL SESSION

Image Fusion - Principles, Methods, and Applications

Prof. Jan Flusser
Institute of Information Theory and Automation, Czech Republic
flusser@utia.cas.cz

Dr. Filip Sroubek
Institute of Information Theory and Automation, Czech Republic
sroubekf@utia.cas.cz

Dr. Barbara Zitova
Institute of Information Theory & Automation, Czech Republic
zitova@utia.cas.cz

Duration

3-4 hours

Abstract

This tutorial presents a review of recent as well as traditional image fusion methods of various kinds, with special emphasis on fusion for restoration and superresolution purposes. The reviewed approaches are classified according to the type of input images and according to the fusion purpose. The main contributions, advantages, and drawbacks of the methods will be discussed in the tutorial. Many practical examples from various application areas (surveillance, medical imaging, remote sensing, robot vision, and astronomy) will be demonstrated. Problematic issues of image fusion and an outlook for future research will also be discussed.
The major goals of the tutorial are to review traditional and recent fusion methods, to classify them according to the input data and the fusion purpose, to compare their advantages and drawbacks, and to demonstrate their performance on practical examples.

Objectives

The term Image Fusion (IF) denotes, in general, an approach to information extraction that has been adopted independently in several domains. The goal of image fusion is to integrate complementary multisensor, multitemporal, and/or multiview information into one new image containing information whose quality cannot be achieved otherwise. What "quality" means depends on the requirements of the particular application.
Image fusion has been used in many application areas. In remote sensing and in astronomy, multisensor fusion is used to achieve high spatial and spectral resolution by combining images from two sensors, one of which has high spatial resolution and the other high spectral resolution. Numerous fusion applications have appeared in medical imaging, such as the simultaneous evaluation of CT (computed tomography), NMR (nuclear magnetic resonance), and/or PET (positron emission tomography) images to obtain more complete information about the patient, and in military applications (combining visible and infrared or radar data for target localization and missile navigation). In multiview fusion, a set of images of the same scene taken by the same sensor but from different viewpoints is fused to obtain an image with higher resolution than the sensor normally provides, or to recover the 3D representation of the scene (shape from stereo). Multitemporal fusion serves two different aims. Images of the same scene are acquired at different time instants either to find and evaluate changes in the scene or to obtain a less degraded image of the scene. The former aim is common in medical imaging, especially in the change detection of organs and tumors, and in remote sensing for monitoring land or forest exploitation; the acquisition period is usually months or years. The latter aim requires the individual measurements to be much closer to each other, typically on the scale of seconds, and possibly taken under different conditions. Recent developments in the field have shown that IF can also be a useful tool for resolution enhancement.
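To make the remote-sensing case concrete, the sketch below shows one classical way of combining a high-spatial-resolution panchromatic band with low-resolution multispectral bands (Brovey-style pan-sharpening). It is only an illustration of the principle, not a method presented in the tutorial; the array names and shapes are assumptions made for the example.

    # Illustrative only: Brovey-style pan-sharpening, one classical form of
    # multisensor fusion (not necessarily a method covered in the tutorial).
    import numpy as np

    def brovey_pansharpen(ms, pan, eps=1e-6):
        """ms: (H, W, B) multispectral image already resampled to the
        panchromatic grid; pan: (H, W) high-resolution panchromatic band.
        Each band is rescaled so that the band sum matches the pan intensity."""
        intensity = ms.sum(axis=2) + eps          # per-pixel multispectral intensity
        return ms * (pan / intensity)[..., None]  # inject panchromatic spatial detail

    # Hypothetical usage with synthetic data:
    # ms = np.random.rand(256, 256, 4)   # 4-band multispectral, upsampled to the pan grid
    # pan = np.random.rand(256, 256)     # panchromatic band
    # fused = brovey_pansharpen(ms, pan)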
The list of applications mentioned above illustrates the diversity of problems we face when fusing images. It is impossible to design a universal method applicable to all image fusion tasks. Every method should take into account not only the fusion purpose and the characteristics of individual sensors, but also particular imaging conditions, imaging geometry, noise corruption, required accuracy and application-dependent data properties.

Tutorial Materials

In this tutorial, we categorize the IF methods according to the data entering the fusion (multiview, multisensor, and multitemporal fusion) and according to the fusion purpose (in particular, fusion for image restoration and superresolution fusion).

In each category, fusion consists of two basic steps: image registration, which brings the input images into spatial alignment, and combining the image functions (intensities, colors, etc.). We present a survey of traditional and up-to-date fusion methods and demonstrate their performance by practical experiments from various application areas.
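As a minimal sketch of this two-step pipeline, the following code registers a set of images under the (strong) assumption of pure integer translational misalignment, using phase correlation, and then combines the aligned intensities by simple averaging. Real fusion methods use far more general registration models and combination rules; the function names here are ours.

    # Minimal sketch of the two-step pipeline: (1) registration, here by phase
    # correlation assuming a pure integer translation, (2) combination, here a
    # plain average of the aligned intensities.
    import numpy as np

    def register_translation(ref, mov):
        """Estimate the integer (dy, dx) shift that aligns `mov` to `ref`."""
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map the correlation peak to signed shifts.
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return dy, dx

    def fuse_average(images):
        """Register every image to the first one, then average the stack."""
        ref = images[0]
        aligned = [ref]
        for img in images[1:]:
            dy, dx = register_translation(ref, img)
            aligned.append(np.roll(img, shift=(dy, dx), axis=(0, 1)))
        return np.mean(aligned, axis=0)

    # Hypothetical usage:
    # frames = [np.random.rand(128, 128) for _ in range(4)]
    # fused = fuse_average(frames)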
Special attention is paid to fusion for image restoration and to superresolution fusion, because these two categories are extremely important for producers and users of low-resolution imaging devices such as mobile phones, camcorders, web cameras, and security and surveillance cameras.
We propose a unifying system that simultaneously estimates the blurs and recovers the original undistorted image, all in high resolution, without any prior knowledge of the blurs or the original image. We accomplish this by formulating the problem as a constrained least-squares energy minimization with appropriate regularization terms, which guarantee a close-to-perfect solution.
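In generic form (our notation, chosen only for illustration; the exact terms used in the tutorial may differ), such an energy can be written as

    \[
    E\bigl(u, \{h_k\}\bigr) = \frac{1}{2}\sum_{k=1}^{K} \bigl\| D(h_k * u) - g_k \bigr\|^2
    + \lambda_u\, Q(u) + \lambda_h\, R\bigl(\{h_k\}\bigr),
    \]

where g_k are the K observed low-resolution images, u is the sought high-resolution image, h_k are the unknown blurs, * denotes convolution, D is the decimation (downsampling) operator, Q is an image regularizer (e.g. total variation), R enforces consistency among the blurs, and lambda_u, lambda_h are weights. The minimization is typically carried out by alternating between updates of u and of the blurs h_k.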
We demonstrate the performance of superresolution fusion on many examples, in particular on car license plate recognition and face recognition. A live demo showing the fusion of webcam images will run on a laptop during the tutorial.
An example of superresolution fusion: one of four input low-resolution images (left) and the high-resolution fused product (right).

Target Audience

The target audience of the tutorial comprises researchers from all application areas who need to integrate and fuse image data of various kinds, as well as specialists in image fusion interested in new developments in the field.

Presenters

Qualifications of the Instructors

Jan Flusser, Barbara Zitova, and Filip Sroubek have co-authored numerous tutorials at international conferences (ICIP'05, ICIP'07, EUSIPCO'07, SCIA'09, SPPRA'09, ICIP'09). Jan Flusser has given several invited and keynote talks at international conferences (Digital Image Computing DICTA'07, Computational Statistics COMPSTAT'06, Workshop on Information Optics WIO'06, NATO ASI Workshop on Imaging for Detection and Localization 06, and the Int'l Conf. on Computer Science ICCS'06, to name the most recent ones).


Dr. Jan Flusser received the M.Sc. degree in mathematical engineering from the Czech Technical University, Prague, Czech Republic, in 1985 and the Ph.D. degree in computer science from the Czechoslovak Academy of Sciences in 1990. Since 1985 he has been with the Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague, where he has headed the Department of Image Processing since 1995. Since 1991 he has also been affiliated with the Faculty of Mathematics and Physics, Charles University, Prague, and with the Czech Technical University, Prague (full professorship in 2004), where he gives undergraduate and graduate courses on Digital Image Processing and Pattern Recognition. Jointly with B. Zitova he gives a specialized graduate course on moment invariants and wavelets.
Jan Flusser has 20 years of experience in basic and applied research in the field of invariant-based pattern recognition. He has been involved in applications in remote sensing, medicine, and astronomy.
He has authored and co-authored more than 100 research publications in these areas. Some of his journal papers have become classics and are frequently cited. Jan Flusser is a Senior Member of the IEEE.

Dr. Filip Sroubek received the M.Sc. degree in computer science from the Czech Technical University, Prague, Czech Republic, in 1998 and the Ph.D. degree in computer science from Charles University, Prague, Czech Republic, in 2003. From 2004 to 2006 he held a postdoctoral position at the Instituto de Optica, CSIC, Madrid, Spain. He is currently with the Institute of Information Theory and Automation and partially also with the Institute of Radio Engineering and Electronics, both part of the Academy of Sciences of the Czech Republic, Prague.
Filip Sroubek is an author of two book chapters and over 25 journal and conference papers on image fusion, blind deconvolution, super-resolution, and related topics.


Dr. Barbara Zitova received the M.Sc. degree (1995) and the Ph.D. degree (2000), both in computer science, from Charles University, Prague, Czech Republic. Since 1995 she has been with the Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague. She also gives tutorials on Image Processing and Pattern Recognition at the Czech Technical University. Jointly with J. Flusser, she gives a specialized graduate course on moment invariants and wavelets.
Barbara Zitova has 10 years of experience in image analysis. She is the author of a book chapter in Invariants for Pattern Recognition and Classification (M.A. Rodrigues, ed., World Scientific, 2000) and of 20 journal and conference papers on moment invariants and related topics. Her paper "Image Registration Methods: A Survey" (Image and Vision Computing, vol. 21, pp. 977-1000, 2003) [3] has become a major reference in image registration.

References

[1] Sroubek F., Flusser J., Zitova B.: "Image Fusion: A Powerful Tool for Object Identification", in: Imaging for Detection and Identification (Byrnes J., ed.), pp. 107-128, Springer, 2006.
[2] Sroubek F., Flusser J.: "Fusion of Blurred Images", in: Multi-Sensor Image Fusion and Its Applications (Blum R. and Liu Z., eds.), CRC Press, Signal Processing and Communications Series, vol. 25, pp. 423-449, 2005.
[3] Zitova B., Flusser J.: "Image Registration Methods: A Survey", Image and Vision Computing, vol. 21, pp. 977-1000, 2003.