The Tenth IASTED International Conference on
Artificial Intelligence and Applications
AIA 2010

February 15 – 17, 2010
Innsbruck, Austria

KEYNOTE SPEAKER

Machine Learning for High-dimensional Data

Prof. Michel Verleysen
Université catholique de Louvain, Belgium

Abstract

Machine learning is nowadays used to build models for classification and regression tasks, among others. The learning principle consists in designing models from the information contained in a dataset, with as few a priori restrictions as possible on the class of models of interest.
While many paradigms exist and are widely used in machine learning, most of them suffer from the "curse of dimensionality": strange phenomena appear when data are represented in a high-dimensional space. These phenomena are most often counter-intuitive, and the conventional geometrical interpretation of data analysis in 2- or 3-dimensional spaces cannot be extended to much higher dimensions.
Among the problems related to the curse of dimensionality, feature redundancy and the concentration of the norm are probably those with the largest impact on data analysis tools. Feature redundancy means that models lose the identifiability property (for example, they oscillate between equivalent solutions) and become difficult to interpret; although redundancy is an advantage from the point of view of the information content of the data, it makes learning the model more difficult. The concentration of the norm is a more specific, unfortunate property of high-dimensional vectors: as the dimension of the space increases, norms and distances concentrate, making discrimination between data points more difficult.
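
The concentration effect is easy to reproduce numerically. The following sketch (a minimal illustration in Python with NumPy, not taken from the talk; all names are ours) samples points uniformly in the unit hypercube and measures the relative contrast between the nearest and farthest points from a random query. As the dimension grows, the contrast shrinks and all distances start to look alike.

    import numpy as np

    rng = np.random.default_rng(0)

    for d in (2, 10, 100, 1000):
        # Sample 500 points uniformly in the unit hypercube of dimension d.
        X = rng.random((500, d))
        # Euclidean distances from a random query point to all samples.
        q = rng.random(d)
        dist = np.linalg.norm(X - q, axis=1)
        # Relative contrast: how much farther the farthest point is than the nearest.
        contrast = (dist.max() - dist.min()) / dist.min()
        print(f"d={d:4d}  relative contrast = {contrast:.3f}")

In dimension 2 the farthest point is typically several times farther away than the nearest one; in dimension 1000 the two distances differ by only a small fraction, which is exactly what makes nearest-neighbour discrimination fragile in high dimension.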
Most data analysis tools are not robust to these phenomena. Their performance collapses when the dimension of the data space increases, in particular when the amount of data available for learning is limited.
After a discussion of phenomena related to the curse of dimensionality, this talk will cover feature selection and manifold learning, two approaches to fighting its consequences. Feature selection consists in selecting some of the variables/features available in the dataset according to a relevance criterion; filter, wrapper and embedded methods will be covered (a simple filter method is sketched below). Manifold learning consists in mapping high-dimensional data to a lower-dimensional representation while preserving some topology, distance or information criterion. Such nonlinear projection methods may be used both for dimensionality reduction and for the visualization of data when the manifold dimension is restricted to 2 or 3.
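
As a concrete illustration of a filter method (a hypothetical sketch, not taken from the talk; the function filter_select and its parameters are ours), the Python code below ranks features by the absolute Pearson correlation between each feature and the target, and keeps the top k:

    import numpy as np

    def filter_select(X, y, k):
        """Rank features by |Pearson correlation| with the target; keep the top k."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        corr = (Xc * yc[:, None]).sum(axis=0) / (
            np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
        )
        return np.argsort(-np.abs(corr))[:k]

    # Toy data: only the first two of ten features carry signal.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)
    print(filter_select(X, y, k=2))   # typically selects features 0 and 1

For manifold learning, a minimal baseline is classical (Torgerson) metric MDS; it is linear, unlike the nonlinear projection methods the talk covers, but it shows the shared principle of embedding data so that pairwise distances are preserved. Again a sketch under our own naming, not the speaker's method:

    import numpy as np

    def classical_mds(X, dim=2):
        """Embed points so pairwise Euclidean distances are preserved
        as well as possible in `dim` dimensions (classical MDS)."""
        n = X.shape[0]
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
        J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
        B = -0.5 * J @ D2 @ J                                # double-centered Gram matrix
        w, V = np.linalg.eigh(B)                             # eigenvalues ascending
        idx = np.argsort(w)[::-1][:dim]                      # keep the largest
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

Filter methods such as the one above are cheap and model-agnostic; wrapper and embedded methods trade more computation for relevance criteria tied to the final model.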

Biography of the Keynote Speaker

Michel Verleysen was born in 1965 in Belgium. He received the M.S. and Ph.D. degrees in electrical engineering from the Université catholique de Louvain (Belgium) in 1987 and 1992, respectively. He was an invited professor at the Swiss E.P.F.L. (Ecole Polytechnique Fédérale de Lausanne, Switzerland) in 1992, at the Université d'Evry Val d'Essonne (France) in 2001, and at the Université Paris I Panthéon-Sorbonne from 2002 to 2009. He is now Professor at the Université catholique de Louvain, and Honorary Research Director of the Belgian F.N.R.S. (National Fund for Scientific Research). He is editor-in-chief of the Neural Processing Letters journal, chairman of the annual ESANN conference (European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning), past associate editor of the IEEE Transactions on Neural Networks journal, and a member of the editorial board and program committee of several journals and conferences on neural networks and learning. He is the author or co-author of more than 200 scientific papers in international journals and books, or communications to conferences with a reviewing committee. He is the co-author of a scientific popularization book on artificial neural networks in the series "Que Sais-Je?", in French, and of the "Nonlinear Dimensionality Reduction" book published by Springer in 2007. His research interests include machine learning, artificial neural networks, self-organization, time-series forecasting, nonlinear statistics, adaptive signal processing, and high-dimensional data analysis.