The goal of this tutorial is to present, through simple and insightful geometric arguments, a recent family of algorithms built around the elegant notion of projections onto convex sets. Instead of minimizing a single loss function over the given set of training data, a different path is followed. Measurements are received sequentially in time, and each measurement point defines a region in which the solution lies with high probability. The main goal of this methodology is therefore the most natural one: to locate a point in the intersection of all these data-defined convex regions. The tool employed is that of *projections*; hence all algorithms exhibit a linear computational dependence on the number of parameters to be estimated. After all, projections are inner products.
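To make the geometry concrete: in a regression setting, each training pair (x, y) can define a hyperslab {w : |y − xᵀw| ≤ ε}, and its projection operator has a simple closed form whose cost is one inner product plus a scaled vector addition. The following is a minimal sketch (in Python/NumPy rather than the tutorial's MATLAB; the function name and the tolerance ε are illustrative choices, not the tutorial's definitive notation):

```python
import numpy as np

def project_onto_hyperslab(w, x, y, eps):
    """Euclidean projection of w onto the hyperslab S = {v : |y - x.v| <= eps}.

    Each training pair (x, y) defines such a slab; computing the projection
    costs one inner product plus a scaled vector addition, i.e. O(dim) work.
    """
    r = y - x @ w                     # signed residual of w on this pair
    nx2 = x @ x                       # squared norm of x
    if r > eps:                       # w lies on one side of the slab
        return w + ((r - eps) / nx2) * x
    if r < -eps:                      # w lies on the other side
        return w + ((r + eps) / nx2) * x
    return w                          # already inside: projection is the identity
```

If w already satisfies the pair within the tolerance, the projection leaves it untouched; otherwise w is moved exactly onto the nearest face of the slab.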

The way the theory is developed allows a *single recursion* to describe the most general formulation of both classification and regression tasks. All that is needed, each time the problem changes, is to modify accordingly the respective projection operator involved in the updating recursion. Moreover, if convex constraints are present, all that is needed is to perform an extra projection per time recursion for each of the available constraints, thus retaining the linear dependence of the computations on the number of free parameters. Finally, since only inner products are involved, the methodology lends itself readily to operation in reproducing kernel Hilbert spaces (RKHS), and hence to treating nonlinear tasks.
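A sketch of what such a single recursion can look like for regression with hyperslabs is given below (Python rather than MATLAB; the step size `mu`, the equal weighting of the projections, and the ℓ₂-ball constraint are illustrative assumptions, not the tutorial's definitive settings):

```python
import numpy as np

def proj_slab(w, x, y, eps):
    """Projection onto the data-defined hyperslab {v : |y - x.v| <= eps}."""
    r = y - x @ w
    if abs(r) <= eps:
        return w
    shift = r - eps if r > eps else r + eps
    return w + (shift / (x @ x)) * x

def apsm_step(w, slabs, eps, mu=1.0, radius=None):
    """One pass of the single updating recursion: move toward the average of
    the projections onto the current data-defined slabs, then perform one
    extra projection per convex constraint (here an l2 ball, purely for
    illustration) -- still only inner products and vector sums."""
    avg = np.mean([proj_slab(w, x, y, eps) for x, y in slabs], axis=0)
    w = w + mu * (avg - w)                   # concurrent-projection step
    if radius is not None:                   # one extra projection per constraint
        nrm = np.linalg.norm(w)
        if nrm > radius:
            w = (radius / nrm) * w           # projection onto the constraint ball
    return w
```

Changing the task then amounts to swapping `proj_slab` for the projection operator of the new data-defined convex regions, while the outer recursion stays the same.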

The way this tutorial will evolve is to present the different “faces” that the updating recursion takes in the context of specific applications:

- The unconstrained task will be demonstrated in the context of classification as well as channel estimation/equalization. For each of these problems, one has to “learn” how to build the appropriate convex region that defines the set of possible solutions.

- The constrained case will be demonstrated in the context of robust beamforming, where not only linear but also more complex constraints will be presented.

- Sparsity-aware learning, in the context of compressive sensing, will then be discussed. This is also a constrained problem, where the constraint is typically the ℓ₁ norm. However, by exploiting the potential of our methodology, we will see that alternatives to the ℓ₁ norm, which are more appropriate for adaptive learning, can also be accommodated. The computational dependence still remains linear.

- Finally, the notion of an RKHS will be presented, and the potential of our methodology to operate in such spaces, in order to tackle nonlinear tasks elegantly, will be discussed. Working in such high-dimensional spaces is still guided by the single recursive equation mentioned before; the projection operator is simply adjusted to account for the new space in which the activity takes place. The linear computational dependence is not affected.
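To give a flavor of the sparsity-aware bullet above, a standard sort-based Euclidean projection onto the ℓ₁ ball (in the spirit of Duchi et al.) is one concrete constraint projection the recursion can absorb. The sketch below is again Python rather than MATLAB, and the ball radius `z` is illustrative; the tutorial itself discusses weighted alternatives better suited to adaptive learning, and the sort here costs O(d log d) rather than strictly linear time:

```python
import numpy as np

def project_onto_l1_ball(v, z):
    """Euclidean projection of v onto the l1 ball {w : ||w||_1 <= z},
    via the classical sort-and-threshold routine."""
    if np.abs(v).sum() <= z:
        return v.copy()                       # already inside the ball
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u[rho] * (rho + 1) > css[rho] - z
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The result is a soft-thresholded copy of the input, which is exactly why such constraint projections promote sparse solutions.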

Although the theory is accompanied by strong convergence results, developed for each of the cases mentioned above, we will not delve into such mathematical details. The goal is to present the algorithmic family around its main concepts, using MATLAB examples for each of the cases, in order to demonstrate how to tune the involved “trimming” parameters. The MATLAB code will be given to the participants.

**Reference**

S. Theodoridis, K. Slavakis, and I. Yamada, “Adaptive Learning in a World of Projections,” IEEE Signal Processing Magazine, vol. 28, no. 1, pp. 97–123, January 2011.

**Sergios Theodoridis** is currently Professor of Signal Processing and Communications in the Department of Informatics and Telecommunications of the University of Athens. His research interests lie in the areas of adaptive algorithms and communications, machine learning and pattern recognition, and signal processing for audio processing and retrieval. He is the co-editor of the book “Efficient Algorithms for Signal Processing and System Identification”, Prentice Hall, 1993, the co-author of the best-selling book “Pattern Recognition”, Academic Press, 4th ed., 2008, the co-author of the book “Introduction to Pattern Recognition: A MATLAB Approach”, Academic Press, 2009, and the co-author of three books in Greek, two of them for the Greek Open University.

He is the co-author of six papers that have received best paper awards, including the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award. He has served as an IEEE Signal Processing Society Distinguished Lecturer.

He was the general chairman of EUSIPCO-98, the Technical Program co-chair for ISCAS-2006, co-chairman and co-founder of CIP-2008, and co-chairman of CIP-2010. He has served as President of the European Association for Signal Processing (EURASIP) and as a member of the Board of Governors of the IEEE CAS Society. He currently serves as a member of the Board of Governors (Member-at-Large) of the IEEE SP Society.

He has served as a member of the Greek National Council for Research and Technology, and he was Chairman of the SP advisory committee for the Edinburgh Research Partnership (ERP). He has served as vice chairman of the Greek Pedagogical Institute, and he was for four years a member of the Board of Directors of COSMOTE (the Greek mobile phone operating company). He is a Fellow of IET, a Corresponding Fellow of RSE, a Fellow of EURASIP, and a Fellow of IEEE.