Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006
We address the problem of learning a kernel for a given supervised learning task. Our approach consists in searching within the convex hull of a prescribed set of basic kernels for one which minimizes a convex regularization functional. A unique feature of this approach compared to others in the literature is that the number of basic kernels can be infinite. We only require that they are continuously parameterized. For example, the basic kernels could be isotropic Gaussians with variance in a prescribed interval or even Gaussians parameterized by multiple continuous parameters. Our work builds upon a formulation involving a minimax optimization problem and a recently proposed greedy algorithm for learning the kernel. Although this optimization problem is not convex, it belongs to the larger class of DC (difference of convex functions) programs. Therefore, we apply recent results from DC optimization theory to create a new algorithm for learning the kernel. Our experimental results on benchmark data sets show that this algorithm outperforms a previously proposed method.
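The kernel class described in the abstract can be illustrated with a minimal sketch. The snippet below forms a convex combination of isotropic Gaussian basic kernels whose variances lie in a prescribed interval; it is only an illustration of the hypothesis space, not the paper's algorithm: the variance grid and mixing weights here are hypothetical, whereas the paper treats a continuous parameterization and learns the combination by DC optimization.

```python
import math

def gaussian_kernel(x, y, sigma):
    # Isotropic Gaussian basic kernel with bandwidth sigma:
    # K_sigma(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def convex_combination_kernel(x, y, sigmas, weights):
    # A kernel in the convex hull of the basic kernels:
    # K(x, y) = sum_i w_i * K_{sigma_i}(x, y), with w_i >= 0 and sum_i w_i = 1.
    # Any such combination is itself a valid (positive semidefinite) kernel.
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * gaussian_kernel(x, y, s) for s, w in zip(sigmas, weights))

# Discretized stand-in for variances in a prescribed interval [0.5, 2.0];
# the weights are illustrative placeholders for what the method would learn.
sigmas = [0.5, 1.0, 2.0]
weights = [0.2, 0.5, 0.3]
x, y = (0.0, 0.0), (1.0, 1.0)
k = convex_combination_kernel(x, y, sigmas, weights)
```

Since each Gaussian basic kernel satisfies K(x, x) = 1, any convex combination does as well, and off-diagonal values stay in (0, 1].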