Approximation Properties of Reproducing Kernels
ABSTRACT
Reproducing kernel Hilbert spaces (RKHSs) play an important role in machine learning methods such as kernel mean embeddings and regularized kernel learning, including support vector machines (SVMs). A key aspect for understanding these methods is the approximation properties of RKHSs. In the first part of this talk I will review some approximation results for generic kernels, illustrate them for the special case of Sobolev kernels, and finally discuss the case of Gaussian RBF kernels. In the second part, I will explore the so far mostly unexploited flexibility of kernels: here, I will show that using a simple sum construction for locally defined kernels makes it possible to quickly train SVMs even on millions of samples. Furthermore, I will discuss a class of kernels whose structure mimics parts of deep neural network architectures.
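The sum construction for locally defined kernels mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the construction from the talk: the Voronoi partition, the Gaussian base kernel, and all function names are illustrative assumptions. The point is that summing kernels supported on disjoint cells yields a Gram matrix that is block-diagonal up to permutation, so a local SVM can be trained per cell independently.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def local_sum_kernel(X, Y, centers, gamma=1.0):
    # k(x, y) = sum_j 1[x in cell j] * 1[y in cell j] * k_rbf(x, y),
    # where the cells form a Voronoi partition induced by `centers`
    # (an illustrative choice of partition, not from the talk).
    # Points in different cells get kernel value 0, so the Gram
    # matrix is block-diagonal up to a permutation of the samples.
    cx = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    cy = np.argmin(((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    same_cell = (cx[:, None] == cy[None, :]).astype(float)
    return same_cell * gaussian_kernel(X, Y, gamma)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
centers = np.array([[-1.0, 0.0], [1.0, 0.0]])
K = local_sum_kernel(X, X, centers)
```

Since each summand is itself a kernel, the sum is again a positive semi-definite kernel, and an SVM solver only ever touches one block at a time.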