Dense kernel matrices arise frequently in scientific computing and machine learning applications. For large-scale problems, it is common to exploit the low-rank structure of kernel matrices to accelerate computation. Various fast algorithms have been developed in the past under certain assumptions on the dataset and the kernel function. In this talk, we consider the low-rank approximation problem in the most general setting. The proposed scheme selects a small subset of data points that accounts for both the geometry of the given data and the numerical stability of the resulting factorization. The entire procedure operates on the given data alone and requires neither access to the full kernel matrix nor points outside the given data. The algorithm scales as O(r²(m + n)) for computing a rank-r approximation to an m × n kernel matrix. The proposed method outperforms existing state-of-the-art methods on a variety of kernel functions and on datasets ranging from three dimensions to over a hundred dimensions.
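The abstract does not specify the selection rule, so as a minimal sketch of the general idea only, here is a Nyström-style rank-r factorization built from r landmark points. The random landmark choice, the RBF kernel, and all function names below are illustrative assumptions; the talk's method selects points adaptively for geometry and stability rather than at random.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    # (an assumed kernel choice for illustration).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, Y, r, gamma=1.0, seed=0):
    # Pick r landmark points from X. Random selection here; the talk's
    # scheme instead picks a subset accounting for data geometry and
    # the numerical stability of the factorization.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=r, replace=False)
    L = X[idx]
    C = rbf_kernel(X, L, gamma)   # m x r  (r columns of the kernel)
    W = rbf_kernel(L, L, gamma)   # r x r  core matrix
    R = rbf_kernel(L, Y, gamma)   # r x n  (r rows of the kernel)
    # Rank-r factorization K ≈ C @ pinv(W) @ R; only O(r(m + n) + r^2)
    # kernel entries are ever formed, never the full m x n matrix.
    return C, np.linalg.pinv(W), R

# Usage: compare against the exact kernel matrix on a small example.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
Y = rng.standard_normal((150, 3))
C, Winv, R = nystrom_approx(X, Y, r=50, gamma=0.5)
K = rbf_kernel(X, Y, gamma=0.5)
err = np.linalg.norm(K - C @ Winv @ R) / np.linalg.norm(K)
print(err)
```

The factors C and R cost O(r(m + n)) kernel evaluations, and the rank-r linear algebra dominates the remaining work, consistent with the O(r²(m + n)) scaling stated in the abstract.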
Join URL: https://uci.zoom.us/j/93282432501