Speaker: Jonathan Siegel

Institution: The Pennsylvania State University

Time: Monday, March 15, 2021 - 4:00pm to 5:00pm

Host:

Location: https://uci.zoom.us/j/98883924034

We consider the problem of approximating high-dimensional functions using shallow neural networks, and more generally by sparse linear combinations of elements of a dictionary. We begin by introducing natural spaces of functions which can be efficiently approximated in this way. Then, we derive the metric entropy of the unit balls in these spaces, which allows us to calculate optimal approximation rates for approximation by shallow neural networks. Next, we show that higher approximation rates can be obtained by further restricting the function class under consideration. In particular, on a restrictive but natural space of functions, shallow networks with ReLU$^k$ activation achieve an approximation rate of $O(n^{-(k+1)})$ in every dimension. Finally, we discuss the connections between this surprising result and the finite element method.
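
As a rough sketch of the objects in the abstract (the symbols $f_n$, $a_i$, $\omega_i$, and $b_i$ are illustrative and not taken from the talk), an $n$-term shallow network with ReLU$^k$ activation has the form

$$f_n(x) \;=\; \sum_{i=1}^{n} a_i\,\mathrm{ReLU}^k(\omega_i \cdot x + b_i), \qquad \mathrm{ReLU}^k(t) = \max(0,t)^k,$$

and the result mentioned above asserts that, for targets $f$ in the restricted function class, the best such approximant satisfies $\inf_{f_n} \|f - f_n\| = O(n^{-(k+1)})$, with the rate independent of the input dimension.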
