Speaker: 

Rayan Saab

Institution: 

UCSD

Time: 

Monday, May 2, 2022 - 4:00pm

Host: 

Location: 

Zoom - https://uci.zoom.us/j/97796361534

Neural networks are highly non-linear functions, often parametrized by a staggering number of parameters called weights. They have been a subject of intense research, due in large part to their remarkable success in a wide range of application areas. Miniaturizing these networks and implementing them in hardware is a direction of research fueled by practical need that also connects to interesting mathematical problems. For example, by quantizing, i.e., replacing the weights of a neural network with quantized (e.g., binary) counterparts, one can attain massive savings in cost, computation time, memory, and power consumption. Of course, one wishes to attain these savings while preserving the action of the function on domains of interest. We propose a new data-driven and computationally efficient method for quantizing the weights of already trained neural networks. We prove that this method has favorable error guarantees for a single-layer neural network (or, alternatively, for quantizing the first layer of a multi-layer network) when the training data are drawn from appropriate distributions. We also discuss extensions and provide results of numerical experiments on large multi-layer networks to illustrate the performance of our methods. Time permitting, we will also discuss open problems and related areas of research.
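To make the idea of data-driven weight quantization concrete, here is a minimal NumPy sketch. It is not the algorithm presented in the talk, only an illustration of the general principle from the abstract: the weights of a single neuron are replaced one at a time by elements of a small alphabet, chosen greedily so that the layer's output on sample data stays close to that of the original weights. The function name quantize_neuron, the ternary alphabet, and the toy data are assumptions made purely for illustration.

```python
import numpy as np

def quantize_neuron(w, X, alphabet):
    """Greedily quantize one neuron's weights so that X @ q stays close to X @ w.

    w        : (d,) real weights of a single neuron
    X        : (m, d) sample inputs (m data points, d features)
    alphabet : 1-D array of allowed quantized values, e.g. a scaled {-1, 0, +1}
    """
    d = w.shape[0]
    q = np.zeros(d)
    u = np.zeros(X.shape[0])  # running residual: sum_{j<t} (w_j - q_j) X[:, j]
    for t in range(d):
        # pick the alphabet element that best cancels the accumulated error
        candidates = u[:, None] + (w[t] - alphabet[None, :]) * X[:, t:t + 1]
        best = np.argmin(np.linalg.norm(candidates, axis=0))
        q[t] = alphabet[best]
        u = candidates[:, best]
    return q

# toy usage: quantize a random single-layer neuron to a ternary alphabet
rng = np.random.default_rng(0)
d, m = 64, 256
w = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(m, d))
alphabet = np.max(np.abs(w)) * np.array([-1.0, 0.0, 1.0])

q = quantize_neuron(w, X, alphabet)
rel_err = np.linalg.norm(X @ w - X @ q) / np.linalg.norm(X @ w)
print(f"relative error of the quantized neuron on the sample data: {rel_err:.3f}")
```

The point of the sketch is the contrast with naive rounding: the quantized value chosen at each step depends on the data and on the error accumulated so far, which is what "data-driven" means in this context; the error guarantees and the precise scheme discussed in the talk may differ.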