Speaker: Bao Wang

Institution: UCLA

Time: Monday, May 14, 2018 - 4:00pm to 5:00pm

Location: RH 306

First, I will present the Laplacian smoothing gradient descent recently proposed by Prof. Stan Osher. We show that when applied to a variety of machine learning models, including softmax regression, convolutional neural nets, generative adversarial nets, and deep reinforcement learning, this very simple surrogate for gradient descent can dramatically reduce the variance and improve generalization accuracy. The new algorithm, which depends on a single nonnegative parameter, tends to avoid sharp local minima when applied to non-convex minimization; instead, it seeks somewhat flatter local (and often global) minima. The method only involves preconditioning the gradient by the inverse of a positive definite tri-diagonal matrix. The motivation comes from the theory of Hamilton-Jacobi partial differential equations, which shows that the new algorithm is almost the same as doing gradient descent on a new function that (a) has the same global minima as the original function and (b) is "more convex".

Second, I will talk about modeling, simulation, and experiments on the micro-encapsulation of droplets. This is joint work with the groups of Professors Andrea Bertozzi, Dino Di Carlo, and Stan Osher.
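The following is a minimal sketch of the preconditioning step from the first part of the talk, based only on the description above (a gradient preconditioned by the inverse of a positive definite tri-diagonal matrix). The specific choice A = I + sigma*L with a periodic one-dimensional discrete Laplacian L, the FFT-based solve, and the names laplacian_smoothed_step, sigma, and lr are assumptions made for illustration, not necessarily the speaker's exact formulation.

import numpy as np

def laplacian_smoothed_step(w, grad, lr=0.1, sigma=1.0):
    # Precondition the gradient by A^{-1}, where A = I + sigma * L and L is
    # the 1D discrete Laplacian with periodic boundary (an assumed choice):
    # A is tri-diagonal up to corner entries, symmetric, and positive definite.
    # Because this A is circulant, A^{-1} grad can be computed with an FFT.
    # Setting sigma = 0 recovers ordinary gradient descent.
    n = w.size
    k = np.arange(n)
    eig = 1.0 + 2.0 * sigma * (1.0 - np.cos(2.0 * np.pi * k / n))  # eigenvalues of A
    v = np.real(np.fft.ifft(np.fft.fft(grad) / eig))               # solve A v = grad
    return w - lr * v

# Toy usage: noisy gradients of f(w) = 0.5 * ||w||^2.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
for _ in range(200):
    noisy_grad = w + 0.5 * rng.normal(size=w.size)
    w = laplacian_smoothed_step(w, noisy_grad, lr=0.2, sigma=2.0)
print("final ||w||:", np.linalg.norm(w))

In this sketch the smoothing acts along the flattened parameter vector, and larger values of the (assumed) parameter sigma damp the high-frequency components of the noisy gradient more strongly, which is one way to read the variance-reduction claim in the abstract.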