Speaker: Xue Feng

Institution: UCLA

Time: Monday, October 20, 2025 - 4:00pm to 5:00pm

Host:

Location: RH 306

The Jordan–Kinderlehrer–Otto (JKO) scheme is a powerful framework for iteratively solving Wasserstein gradient flows (WGFs), but each JKO subproblem is often computationally expensive. We propose a neural JKO solution operator that efficiently solves WGFs for a family of parameterized energy functionals. A key challenge is that training data typically consists of only one, or a few, initial densities rather than full trajectories. To address this, we introduce Learn-to-Evolve, an unsupervised algorithm that jointly learns the JKO operator and the trajectory data. The neural network being trained serves as an on-the-fly dynamic data generator: we alternate between generating trajectories with the current model and updating the operator with the newly produced data. This evolving dataset acts as natural data augmentation and improves generalization to unseen energies and initial conditions. This talk will present empirical results on accuracy and generalization across diverse benchmarks, together with convergence guarantees for the learning framework. I’ll also sketch extensions beyond the JKO setting to other iterative operators and close with an open discussion.
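To make the alternation concrete, here is a minimal toy sketch of the generate-then-update loop described above. All names are hypothetical and the setup is heavily simplified: the density lives on a 1D grid, the Wasserstein term in the JKO subproblem is replaced by a squared L2 distance, and the "neural operator" is a plain linear map refit by least squares. It illustrates the structure of the algorithm, not the authors' implementation.

```python
import numpy as np

# 1D grid discretization of a probability density (toy setting)
x = np.linspace(-3.0, 3.0, 64)
dx = x[1] - x[0]
tau = 0.1  # JKO time step

def jko_step(rho_k, a, n_iters=200, lr=0.05):
    """One simplified JKO subproblem for the energy
       F_a(rho) = \int rho * (a x^2 / 2) dx + \int rho log rho dx,
    with the W2 proximal term replaced by squared L2 (a stated
    simplification), solved by projected gradient descent."""
    rho = rho_k.copy()
    for _ in range(n_iters):
        grad = a * x**2 / 2 + np.log(rho + 1e-12) + 1.0 + (rho - rho_k) / tau
        rho = np.clip(rho - lr * grad, 1e-12, None)
        rho /= rho.sum() * dx  # renormalize to a probability density
    return rho

# Stand-in "operator" G_theta: a linear map instead of a neural network.
theta = np.eye(len(x))

def operator(rho):
    out = np.clip(theta @ rho, 1e-12, None)
    return out / (out.sum() * dx)

# Learn-to-Evolve alternation (sketch): generate trajectories with the
# current operator, relabel each state with an accurate JKO step, refit.
rho0 = np.exp(-x**2)
rho0 /= rho0.sum() * dx
for _ in range(3):
    # 1) generate a short trajectory with the current model
    traj = [rho0]
    for _ in range(4):
        traj.append(operator(traj[-1]))
    # 2) fit the operator to accurate JKO steps on the generated data
    inputs = np.stack(traj)
    targets = np.stack([jko_step(r, a=1.0) for r in traj])
    sol, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
    theta = sol.T  # lstsq solves inputs @ sol = targets
```

In the full method the least-squares refit would be a gradient-based update of the neural operator, and the relabeling targets come from solving the actual Wasserstein proximal problem; the alternation between trajectory generation and operator updates is the piece carried over from the abstract.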