Speaker: Yiwei Wang

Institution: UC Riverside

Time: Monday, January 12, 2026 - 4:00pm

Host:

Location: RH306

Many problems in physics, materials science, biology, and machine learning can be formulated as variational models, in which multiscale coupling and competition are encoded through an energy–dissipation law. Preserving this variational structure at the discrete level is crucial for accuracy and robustness, especially in long-time simulations. In this talk, I will present an energetic-variational, structure-preserving discretization framework for such models. The key idea is to design algorithms directly from the energy–dissipation law, rather than from strong- or weak-form PDE discretizations. Within this framework, we develop a memory-efficient, mesh-free neural-network discretization for gradient flows using a temporal-then-spatial discretization approach. As a representative example, I will discuss a neural-network-based Lagrangian method for generalized diffusions (Wasserstein-type gradient flows), which yields an efficient Lagrangian implementation of the celebrated Jordan–Kinderlehrer–Otto (JKO) scheme.
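For reference, the energy–dissipation law shared by such models can be written abstractly as below; the state variable $u$, free energy $\mathcal{E}$, and dissipation functional $\mathcal{D}$ are generic placeholders, not notation taken from the talk:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,\mathcal{E}[u(t)]
  \;=\; -\,\mathcal{D}\big[u(t),\,\partial_t u(t)\big]
  \;\le\; 0,
\qquad \mathcal{D} \ge 0.
```

A structure-preserving scheme in this sense is one whose discrete solution satisfies a discrete analogue of this inequality, which is what makes long-time simulations robust.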
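The JKO scheme mentioned at the end is the classical minimizing-movement time discretization of a Wasserstein gradient flow: with time step $\tau > 0$ and free energy $\mathcal{E}$,

```latex
\rho^{n+1} \;\in\; \operatorname*{arg\,min}_{\rho}\;
  \frac{1}{2\tau}\, W_2^2\!\left(\rho,\, \rho^{n}\right) \;+\; \mathcal{E}(\rho),
```

where $W_2$ denotes the 2-Wasserstein distance. A Lagrangian implementation replaces the minimization over densities $\rho$ by a minimization over maps (or particle positions) pushing $\rho^n$ forward, which is where a neural-network parameterization can enter.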
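A minimal sketch of what one temporal-then-spatial step of such a method could look like, assuming a particle (Monte Carlo) representation of $\rho^n$ and a network-parameterized transport map; all names here (`FlowMap`, `free_energy`, `jko_step`, the quadratic potential) are illustrative assumptions, not the speaker's implementation:

```python
# Sketch: one Lagrangian JKO step with a neural-network transport map.
# Hypothetical code, not the speaker's; names and the energy are placeholders.
import torch

class FlowMap(torch.nn.Module):
    """Neural network parameterizing the Lagrangian map x -> Phi(x)."""
    def __init__(self, dim=2, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, dim),
        )

    def forward(self, x):
        return x + self.net(x)  # perturbation of the identity map

def free_energy(y):
    """Placeholder free energy E(rho), estimated on particles y ~ rho.
    Here: potential energy for V(x) = |x|^2 / 2 (an illustrative choice)."""
    return 0.5 * (y ** 2).sum(dim=1).mean()

def jko_step(x, tau=0.05, inner_iters=200, lr=1e-3):
    """One JKO step: minimize  E|Phi(x) - x|^2 / (2 tau) + E(Phi(x))
    over the network parameters, with particles x sampled from rho^n."""
    phi = FlowMap(dim=x.shape[1])
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    for _ in range(inner_iters):
        opt.zero_grad()
        y = phi(x)
        # Monte Carlo estimate of the Wasserstein movement term + energy
        loss = ((y - x) ** 2).sum(dim=1).mean() / (2 * tau) + free_energy(y)
        loss.backward()
        opt.step()
    return phi(x).detach()

# Evolve a particle cloud through a few JKO steps.
x = torch.randn(1024, 2) * 2.0
for n in range(10):
    x = jko_step(x)
```

The squared-displacement term is the standard Lagrangian upper bound for $W_2^2(\rho^n, \Phi_\#\rho^n)$, so minimizing over the map parameters realizes one JKO step without ever discretizing space on a mesh.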