# What are my Mathematical Interests?

## Speaker:

## Institution:

## Time:

## Host:

## Location:

# TBA

## Speaker:

## Institution:

## Time:

## Host:

## Location:

# Transformer Meets Boundary Value Inverse Problems

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems. Real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and the reconstructed images. An effort is made to give a specific example of a fundamental but critical question: whether and how one can benefit from the theoretical structure of a mathematical problem to develop task-oriented and structure-conforming deep neural networks. Specifically, inspired by direct sampling methods for inverse problems, the 1D boundary data are preprocessed by a partial differential equation-based feature map to yield 2D harmonic extensions at different frequencies as different input channels. Then, by introducing a learnable non-local kernel, the approximation in direct sampling is recast as a modified attention mechanism. The proposed method is applied to electrical impedance tomography, a well-known severely ill-posed nonlinear inverse problem. The new method achieves superior accuracy over its predecessors and contemporary operator learners, and shows robustness with respect to noise.
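To make the analogy concrete, here is a minimal NumPy sketch of standard scaled dot-product attention, the building block the talk modifies. The softmax-normalized score matrix acts as a data-dependent non-local kernel averaging the value vectors; this is an illustrative sketch, not the talk's actual architecture, and the names `attention`, `queries`, `keys`, `values` are ours.

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    The row-stochastic weight matrix plays the role of a learnable
    non-local kernel averaging the value vectors (an illustration of
    the structural analogy with direct sampling, not the talk's model).
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ values

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))   # 16 tokens, 8 features
out = attention(x, x, x)           # self-attention
print(out.shape)                   # (16, 8)
```

Because each output row is a convex combination of the value rows, feeding constant values returns them unchanged, which is one quick sanity check on the kernel normalization.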

This research strengthens the insight that the attention mechanism, despite being invented for natural language processing tasks, offers great flexibility to be modified in conformity with a priori mathematical knowledge, ultimately leading to the design of more physics-compatible neural architectures.

This is a joint work with Ruchi Guo (UCI) and Shuhao Cao (University of Missouri-Kansas City).

# Optimal transport and the Monge-Ampère equation

## Speaker:

## Institution:

## Time:

## Host:

## Location:

The optimal transport problem asks: What is the cheapest way to transport goods (e.g. bread from bakeries) to desired locations (e.g. grocery stores)? Although simple to state, this problem is tricky to solve. Optimal transport is closely related to a nonlinear PDE called the Monge-Ampère equation, and important questions about optimal transport can be approached using this connection. In this talk we will discuss optimal transport, its connection to the Monge-Ampère equation, and some recent applications of optimal transport theory in geometric and functional inequalities and meteorology.
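A toy discrete instance (not from the talk) makes the "cheapest transport" question tangible: for unit masses on the real line with cost |x - y|, a classical fact is that the optimal plan is the monotone (sorted) matching. The function name and numbers below are our own illustration.

```python
def transport_cost_1d(sources, targets):
    """Minimal total |x - y| cost moving unit masses at `sources`
    onto unit masses at `targets` (equal counts).

    In one dimension, the monotone (sorted) matching is optimal for
    this cost, so the minimum is computed by pairing sorted lists.
    """
    assert len(sources) == len(targets)
    return sum(abs(s - t) for s, t in zip(sorted(sources), sorted(targets)))

# Bakeries at positions 0, 2, 5 supply grocery stores at 1, 3, 4:
print(transport_cost_1d([0, 2, 5], [1, 3, 4]))  # |0-1| + |2-3| + |5-4| = 3
```

In higher dimensions no such simple sorting argument exists, which is where the connection to the Monge-Ampère equation becomes essential.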

# Efficient Deep Neural Networks and a Deep Particle Method for PDEs

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

We introduce mathematical methods, such as sparsification and differentiable architecture search, for reducing the complexity of deep neural networks in the context of computer vision for mobile and IoT applications. We also describe applications in infectious disease prediction, and a deep learning and optimal transport (the deep particle) method for predicting invariant measures of stochastic dynamical systems arising in partial differential equation (PDE) modeling of transport in chaotic flows (e.g. rapid stirring of coffee and milk, raging forest fires in the wind).
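As a small illustration of sparsification (one of the complexity-reduction themes mentioned above), here is a magnitude-pruning sketch in NumPy that zeros the smallest-magnitude fraction of a weight matrix. This is a generic scheme for illustration only; the talk's sparsification methods may differ in detail, and the function name is ours.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of entries.

    Magnitude pruning is one common sparsification scheme (an
    illustrative stand-in, not necessarily the speaker's method).
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64))
w_sparse = magnitude_prune(w, 0.9)
print((w_sparse == 0).mean())  # fraction of zeroed weights, about 0.9
```

Storing only the surviving entries (e.g. in a compressed sparse format) is what yields the memory and compute savings on mobile and IoT hardware.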