Week of October 3, 2021

Mon Oct 4, 2021
12:00pm - Zoom - Probability and Analysis Webinar
Marina Iliopoulou - (University of Kent, UK)
Sharp L^p estimates for oscillatory integral operators of arbitrary signature

The restriction problem in harmonic analysis asks for L^p bounds on the Fourier transform of functions defined on curved surfaces. In this talk, we will present improved restriction estimates for hyperbolic paraboloids that depend on the signature of the paraboloids. These estimates still hold, and are sharp, in the variable-coefficient regime. This is joint work with Jonathan Hickman.
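
For orientation (this phrasing is a standard textbook formulation and is not taken from the talk itself), the problem is often stated through the dual extension operator of the surface; for a hyperbolic paraboloid the quadratic form $Q$ below has mixed signature, and the sharp range of exponents $p$ depends on that signature:

\[
  Ef(x) \;=\; \int_{[-1,1]^{n-1}} f(\xi)\, e^{\,i\,(x'\cdot\xi \,+\, x_n Q(\xi))}\, d\xi,
  \qquad
  Q(\xi) \;=\; \xi_1^2 + \cdots + \xi_m^2 - \xi_{m+1}^2 - \cdots - \xi_{n-1}^2,
\]
\[
  \text{and one asks for the sharp range of } p \text{ such that }\;
  \|Ef\|_{L^p(\mathbb{R}^n)} \;\lesssim\; \|f\|_{L^p([-1,1]^{n-1})}.
\]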

 

Zoom ID is available here:
https://sites.google.com/view/paw-seminar

4:00pm to 5:00pm - Online - Applied and Computational Mathematics
Shuhao Cao - (Washington University in St. Louis)
Galerkin Transformer

The Transformer in "Attention Is All You Need" is now THE ubiquitous architecture in every state-of-the-art model in Natural Language Processing (NLP), such as BERT. At its heart and soul is the "attention mechanism". We apply the attention mechanism for the first time to a data-driven operator learning problem related to partial differential equations. Inspired by the Fourier Neural Operator, which showed state-of-the-art performance in parametric PDE evaluation, we set out to explain the heuristics of the attention mechanism and to improve its efficacy. It is demonstrated that the widely accepted, "indispensable" softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without the softmax normalization, the approximation capacity of a linearized Transformer variant can be proved to be on par, layer-wise, with that of a Petrov-Galerkin projection. Some simple changes mimicking projections in Hilbert spaces are applied to the attention mechanism, and they help the final model achieve remarkable accuracy in operator learning tasks with unnormalized data, surpassing the evaluation accuracy of the classical Transformer applied directly by a factor of 100. Meanwhile, in all experiments, including the viscid Burgers' equation, an interface Darcy flow, an inverse interface coefficient identification problem, and Navier-Stokes flow in the turbulent regime, the newly proposed simple attention-based operator learner, the Galerkin Transformer, shows significant improvements in both speed and evaluation accuracy over its softmax-normalized counterparts, as well as over other linearizing variants such as Random Feature Attention and FAVOR+ in Performer. In traditional NLP benchmark problems such as IWSLT 14 De-En, the Galerkin projection-inspired tweaks in the attention-based encoder layers help the classic Transformer reach the baseline BLEU score much faster.

The code to replicate our results is available at https://github.com/scaomath/galerkin-transformer .
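
As a rough illustration of the softmax-free attention described above (a minimal sketch that assumes layer normalization on the keys and values and a 1/n scaling; it is not the reference implementation, which lives in the linked repository):

import torch
import torch.nn as nn

class GalerkinTypeAttention(nn.Module):
    """Sketch of a softmax-free, linearized attention head: keys and values
    are layer-normalized and K^T V is formed first, so the cost is linear in
    the sequence length n.  Illustrative only, not the repository's code."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.k_norm = nn.LayerNorm(d_model)
        self.v_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_model), where n is the number of grid points/tokens
        q = self.q_proj(x)
        k = self.k_norm(self.k_proj(x))
        v = self.v_norm(self.v_proj(x))
        n = x.shape[1]
        # Form the (d_model, d_model) matrix K^T V first, then apply it to Q:
        # O(n d^2) work instead of the O(n^2 d) of softmax attention.
        return q @ (k.transpose(-2, -1) @ v) / n

# usage: attn = GalerkinTypeAttention(96); y = attn(torch.randn(4, 512, 96))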

Join Zoom Meeting
https://uci.zoom.us/j/97895390415?pwd=eUpNazVCNThKek5vbisxeldSOXJEZz09

Meeting ID: 978 9539 0415
Passcode: 1004

 

Tue Oct 5, 2021
1:00pm - Zoom - Dynamical Systems
Anton Gorodetski - (UC Irvine)
A brief introduction to complex dynamics

We will go over the basic notions and objects in complex dynamics (normal families, Fatou and Julia sets, the Mandelbrot set) and consider some examples. The talk is a precursor to the talk by Michael Yampolsky (University of Toronto) scheduled for October 19.
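
As a concrete taste of one object mentioned above (purely illustrative and not part of the talk), the Mandelbrot set consists of the parameters c for which the orbit of 0 under z -> z^2 + c stays bounded; a minimal membership test:

def in_mandelbrot(c: complex, max_iter: int = 200, bound: float = 2.0) -> bool:
    """Return True if the orbit of 0 under z -> z^2 + c appears bounded
    (no escape within max_iter iterations), i.e. c seems to lie in the
    Mandelbrot set.  Illustrative sketch only."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False  # the orbit escapes, so c is outside the set
    return True

# usage: in_mandelbrot(0j)    -> True   (0 lies in the Mandelbrot set)
#        in_mandelbrot(1+0j)  -> False  (the orbit 0, 1, 2, 5, 26, ... escapes)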

4:00pm - NS2 1201 - Differential Geometry
Li-Sheng Tseng - (UC Irvine)
Symplectic flat bundle

Wed Oct 6, 2021
2:00pm to 3:00pm - 510R - Combinatorics and Probability
Anna Ma - (UCI)
Gaussian Spherical Tessellations and Learning Adaptively

Signed measurements of the form $y_i = \operatorname{sign}(\langle a_i, x \rangle)$ for $i \in [M]$ are ubiquitous in large-scale machine learning problems where the overarching task is to recover the unknown, unit-norm signal $x \in \mathbb{R}^d$. Oftentimes, measurements can be queried adaptively, for example based on a current approximation of $x$, so that only a subset of the $M$ measurements is needed. Geometrically, these measurements induce a spherical hyperplane tessellation of $\mathbb{R}^{d}$ in which one of the cells contains the unknown vector $x$. Motivated by this problem, in this talk we will present a geometric property of spherical hyperplane tessellations in $\mathbb{R}^{d}$. Under the assumption that the $a_i$ are Gaussian random vectors, we will show that with high probability there exists a subset of the hyperplanes, of cardinality on the order of $d\log(d)\log(M)$, such that the radius of the cell containing $x$ induced by these hyperplanes is bounded above by, up to constants, $d\log(d)\log(M)/M$. The work presented is joint work with Rayan Saab and Eric Lybrand.
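
To make the measurement model concrete, here is a small illustrative sketch (not from the talk; the function name is made up) that draws Gaussian vectors $a_i$ and records the signed measurements $y_i = \operatorname{sign}(\langle a_i, x \rangle)$; each hyperplane through the origin with normal $a_i$ cuts the sphere, and the sign pattern identifies the tessellation cell containing $x$:

import numpy as np

def signed_measurements(x, M, rng=None):
    """Draw M Gaussian measurement vectors a_i in R^d and return (A, y)
    with y_i = sign(<a_i, x>).  The rows of A are normals of hyperplanes
    through the origin; the sign pattern y singles out the cell of the
    induced spherical tessellation containing x.  Illustrative only."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    A = rng.standard_normal((M, d))  # Gaussian hyperplane normals a_i
    y = np.sign(A @ x)               # one-bit (signed) measurements
    return A, y

# usage: x = np.random.randn(50); x /= np.linalg.norm(x)
#        A, y = signed_measurements(x, M=1000)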

Thu Oct 7, 2021
9:00am to 10:00am - Zoom - Inverse Problems
Fioralba Cakoni - (Rutgers University)
Singularities Almost Always Scatter: Regularity Results for Non-scattering Inhomogeneities

https://sites.uci.edu/inverse/

11:00am - Zoom ID: 949 5980 5461. Password: the last four digits of the Zoom ID in reverse order - Harmonic Analysis
March Boedihardjo - (UCI)
Estimating average of function on Boolean cube without risk

The strong law of large numbers gives a method to estimate the average of a function on the Boolean cube so that the estimate is accurate with high probability, but there is still a small risk that it is inaccurate. I will present a polynomial-time method to estimate the averages of certain functions on the Boolean cube without any risk of being inaccurate.
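
For contrast with the risk-free method of the talk, the baseline the abstract alludes to is plain Monte Carlo sampling, sketched below (illustrative only; the majority function in the usage line is a made-up example):

import numpy as np

def monte_carlo_average(f, d, n_samples=10_000, rng=None):
    """Estimate the average of f over the Boolean cube {-1, 1}^d by uniform
    random sampling.  By the strong law of large numbers the estimate is
    accurate with high probability, but there is always a small risk that
    it is far off.  Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    samples = rng.choice([-1.0, 1.0], size=(n_samples, d))
    return float(np.mean([f(s) for s in samples]))

# usage (example function: majority on 11 bits):
#   est = monte_carlo_average(lambda s: float(np.sign(s.sum())), d=11)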