Week of May 26, 2024

Tue May 28, 2024
2:00pm to 5:45pm - ISEB 1010 - Differential Geometry - (Special Seminar)
Special Seminar in Geometric Analysis

Talk Schedule (each talk 30-35 minutes)

2:00 PM   Alex Mramor (University of Copenhagen)

2:45 PM   Kai-Wei Zhao (Notre Dame)

3:30 PM   Hongyi Sheng (UC San Diego)

4:15 PM   Tin Yau Tsang (New York University)

5:00 PM   Xiaolong Li (Wichita State University)


Titles/Abstracts

Speaker: Alex Mramor (University of Copenhagen)
Title: On the Unknottedness of Self Shrinkers
Abstract: The mean curvature flow, the natural analogue of the heat equation in submanifold geometry, often develops singularities; roughly speaking, these singularities are modeled on self shrinkers, surfaces that give rise to mean curvature flows moving by dilations. It happens that self shrinkers are minimal surfaces in a metric which, while poorly behaved, is Ricci positive in a certain sense, so it is natural to ask, for instance, what qualities shrinkers have in common with minimal surfaces in the round 3-sphere. Inspired by an old work of Lawson on such surfaces, in this talk we discuss some unknottedness results for self shrinkers in R^3, some of which are joint work with S. Wang.


Speaker: Kai-Wei Zhao (Notre Dame)
Title: Uniqueness of Tangent Flows at Infinity for Finite-Entropy Shortening Curves
Abstract: Curve shortening flow is, in the compact case, the gradient flow of the arc-length functional. It is the simplest geometric flow and a special case of mean curvature flow. The classification of ancient solutions under geometric conditions can be viewed as a parabolic analogue of the geometric Liouville theorem. Previous results rely crucially on the assumption that the curves are convex. In an ongoing joint project with Kyeongsu Choi, Donghwi Seo, and Weibo Su, we replace convexity with boundedness of the entropy, a measure of geometric complexity defined by Colding and Minicozzi. In this talk, we will prove that an ancient smooth curve shortening flow with finite entropy embedded in R^2 has a unique tangent flow at infinity. To this end, we show that its rescaled flows converge backward in time to a line with multiplicity m ≥ 3 exponentially fast in any compact region, unless the flow is a shrinking circle, a static line, a paper clip, or a translating grim reaper. In addition, we determine the exact numbers of tips, vertices, and inflection points of the curves at sufficiently negative times. Moreover, we will show the exponential growth rate of the graphical radius and the convergence of the vertex regions to grim reaper curves.


Speaker: Hongyi Sheng (UC San Diego)
Title: Localized Deformations and Gluing Constructions in General Relativity
Abstract: Localized deformations play an important role in gluing constructions in general relativity. In this talk, we will review some recent localized deformation theorems and their applications regarding rigidity and non-rigidity type results.


Speaker: Tin Yau Tsang (New York University)
Title: Mass for the Large and the Small
Abstract: The positive mass theorem concerns the mass of large manifolds. In this talk, we will first review the proofs by Schoen and Yau, then the proof by Witten. Combining these with their recent generalisations turns out to help us understand the mass of small manifolds.


Speaker: Xiaolong Li (Wichita State University)
Title: Recent Developments on the Curvature Operator of the Second Kind
Abstract: In this talk, I will first introduce the curvature operator of the second kind and talk about the resolution of Nishikawa's conjecture by Cao-Gursky-Tran, myself, and Nienhaus-Petersen-Wink. Then I will talk about some ongoing research with Gursky concerning negative lower bounds of the curvature operator of the second kind. Along the way, I will mention some interesting problems.

3:00pm to 4:00pm - RH 440R - Machine Learning
Eric Liu - (SDSU and UCI)
Enhancing Model Efficiency: Applications of Tensor Train Decomposition in Machine Learning

The application of Tensor Train (TT) decomposition in machine learning models provides a promising approach to addressing challenges related to model size and computational complexity. TT decomposition, by breaking down high-dimensional weight tensors into smaller, more manageable tensor cores, allows for significant reductions in model size while maintaining performance. This presentation will explore how TT decomposition can be effectively used in different types of models.

TT decomposition is adopted differently in recurrent models, Convolutional Neural Networks (CNN), and Binary Neural Networks (BNN). In recurrent models like Long Short-Term Memory (LSTM), large weight matrices are transformed into smaller, manageable tensor cores, reducing the number of parameters and computational load. For CNNs, TT decomposition targets the convolutional layers, transforming convolutional filters into tensor cores to preserve spatial structure while significantly reducing parameters. In BNNs, TT decomposition is combined with weight binarization, resulting in extremely compact models that retain essential information for accurate predictions even with minimal computational power and memory.
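The core compression step described above can be sketched in a few lines of NumPy. The following is a minimal, illustrative TT-SVD: it factors a small weight tensor into a chain of 3-way tensor cores by successive truncated SVDs and then reconstructs an approximation. The function names, shapes, and rank choice here are ours for illustration, not taken from the talk.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """TT-SVD: factor a d-way tensor into a list of 3-way TT cores
    via successive truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_next = min(max_rank, len(S))
        # Core k has shape (previous rank, mode size, next rank).
        cores.append(U[:, :r_next].reshape(rank, shape[k], r_next))
        # Fold the remaining factors into the matrix for the next step.
        mat = (S[:r_next, None] * Vt[:r_next]).reshape(r_next * shape[k + 1], -1)
        rank = r_next
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

For example, a 256x256 weight matrix reshaped into a (16, 16, 16, 16) tensor and decomposed with rank r stores roughly 32r + 32r^2 numbers in its cores instead of 65,536 entries, which is where the parameter savings in LSTM and CNN layers come from.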

The primary aim of this presentation is to explore the theoretical foundations and practical applications of TT decomposition, demonstrating how this technique optimizes various machine learning models. The findings suggest that TT decomposition can greatly enhance model efficiency and scalability, making it a valuable tool for a wide range of applications.

Wed May 29, 2024
2:00pm to 3:00pm - 510R Rowland Hall - Combinatorics and Probability
Dylan King - (Caltech)
Online Ordered Ramsey Numbers

The Ramsey number of graphs G1 and G2, the smallest N so that any red/blue coloring of the edges of the N-clique contains either a red G1 or a blue G2, is one of the most studied notions in combinatorics. We study a related process called the online ordered Ramsey game, played between two players, Builder and Painter. Builder has two graphs, G1 and G2, each with a specific ordering on its vertices. Builder starts with an edgeless graph on an ordered vertex set (the integers) and attempts to build either an ordered red copy of G1 or an ordered blue copy of G2 by adding one edge at a time. When Builder adds an edge, Painter must immediately color it red or blue. Ramsey’s Theorem tells us that Builder can eventually win; Builder’s objective is to win in the minimum number of turns, while Painter’s objective is to delay the win as long as possible. The *online ordered Ramsey number* of G1 and G2 is the number of turns taken when both players play optimally.

Online ordered Ramsey numbers were introduced by Pérez-Giménez, Prałat, and West in 2021. In this talk we will discuss their relation to other types of Ramsey numbers and present some results on the case when at least one of G1, G2 is sparse.

(Joint work with Emily Heath, Grace McCourt, Hannah Sheats, and Justin Wisby)

Thu May 30, 2024
9:00am to 9:50am - Zoom - Inverse Problems
Tony Liimatainen - (University of Helsinki)
The general Calderón problem on Riemannian surfaces and inverse problems for minimal surfaces

https://sites.uci.edu/inverse/

1:00pm to 1:50pm - RH 510R - Algebra
Ariel Rosenfield - (UCI)
Enriched Grothendieck topologies under change of base

In the presence of a monoidal right adjoint G : V -> U between locally finitely presentable symmetric monoidal categories, we examine the behavior of V-Grothendieck topologies on a V-category C, and that of their constituent covering sieves, under the change of enriching category induced by G. We prove in particular that when G is faithful and conservative, any V-Grothendieck topology on C corresponds uniquely to a U-Grothendieck topology on G_*C, and that when G is fully faithful, base change commutes with enriched sheafification in the sense of Borceux-Quinteiro.

4:00pm to 5:00pm - RH 340P - Applied and Computational Mathematics
Linh Huynh - (Dartmouth)
Inference on Large Random Networks: A Statistical Mechanics Perspective via a Modern Hopfield Network Model

In this talk, I will discuss one of my research themes: inferring microscopic mechanisms that give rise to macroscopic behaviors of systems. This theme spans a series of joint works with different collaborators, but I will focus on my most recent work with Ethan Levien. Motivated by applications in pattern recognition, image classification, network reconstruction, etc., we consider a modern Hopfield network (a spin glass model) whose configuration space is R^N. We randomly select M = exp(alpha*N) configurations, independent and identically distributed according to the standard Gaussian distribution, and call these configurations “patterns”. Given a vector x^0 that is sufficiently close to a typical pattern, say xi^1, we analyze the convergence in probability of x^0 to xi^1. We will also discuss the local minima of the corresponding energy landscape.
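For intuition, the retrieval behavior described in the abstract can be sketched with the softmax-attention update commonly used for continuous modern Hopfield networks (in the style of Ramsauer et al.); the talk's exact model, energy, and dynamics may differ, and the parameter values below are illustrative assumptions.

```python
import numpy as np

def hopfield_retrieve(patterns, x0, beta=0.5, steps=3):
    """Iterate a softmax-attention Hopfield update: each step replaces x
    with a convex combination of stored patterns, weighted by similarity."""
    X = np.asarray(patterns, dtype=float)   # shape (M, N), rows are patterns
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        logits = beta * X @ x
        logits -= logits.max()              # numerical stability
        w = np.exp(logits)
        w /= w.sum()
        x = X.T @ w                         # attention-weighted pattern mix
    return x
```

Starting from a vector x^0 near one of M i.i.d. Gaussian patterns in R^N, a few iterations concentrate nearly all the softmax weight on that pattern, so the iterate converges to it, mirroring the retrieval statement in the abstract.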

Fri May 31, 2024
1:00pm - RH 114 - Graduate Seminar
Zhiqin Lu - (UCI)
Hearing the shape of a triangle