Distance theorems and the smallest singular value of random matrices

Speaker: 

Manuel Fernandez

Institution: 

USC

Time: 

Monday, November 10, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

In recent years, significant progress has been made in our understanding of the quantitative behavior of random matrices. One research direction of continued interest has been the estimation of the smallest singular value. As a measure of a matrix's "invertibility," quantitative bounds on the smallest singular value are important for a variety of tasks, including establishing the circular law for non-Hermitian random matrices and proving the stability of numerical methods. In view of the universality phenomenon of random matrices, one tries to prove these estimates for more general matrix ensembles satisfying weaker assumptions.

In the geometric approach to proving smallest singular value estimates, a key ingredient is a "distance theorem," which is a small-ball estimate for the distance between a random vector and a subspace. In this talk we will discuss a new distance theorem and its application to proving smallest singular value estimates for inhomogeneous rectangular random matrices with independent entries. We will also discuss how the recent resolution of the Slicing Conjecture, due to Klartag, Lehec, and Guan, implies smallest singular value estimates for a number of log-concave random matrix ensembles. In some cases, independent entries are no longer necessary!
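To illustrate the geometric approach, here is a small numerical sketch (not from the talk) of the standard identity behind it: if a_i are the rows of an invertible matrix A and H_i is the span of the other rows, then dist(a_i, H_i) = 1/||A^{-1} e_i||, which via the Frobenius norm gives the lower bound sigma_min(A) >= min_i dist(a_i, H_i) / sqrt(n). So a small-ball estimate on each distance controls the smallest singular value.

```python
import numpy as np

# Sketch: verify dist(a_i, H_i) = 1 / ||A^{-1} e_i|| on a Gaussian matrix.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = np.linalg.inv(A)

dists = []
for i in range(n):
    others = np.delete(A, i, axis=0)                    # rows spanning H_i
    coef, *_ = np.linalg.lstsq(others.T, A[i], rcond=None)
    dists.append(np.linalg.norm(A[i] - others.T @ coef))  # distance to H_i
dists = np.array(dists)

print(np.allclose(dists, 1.0 / np.linalg.norm(B, axis=0)))  # the identity

# ||A^{-1}||_op <= ||A^{-1}||_F yields sigma_min >= min_i dist_i / sqrt(n)
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
print(sigma_min >= dists.min() / np.sqrt(n))
```

The i-th column of A^{-1} is orthogonal to every row of A except the i-th, which is what makes the identity work; distance theorems then bound the probability that any dist(a_i, H_i) is small.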

Variational Inference: Posterior Threshold Improves Network Clustering Accuracy in Sparse Regimes

Speaker: 

Can M. Le

Institution: 

UC Davis

Time: 

Monday, November 17, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

Variational inference has been widely used in the machine learning literature to fit various Bayesian models. In network analysis, this method has been successfully applied to solve community detection problems. Although these results are promising, their theoretical support is limited to relatively dense networks, an assumption that may not hold for real networks. In addition, recent studies have shown that the variational loss surface has many saddle points, which can significantly impact its performance, especially when applied to sparse networks. This paper proposes a simple method to improve the variational inference approach by hard thresholding the posterior of the community assignment after each iteration. We show that the proposed method can accurately recover the true community labels, even when the average node degree of the network is bounded and the initialization is arbitrarily close to random guessing. An extensive numerical study further confirms the advantage of the proposed method over the classical variational inference and other algorithms.
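As a minimal illustration (a sketch, not the paper's code), the hard-thresholding step can be written as projecting each row of the n x K variational posterior onto the nearest one-hot community assignment after each update:

```python
import numpy as np

def hard_threshold(tau):
    """Replace each row of the n x K posterior matrix by the one-hot
    vector of its most likely community (ties broken by argmax)."""
    out = np.zeros_like(tau)
    out[np.arange(tau.shape[0]), tau.argmax(axis=1)] = 1.0
    return out

tau = np.array([[0.60, 0.40],
                [0.20, 0.80],
                [0.55, 0.45]])
print(hard_threshold(tau))
```

Applied after every variational update, this projection keeps the iterates at the vertices of the probability simplex rather than letting them drift toward uninformative interior points.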

From mixing time of Markov chains to Tracy-Widom law of inhomogeneous random matrices

Speaker: 

Guangyi Zou

Institution: 

UC Irvine

Time: 

Monday, October 13, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

Random matrix statistics appear across diverse fields and are now recognized as exhibiting universal behavior. However, even within random matrix theory itself, the mechanism underlying the universality of local eigenvalue statistics beyond the mean-field setting remains poorly understood.

In this talk, we consider symmetric and Hermitian random matrices whose entries are independent random variables with an arbitrary variance pattern. Under a novel Short-to-Long Mixing condition—sharp in the sense that it excludes any deterministic correction at the spectral edge—we establish GOE/GUE edge universality for such inhomogeneous random matrices, which may be sparse or far from the classical mean-field regime. This condition reduces the universality problem to verifying the mixing properties of Markov chains defined by the variance profile matrix. This talk is based on joint work with Dang-Zheng Liu.

Precise Error Rates for Computationally Efficient Testing

Speaker: 

Alex Wein

Institution: 

UC Davis

Time: 

Monday, November 24, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

We consider one of the most basic high-dimensional testing problems: that of detecting the presence of a rank-1 "spike" in a random Gaussian (GOE) matrix. When the spike has structure, such as sparsity, inherent statistical-computational tradeoffs are expected. I will discuss some precise results about the computational complexity, arguing that the so-called "linear spectral statistics" achieve the best possible tradeoff between type I and type II errors among all polynomial-time algorithms, even though an exponential-time algorithm can do better. This is based on https://arxiv.org/abs/2311.00289 with Ankur Moitra, which uses a version of the low-degree polynomial heuristic, as well as forthcoming work with Ansh Nagda, which gives a stronger form of reduction-based hardness.

Positivity of Schubert coefficients

Speaker: 

Colleen Robichaux

Institution: 

UCLA

Time: 

Wednesday, October 8, 2025 - 3:00pm to 4:00pm

Location: 

340P Rowland Hall

Schubert coefficients are nonnegative integers that arise in Algebraic Geometry and play a central role in Algebraic Combinatorics. It is a major open problem whether they have a combinatorial interpretation. A related open problem seeks an algorithm to determine when a Schubert coefficient is positive. In this talk we explore this problem and present an algorithmic solution. This is joint work with Igor Pak.

Privacy for Free in the Overparameterized Regime

Speaker: 

Simone Bombari

Institution: 

IST Austria

Time: 

Wednesday, October 1, 2025 - 3:00pm to 4:00pm

Location: 

340P Rowland Hall

Differentially private gradient descent (DP-GD) is a popular algorithm to train deep learning models with provable guarantees on the privacy of the training data. In the last decade, the problem of understanding its performance cost with respect to standard GD has received remarkable attention from the research community, which has led to upper bounds on the excess population risk in different learning settings. However, such bounds typically degrade with over-parameterization, i.e., as the number of parameters p gets larger than the number of training samples n, a regime which is ubiquitous in current deep-learning practice. As a result, the lack of theoretical insights leaves practitioners without clear guidance, leading some to reduce the effective number of trainable parameters to improve performance, while others use larger models to achieve better results through scale. In this work, we show that in the popular random features model with quadratic loss, for any sufficiently large p, privacy can be obtained for free, i.e., with vanishing excess population risk, not only when the privacy parameter ε has constant order, but also in the strongly private setting ε = o(1). This challenges the common wisdom that over-parameterization inherently hinders performance in private learning.

Link to paper: https://www.pnas.org/doi/10.1073/pnas.2423072122
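For reference, the basic DP-GD update clips each per-sample gradient, averages, and adds Gaussian noise calibrated to the clipping threshold. Here is a minimal sketch for the quadratic loss; the step size, clipping level, and noise scale are illustrative choices, not the paper's:

```python
import numpy as np

def dp_gd(theta, X, y, steps=500, lr=1.0, clip=1.0, sigma=0.5, seed=0):
    """DP-GD sketch for quadratic loss: clip per-sample gradients to
    norm `clip`, average them, add Gaussian noise scaled to the
    clipping threshold, and take a gradient step."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    for _ in range(steps):
        grads = (X @ theta - y)[:, None] * X            # per-sample gradients
        norms = np.linalg.norm(grads, axis=1)
        grads /= np.maximum(norms / clip, 1.0)[:, None]  # clip to norm <= clip
        noise = (sigma * clip / n) * rng.standard_normal(p)
        theta = theta - lr * (grads.mean(axis=0) + noise)
    return theta

# toy problem: noiseless linear model on scaled Gaussian features
rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p)) / np.sqrt(p)
theta_star = rng.standard_normal(p)
y = X @ theta_star
theta = dp_gd(np.zeros(p), X, y)
print(0.5 * np.mean((X @ theta - y) ** 2))   # training loss after DP-GD
```

The noise scale sigma * clip / n reflects the usual calibration of the Gaussian mechanism to the sensitivity of the clipped average; the privacy-utility question in the abstract is how this extra noise affects the excess population risk as p grows.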

Extensions of Quantile-Based Randomized Kaczmarz Method

Speaker: 

Emeric Antonio Battaglia

Institution: 

UC Irvine

Time: 

Wednesday, June 4, 2025 - 3:00pm to 4:00pm

Location: 

510R Rowland Hall

Inspired by recent work on the quantile-randomized Kaczmarz method (qRK) for solving a linear system of equations, we propose an acceleration of the randomized Kaczmarz method using quantile information. We show that the method converges faster than the randomized Kaczmarz algorithm when the linear system is consistent. In addition, we demonstrate how this new acceleration may be used in conjunction with qRK, without additional computational complexity, to produce both a fast and robust iterative method for solving large, sparsely corrupted linear systems. Finally, we provide new error horizon bounds for qRK in the setting where the corruption may not be sparse.
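For concreteness, here is a minimal sketch of the quantile idea in qRK (a simplified accept/reject variant, not the authors' implementation): project onto a sampled row's hyperplane only when its residual lies below a chosen quantile of all residuals, so that grossly corrupted equations are screened out:

```python
import numpy as np

def quantile_rk(A, b, q=0.7, iters=5000, seed=0):
    """Quantile-based randomized Kaczmarz sketch: sample a row
    uniformly, but project onto its hyperplane only if its residual
    is below the q-quantile of all residuals."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(iters):
        r = np.abs(A @ x - b)
        i = rng.integers(m)
        if r[i] <= np.quantile(r, q):
            x = x - (A[i] @ x - b[i]) / row_norms[i] * A[i]
    return x

# consistent Gaussian system with a few grossly corrupted entries in b
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
b[:5] += 10.0                      # sparse corruption
x_hat = quantile_rk(A, b)
print(np.linalg.norm(x_hat - x_true))
```

Once the iterates are close to the true solution, the corrupted rows have residuals far above the quantile threshold and are effectively never used, which is the robustness mechanism the quantile screening provides.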

The injective norm of some random tensors, and a few applications

Speaker: 

Ishaq Aden-Ali

Institution: 

UC Berkeley

Time: 

Wednesday, May 28, 2025 - 3:00pm to 4:00pm

Location: 

510R Rowland Hall

We'll look at the injective norm of random tensors drawn from a fairly general model. Our main result is an upper bound that improves on what was previously known in this setting. As an application, this leads to a very simple proof of Latała's bound on the moments of Gaussian chaoses—an inequality that generalizes the classical Hanson-Wright bound. I'll also explain how I first got interested in this question through a problem in coding theory.
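As a reminder of the object in question (a sketch, not from the talk): for a 3-tensor T, the injective norm is the supremum of <T, x ⊗ y ⊗ z> over unit vectors, and a standard way to estimate it from below is higher-order power iteration, which alternately maximizes over each factor:

```python
import numpy as np

def injective_norm_hopm(T, iters=200, seed=0):
    """Estimate the injective norm sup <T, x ⊗ y ⊗ z> of a 3-tensor by
    higher-order power iteration (a local method, so in general this
    only gives a lower bound)."""
    rng = np.random.default_rng(seed)
    x, y, z = (rng.standard_normal(s) for s in T.shape)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    return np.einsum('ijk,i,j,k->', T, x, y, z)

# sanity check on a rank-1 tensor, where the norm is exactly the
# product of the factor norms: 5 * 1 * 2 = 10
a, b, c = np.array([3., 4.]), np.array([1., 0., 0.]), np.array([0., 2.])
T = np.einsum('i,j,k->ijk', a, b, c)
print(injective_norm_hopm(T))
```

For random tensors this quantity is what the talk's upper bounds control; unlike the matrix case (where the injective norm is the operator norm), computing it exactly is NP-hard in general, which is why local methods like this one only certify lower bounds.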

Southern California Discrete Mathematics Symposium 2025

Institution: 

UCI

Time: 

Sunday, April 6, 2025 - 9:00am to 4:00pm

Location: 

NS II 1201

SoCalDM is a friendly, day-long research conference designed for discrete mathematicians in Southern California. For more details please visit the conference website:

https://sites.google.com/view/socaldm2025/home

Here is the schedule for the event: 

9:00-9:30 Welcome coffee

9:30-10:00 Jonathan Davidson (Cal State LA), A Combinatorial Design Approach to a Multicolor Bipartite Ramsey Problem

10:10-10:40 Lenny Fukshansky (Claremont McKenna College), On a new absolute version of Siegel’s lemma

10:40-11:00 Coffee Break

11:00-11:30 Mason Shurman (UCI), Covering Random Digraphs with Hamilton Cycles

11:40-12:10 Claire Levaillant (USC), Solutions to the Diophantine equation $\sum_{i=1}^n \frac{1}{x_i} = 1$ in integers of the form $p^a q^b$ with $p$ and $q$ two distinct primes

12:10-2:00 Lunch

2:00-2:30 Sehun Jeong (Claremont Graduate University), Integral quadratic forms and lattice angles

2:40-3:10 Justin Troyka (Cal State LA), Growth rates of permutations with given descent or peak set

3:20-3:50 Yizhe Zhu (USC), CLTs for linear spectral statistics of inhomogeneous random graphs

Southern California Probability Symposium

Institution: 

UCI

Time: 

Saturday, April 5, 2025 - 9:00am to 5:30pm

Location: 

ISEB 1300

The Southern California Probability Symposium will take place Saturday, April 5, here at UCI. It will start with a continental breakfast at 9:00am in ISEB 1300 and run until 5:30pm. Here's a link to the symposium web page: https://scps.pstat.ucsb.edu/SCPS2025.html


Here is a list of speakers and times. 

9:45 - 10:30 Lutz Warnke (UCSD)

10:45 - 11:30 Sixian Jin (CSUSM)

(Lunch - on your own)

1:15 - 2:00 Moritz Voss 

2:15 - 3:00 Pedro Teixeira (UCI)

(Coffee break)

3:45 - 4:30 Lily Reeves (Caltech)

4:45 - 5:30 Jun Yin (UCLA)
