Data-driven breakpoint detection with the QML estimator

Speaker: 

Georg Menz

Institution: 

UCLA

Time: 

Monday, February 23, 2026 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

We study quasi-maximum likelihood (QML) breakpoint estimation for covariance regime switches in multivariate time series. We move beyond the classical framework and show that regime switches can be detected as soon as the signal-to-noise ratio is high enough. We identify a quantitative global recovery threshold that compares the signal separation between regimes to the signal fluctuations within regimes, and we show its sharpness via an explicit counterexample. We also discuss further developments in breakpoint detection in high dimensions.

Joint work with Hubeyb Gurdogan (UCLA)
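As a concrete illustration of the setup, here is a minimal numerical sketch (not the estimator analyzed in the talk; the name `qml_breakpoint` and all constants are illustrative): for each candidate break, the Gaussian quasi-likelihood profiles out each segment's covariance by its sample covariance, leaving a log-determinant objective to minimize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a d-dimensional series with a covariance regime switch at tau_star.
d, T, tau_star = 4, 400, 150
sigma1 = np.eye(d)
sigma2 = 3.0 * np.eye(d)          # well-separated regimes (high signal-to-noise)
X = np.vstack([rng.multivariate_normal(np.zeros(d), sigma1, tau_star),
               rng.multivariate_normal(np.zeros(d), sigma2, T - tau_star)])

def qml_breakpoint(X, min_seg=20):
    """Minimize the Gaussian quasi negative log-likelihood over candidate breaks.

    Plugging each segment's sample covariance S into the Gaussian likelihood
    reduces the objective to  t * logdet(S1) + (T - t) * logdet(S2).
    """
    T = X.shape[0]
    best_t, best_loss = None, np.inf
    for t in range(min_seg, T - min_seg):
        S1 = np.cov(X[:t].T, bias=True)
        S2 = np.cov(X[t:].T, bias=True)
        loss = t * np.linalg.slogdet(S1)[1] + (T - t) * np.linalg.slogdet(S2)[1]
        if loss < best_loss:
            best_t, best_loss = t, loss
    return best_t

tau_hat = qml_breakpoint(X)
print(tau_hat)
```

With regimes this well separated, the estimated break lands close to the true one; as the covariances are brought together, the scan degrades, which is the signal-to-noise phenomenon the talk quantifies.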

Recovery limits for geometric planted matchings beyond Gaussian assumptions

Speaker: 

Lucas R. Schwengber

Institution: 

UC Berkeley

Time: 

Monday, March 2, 2026 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

We consider the problem of recovering an unknown planted matching between a set of $n$ randomly placed points in $\mathbb{R}^d$ and random perturbations of these points. Recent works have established error rates for this problem under Gaussian assumptions on both the initial positions and the noise, in various scaling regimes of sample size and dimension. I will discuss some recent progress on establishing results that hold under general distributional assumptions on both the initial positions and the noise. More precisely, I will present a general recipe for establishing lower bounds by proving the existence of large matchings in random geometric graphs, which leads to simplified and generalized proofs of previous results. Time allowing, I will also make some remarks on sufficient conditions for perfect recovery in high dimensions for models where the noise is not isotropic. This is joint work with Roberto Oliveira.
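A minimal sketch of the planted matching model, assuming the natural minimum-cost estimator (the talk's lower bounds concern when any such estimator must err; all parameters here are illustrative): plant a permutation, perturb the points, and recover by solving a linear assignment problem on squared distances.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

# Plant a matching: y_i is a noisy copy of x_{pi(i)} for a random permutation pi.
n, d, sigma = 200, 2, 0.01
x = rng.standard_normal((n, d))
pi = rng.permutation(n)
y = x[pi] + sigma * rng.standard_normal((n, d))

# Natural estimator: minimum-cost bipartite matching on squared distances.
cost = ((y[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
_, pi_hat = linear_sum_assignment(cost)

frac_correct = np.mean(pi_hat == pi)
print(frac_correct)
```

Here the noise scale sigma is far below the typical inter-point spacing of order $n^{-1/d}$, so recovery is essentially perfect; raising sigma toward that spacing is exactly the regime where the error rates studied in the talk kick in.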

Distance theorems and the smallest singular value of random matrices

Speaker: 

Manuel Fernandez

Institution: 

USC

Time: 

Monday, November 10, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

In recent years, significant progress has been made in our understanding of the quantitative behavior of random matrices. One research direction of continued interest has been the estimation of the smallest singular value. As a measure of a matrix's "invertibility," quantitative bounds on the smallest singular value are important for a variety of tasks, including establishing the circular law for non-Hermitian random matrices and proving the stability of numerical methods. In view of the universality phenomena of random matrices, one tries to prove these estimates for more general matrix ensembles satisfying weaker assumptions.

In the geometric approach to proving smallest singular value estimates, a key ingredient is a "distance theorem": a small-ball estimate for the distance between a random vector and a subspace. In this talk we will discuss a new distance theorem and its application to proving smallest singular value estimates for inhomogeneous random rectangular matrices with independent entries. We will also discuss how the recent resolution of the Slicing Conjecture, due to Klartag, Lehec, and Guan, implies smallest singular value estimates for a number of log-concave random matrix ensembles. In some cases, independent entries are no longer necessary!
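The link between row-to-subspace distances and the smallest singular value can be seen numerically via the negative second moment identity $\sum_i s_i^{-2} = \sum_i \operatorname{dist}(R_i, H_i)^{-2}$, where $H_i$ is the span of the other rows; this immediately gives $s_{\min} \ge (\sum_i \operatorname{dist}_i^{-2})^{-1/2}$, which is why small-ball bounds on these distances control invertibility. A sketch (illustrative only, not the talk's argument):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n))

def dist_to_span(v, B):
    """Distance from v to the row space of B (least-squares residual)."""
    coef, *_ = np.linalg.lstsq(B.T, v, rcond=None)
    return np.linalg.norm(v - B.T @ coef)

# Distance from each row to the span of the remaining rows.
dists = np.array([dist_to_span(A[i], np.delete(A, i, axis=0)) for i in range(n)])
svals = np.linalg.svd(A, compute_uv=False)

# Negative second moment identity: sum_i s_i^{-2} = sum_i dist_i^{-2},
# hence s_min(A) >= (sum_i dist_i^{-2})^{-1/2}.
lhs = np.sum(svals ** -2.0)
rhs = np.sum(dists ** -2.0)
print(lhs, rhs, svals[-1])
```

The identity holds exactly for any invertible matrix; randomness enters only when one wants to show that, with high probability, no distance is unusually small.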

Variational Inference: Posterior Threshold Improves Network Clustering Accuracy in Sparse Regimes

Speaker: 

Can M. Le

Institution: 

UC Davis

Time: 

Monday, November 17, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

Variational inference has been widely used in the machine learning literature to fit various Bayesian models. In network analysis, this method has been successfully applied to solve community detection problems. Although these results are promising, their theoretical support is limited to relatively dense networks, an assumption that may not hold for real networks. In addition, recent studies have shown that the variational loss surface has many saddle points, which can significantly impact its performance, especially when applied to sparse networks. This talk presents a simple method to improve the variational inference approach by hard thresholding the posterior of the community assignment after each iteration. We show that the proposed method can accurately recover the true community labels, even when the average node degree of the network is bounded and the initialization is arbitrarily close to random guessing. An extensive numerical study further confirms the advantage of the proposed method over the classical variational inference and other algorithms.
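A toy sketch of the thresholding idea on a dense two-block stochastic block model (the talk's sparse-regime guarantee is much stronger; connectivity is taken as known, and the initialization is made weakly informative only so the demo is reproducible):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-community SBM with known connectivity (p within, q between).
n, p, q = 200, 0.30, 0.05
z = np.repeat([0, 1], n // 2)                        # true labels
P = np.where(z[:, None] == z[None, :], p, q)
A = np.triu((rng.random((n, n)) < P), 1).astype(float)
A = A + A.T                                          # symmetric, no self-loops

def vi_threshold(A, z0, p, q, iters=20):
    """Mean-field VI for a 2-block SBM, hard-thresholding the posterior
    (each row of tau snapped to a one-hot vector) after every iteration."""
    tau = np.column_stack([z0, 1.0 - z0])            # soft posterior init
    lp, lq = np.log(p / q), np.log((1 - q) / (1 - p))
    for _ in range(iters):
        score = A @ tau * lp - (1.0 - A) @ tau * lq  # log-posterior up to constants
        tau = np.eye(2)[score.argmax(1)]             # hard-threshold step
    return tau.argmax(1)

# Weakly informative start (for reproducibility; the talk's guarantee allows
# initializations arbitrarily close to random guessing).
z0 = np.clip(0.5 + 0.1 * (1 - 2 * z) + 0.05 * rng.standard_normal(n), 0, 1)
z_hat = vi_threshold(A, z0, p, q)
acc = max(np.mean(z_hat == z), 1 - np.mean(z_hat == z))  # up to label swap
print(acc)
```

Replacing the hard-threshold line with a softmax recovers classical mean-field VI; the thresholded iteration is the modification whose sparse-regime behavior the talk analyzes.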

From mixing time of Markov chains to Tracy-Widom law of inhomogeneous random matrices

Speaker: 

Guangyi Zou

Institution: 

UC Irvine

Time: 

Monday, October 13, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

Random matrix statistics appear across diverse fields and are now recognized as exhibiting universal behavior. However, even within random matrix theory itself, the mechanism underlying the universality of local eigenvalue statistics beyond the mean-field setting remains poorly understood.

In this talk, we consider symmetric and Hermitian random matrices whose entries are independent random variables with an arbitrary variance pattern. Under a novel Short-to-Long Mixing condition—sharp in the sense that it excludes any deterministic correction at the spectral edge—we establish GOE/GUE edge universality for such inhomogeneous random matrices, which may be sparse or far from the classical mean-field regime. This condition reduces the universality problem to verifying the mixing properties of Markov chains defined by the variance profile matrix. This talk is based on joint work with Dang-Zheng Liu.
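To make the objects concrete, here is a sketch of an inhomogeneous symmetric matrix with a band variance profile (illustrative only; the talk's Short-to-Long Mixing condition is a property of the Markov chain whose transition matrix is the variance profile, which here is a lazy random walk on a cycle):

```python
import numpy as np

rng = np.random.default_rng(4)
N, w = 400, 50

# Periodic band variance profile: S_ij > 0 iff i and j are within circular
# distance w. Each row sums to 1, so S is doubly stochastic -- it is the
# transition matrix of a random walk on a cycle with jumps of size <= w.
i = np.arange(N)
D = np.abs(i[:, None] - i[None, :])
D = np.minimum(D, N - D)
S = (D <= w).astype(float) / (2 * w + 1)

# Symmetric random matrix with variance pattern Var(H_ij) = S_ij.
G = rng.standard_normal((N, N))
H = np.triu(G)
H = H + np.triu(H, 1).T
H *= np.sqrt(S)

# Row sums of S equal 1, so the spectrum follows the semicircle law on
# [-2, 2] and the largest eigenvalue sits near the edge at 2.
lam_max = np.linalg.eigvalsh(H)[-1]
print(lam_max)
```

This ensemble is far from mean-field (most entries are exactly zero), which is the setting where the mixing condition, rather than classical moment assumptions, governs edge universality.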

Precise Error Rates for Computationally Efficient Testing

Speaker: 

Alex Wein

Institution: 

UC Davis

Time: 

Monday, November 24, 2025 - 2:00pm to 3:00pm

Location: 

340P Rowland Hall

We consider one of the most basic high-dimensional testing problems: that of detecting the presence of a rank-1 "spike" in a random Gaussian (GOE) matrix. When the spike has structure such as sparsity, inherent statistical-computational tradeoffs are expected. I will discuss some precise results about the computational complexity, arguing that the so-called "linear spectral statistics" achieve the best possible tradeoff between type I & II errors among all polynomial-time algorithms, even though an exponential-time algorithm can do better. This is based on https://arxiv.org/abs/2311.00289 with Ankur Moitra which uses a version of the low-degree polynomial heuristic, as well as forthcoming work with Ansh Nagda which gives a stronger form of reduction-based hardness.
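A toy version of the testing problem, assuming a simple polynomial statistic $\sum_i \lambda_i^6$ rather than the optimal statistic from the paper (all parameters illustrative; the spike strength is placed above the BBP threshold so even this crude linear spectral statistic separates the hypotheses):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 300

def goe(N, rng):
    """GOE-type matrix normalized so the spectrum lies near [-2, 2]."""
    G = rng.standard_normal((N, N))
    return (G + G.T) / np.sqrt(2 * N)

def lss(H, k=6):
    """Linear spectral statistic sum_i phi(lambda_i) with phi(x) = x**k."""
    return np.sum(np.linalg.eigvalsh(H) ** k)

# Sparse rank-1 spike (supported on 10 coordinates) of strength lam > 1,
# i.e., above the BBP threshold, producing an outlier eigenvalue.
v = np.zeros(N)
v[:10] = 1 / np.sqrt(10)
lam = 2.5

t_null = lss(goe(N, rng))                            # H0: pure GOE
t_spiked = lss(goe(N, rng) + lam * np.outer(v, v))   # H1: GOE + spike
print(t_null, t_spiked)
```

Linear spectral statistics have $O(1)$ fluctuations under the null while the outlier shifts the statistic by a constant, so thresholding it gives a polynomial-time test; the paper's point is that the type I/II error tradeoff of the best such statistic cannot be beaten by any polynomial-time algorithm.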

