# The characteristic polynomial of sums of random permutations

Yizhe Zhu

## Institution:

UCI

## Time:

Wednesday, January 18, 2023 - 2:00pm to 3:00pm

## Location:

510R Rowland Hall

Let $A_n$ be the sum of $d$ permutation matrices of size $n\times n$, each drawn uniformly at random and independently. We prove that the normalized characteristic polynomial $\det(I_n - z A_n/\sqrt{d})$ converges as $n\to \infty$ to a random analytic function on the unit disk. As an application, we obtain an elementary proof of the spectral gap of random regular digraphs. Our results are valid both in the regime where $d$ is fixed and when $d$ grows slowly with $n$.
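As a quick numerical illustration (not from the paper), the matrix $A_n$ and the normalized characteristic polynomial above can be sampled and evaluated directly; the parameters `n` and `d` below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_of_permutations(n, d, rng):
    # A_n = sum of d independent uniform n x n permutation matrices;
    # permuting the rows of the identity gives a uniform permutation matrix.
    A = np.zeros((n, n))
    for _ in range(d):
        A += np.eye(n)[rng.permutation(n)]
    return A

def normalized_char_poly(A, z, d):
    # det(I_n - z A / sqrt(d)), evaluated at a point z of the unit disk
    n = A.shape[0]
    return np.linalg.det(np.eye(n) - z * A / np.sqrt(d))

n, d = 200, 4
A = sum_of_permutations(n, d, rng)
print(normalized_char_poly(A, 0.0, d))  # det(I_n) = 1.0
```

Every row and column of $A_n$ sums to $d$, so $d$ is always a (Perron) eigenvalue; the theorem concerns the remaining spectrum, encoded by this polynomial on the unit disk.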

Joint work with Simon Coste and Gaultier Lambert. https://arxiv.org/abs/2204.00524

# Robustness of graph theory theorems

Jie Han

## Institution:

Beijing Institute of Technology

## Time:

Wednesday, February 1, 2023 - 2:00pm to 3:00pm

## Location:

510R Rowland Hall

Dirac's theorem says that any n-vertex graph with minimum degree at least n/2 contains a Hamiltonian cycle. A robust version of Dirac's theorem was obtained by Krivelevich, Lee and Sudakov: for such graphs, if we take a random subgraph in which every edge is kept independently with probability C log(n)/n, for some large fixed constant C, then with high probability the resulting graph is still Hamiltonian. We will discuss some recent results along this line.
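A minimal sketch of the sparsification step (an illustration, not the proof technique): start from the complete graph, which trivially satisfies Dirac's condition, keep each edge with probability C log(n)/n, and check connectivity, a necessary condition for Hamiltonicity (verifying Hamiltonicity itself is NP-hard). The constant `C = 6` is an arbitrary choice:

```python
import math
import random

random.seed(42)

def random_sparsification(n, C):
    # Keep each edge of K_n independently with probability p = C*log(n)/n.
    p = min(1.0, C * math.log(n) / n)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    # DFS from vertex 0; connectivity is necessary for a Hamiltonian cycle.
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

adj = random_sparsification(200, 6)
print(is_connected(adj))
```

Note that C log(n)/n is only a constant factor above the connectivity threshold log(n)/n, so the robustness statement cannot be improved beyond the choice of C.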

# Learning quantum observables of low degrees from a logarithmic number of random queries

Haonan Zhang

## Institution:

UCI

## Time:

Wednesday, January 11, 2023 - 2:00pm to 3:00pm

## Location:

510R Rowland Hall

A fundamental problem in computational learning theory is to reconstruct an unknown function on the discrete hypercube from samples. A classical result for this problem in the random query model is the low-degree algorithm of Linial, Mansour and Nisan (1993). This was improved exponentially by Eskenazis and Ivanisvili (2022), using a family of polynomial inequalities going back to Littlewood (1930). Recently, quantum analogues of such polynomial inequalities were conjectured by Rouzé, Wirth and Zhang (2022). This conjecture was resolved, unknowingly, by Huang, Chen and Preskill (2022) in the course of studying learning problems for quantum dynamics. In this talk, I will discuss another proof of this conjecture that is simpler and gives better estimates. As an application, it allows us to recover the low-degree algorithm of Eskenazis and Ivanisvili in the quantum setting. This is based on arXiv:2210.14468, joint work with Alexander Volberg (MSU).
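In the classical setting, the low-degree algorithm estimates each Fourier coefficient $\hat f(S)$ with $|S| \le d$ by an empirical average over uniform random queries. A minimal sketch of this idea (the function name and all parameters below are illustrative choices, not from the talk):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def low_degree_estimates(f, n, deg, m, rng):
    # Estimate the Fourier coefficients hat{f}(S) = E[f(x) chi_S(x)],
    # chi_S(x) = prod_{i in S} x_i, for all |S| <= deg, from m uniform
    # random queries x in {-1, 1}^n (the Linial-Mansour-Nisan approach).
    X = rng.choice([-1.0, 1.0], size=(m, n))
    y = np.array([f(x) for x in X])
    coeffs = {}
    for k in range(deg + 1):
        for S in itertools.combinations(range(n), k):
            chi = np.prod(X[:, list(S)], axis=1) if S else np.ones(m)
            coeffs[S] = float(np.mean(y * chi))
    return coeffs

# Example: f(x) = x_0 is degree 1, with hat{f}({0}) = 1 and all other
# coefficients equal to 0.
coeffs = low_degree_estimates(lambda x: x[0], n=5, deg=1, m=20000, rng=rng)
print(round(coeffs[(0,)], 1))  # 1.0
```

The exponential improvement of Eskenazis and Ivanisvili concerns how many queries `m` suffice; the quantum results discussed in the talk replace Fourier coefficients by Pauli coefficients of an observable.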

# TBA

Todd Kemp

## Institution:

UC San Diego

## Time:

Wednesday, March 1, 2023 - 2:00pm to 3:00pm

## Location:

510R Rowland Hall

TBA

# TBA

Andrew Suk

## Institution:

UCSD

## Time:

Tuesday, March 15, 2022 - 2:00pm to 3:00pm

## Location:

RH 510

# TBA

## Institution:

Harvey Mudd College

## Time:

Wednesday, February 22, 2023 - 2:00pm to 3:00pm

## Location:

510R Rowland Hall

# TBA

Boris Bukh

## Institution:

Carnegie Mellon University

## Time:

Wednesday, March 8, 2023 - 2:00pm to 3:00pm

## Location:

RH 510R

# Binary iterative hard thresholding for 1-bit Compressed Sensing

Arya Mazumdar

## Institution:

UCSD

## Time:

Wednesday, January 25, 2023 - 2:00pm

## Location:

510R Rowland Hall

Compressed sensing has been a very successful high-dimensional signal acquisition and recovery technique that relies on linear measurements. However, the actual measurements of signals have to be quantized before storing or processing. One-bit (1-bit) compressed sensing is a heavily quantized version of compressed sensing, in which each linear measurement of a signal is reduced to just one bit: the sign of the measurement. Once enough such measurements are collected, the recovery problem in 1-bit compressed sensing aims to find the original signal with as much accuracy as possible.

For recovery of sparse vectors, a popular reconstruction method from 1-bit measurements is the binary iterative hard thresholding (BIHT) algorithm. The algorithm is a simple projected sub-gradient descent method and is known to converge well empirically, despite the nonconvexity of the problem. The convergence of BIHT had not been theoretically justified, except with an exorbitantly large number of measurements (namely, a number of measurements greater than max{k^10, 24^48, k^3.5/ϵ}, where k is the sparsity and ϵ denotes the approximation error; even this expression hides other factors). In this work we show that the BIHT algorithm converges with only Õ(k/ϵ) measurements. Note that this dependence on k and ϵ is optimal for any recovery method in 1-bit compressed sensing. With this result, to the best of our knowledge, BIHT is the only practical and efficient (polynomial-time) algorithm that requires the optimal number of measurements in all parameters (both k and ϵ). This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem under suitable structural conditions.
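A minimal numerical sketch of the BIHT iteration described above: a sub-gradient step followed by hard thresholding onto k-sparse vectors. The step size, dimensions, iteration count, and test signal are illustrative choices, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def biht(A, y, k, iters=100, eta=1.0):
    # Binary iterative hard thresholding: projected sub-gradient descent
    # for recovering a k-sparse signal from y = sign(A x).
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # sub-gradient step on the sign-consistency loss
        x = x + (eta / m) * A.T @ (y - np.sign(A @ x))
        # hard thresholding: keep the k largest-magnitude entries
        keep = np.argsort(np.abs(x))[-k:]
        z = np.zeros(n)
        z[keep] = x[keep]
        x = z
    # sign measurements lose the scale, so normalize to the unit sphere
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x

# Recover a unit-norm 3-sparse signal from Gaussian sign measurements.
n, m, k = 50, 1000, 3
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [0.6, -0.64, 0.48]
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
x_hat = biht(A, y, k)
print(np.count_nonzero(x_hat), round(float(x_hat @ x_true), 3))
```

Since each measurement only records a sign, the magnitude of the signal is unrecoverable; accuracy is measured on the unit sphere, which is why the output is normalized.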

Joint work with Namiko Matsumoto.