A classical problem in combinatorial geometry, posed by Erdős in 1946, asks for the maximum number of unit-length segments determined by a set of $n$ points in the plane. Since then, a great variety of extremal problems on finite planar point sets have been studied. Here we look at such questions concerning triangles. Among others, we answer the following question, asked by Erdős and Purdy almost 50 years ago: given $n$ points in the plane, how many triangles can be approximately congruent to equilateral triangles?
For our proofs we use hypergraph Turán theory. This is joint work with Balogh and Dumitrescu.
The Bethe–Hessian matrix, introduced by Saade, Krzakala, and Zdeborová (2014), is a Hermitian operator tailored for spectral clustering on sparse networks. Unlike the non-symmetric, high-dimensional non-backtracking operator, this matrix is conjectured to reach the Kesten–Stigum threshold in the sparse stochastic block model (SBM), yet a fully rigorous analysis of the method has remained open. Beyond its practical utility, this sparse random matrix exhibits a surprising phenomenon called "one-sided eigenvector localization" that has not been fully explained.
We present the first rigorous analysis of the Bethe–Hessian spectral method for the SBM, partially answering the open questions posed in Saade, Krzakala, and Zdeborová (2014). Joint work with Ludovic Stephan.
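As a concrete illustration, the Bethe–Hessian of a graph with adjacency matrix $A$ and degree matrix $D$ is $H(r) = (r^2-1)I + D - rA$, and its negative eigenvalues signal community structure. Below is a minimal sketch on a toy two-block SBM; the block sizes, edge probabilities, and the choice $r \approx \sqrt{\text{average degree}}$ are illustrative assumptions following the heuristic in Saade, Krzakala, and Zdeborová (2014), not the analysis of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-block SBM: n nodes, within-block edge prob p_in, across-block p_out.
n, p_in, p_out = 200, 0.10, 0.01
labels = np.repeat([0, 1], n // 2)
same = labels[:, None] == labels[None, :]
probs = np.where(same, p_in, p_out)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(float)        # symmetric adjacency, no self-loops

# Bethe-Hessian H(r) = (r^2 - 1) I + D - r A, with r ~ sqrt(average degree).
D = np.diag(A.sum(axis=1))
r = np.sqrt(A.sum() / n)
H = (r ** 2 - 1) * np.eye(n) + D - r * A

eigvals, eigvecs = np.linalg.eigh(H)
k_neg = int((eigvals < 0).sum())           # negative eigenvalues are informative
pred = (eigvecs[:, 1] > 0).astype(int)     # sign pattern tends to track the blocks
print("negative eigenvalues:", k_neg)
```

Above the detection threshold, the number of negative eigenvalues heuristically matches the number of communities, and the corresponding eigenvectors reveal them.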
There is an infinite pool of balls and an infinite number of boxes. Let us randomly drop $n$ balls into the boxes and study the number of occupied boxes. The classical limit theorems for this quantity, the law of large numbers and the central limit theorem, are non-trivial but known. Can we now prove the law of the iterated logarithm? I will show how to do this using tools that have much wider applicability. The talk is based on joint work with Dariusz Buraczewski (Wrocław, Poland) and Alexander Iksanov (Kyiv, Ukraine).
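For readers who want to experiment, here is a minimal simulation of the infinite occupancy scheme under one illustrative choice of box frequencies, $p_k = (1-q)q^k$ for $k = 0, 1, 2, \dots$; the geometric distribution is our assumption for the sketch, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each ball independently lands in box k with probability p_k = (1 - q) q^k.
q, n = 0.5, 10_000
balls = rng.geometric(1 - q, size=n) - 1   # geometric on {0, 1, 2, ...}
occupied = np.unique(balls).size           # K_n: number of occupied boxes

# For geometric frequencies, K_n grows like log(n) / log(1/q).
print(occupied, np.log(n) / np.log(1 / q))
```

Repeating this for growing $n$ makes the logarithmic growth, and the small fluctuations quantified by the CLT and the law of the iterated logarithm, visible.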
Is there a natural way to order data in dimension greater than one? The approach based on the notion of data depth, often associated with the name of John Tukey, is among the most popular. Tukey’s depth has found applications in robust statistics, the study of elections and social choice, and graph theory. We will give an introduction to the topic (with an emphasis on robust statistics), describe some remaining open questions, and present our recent progress toward their solutions.
This talk is based on joint work with Yinan Shen.
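To make the notion concrete: Tukey's halfspace depth of a point $x$ is the minimum, over all closed halfspaces containing $x$, of the fraction of data points in that halfspace. The rough Monte-Carlo sketch below minimizes over randomly sampled directions instead of all directions (so it only upper-bounds the true depth); the sample sizes are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def tukey_depth(x, points, n_dirs=500, rng=rng):
    """Monte-Carlo approximation of Tukey's halfspace depth of x w.r.t. points.

    Minimizing over a random sample of directions gives an upper bound on the
    exact depth. (Illustrative only; exact 2-D algorithms exist.)
    """
    dirs = rng.standard_normal((n_dirs, points.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = (points - x) @ dirs.T              # shape (n_points, n_dirs)
    counts = (proj >= 0).sum(axis=0)          # points in each closed halfspace
    return counts.min() / len(points)

pts = rng.standard_normal((400, 2))
center_depth = tukey_depth(pts.mean(axis=0), pts)
outlier_depth = tukey_depth(np.array([10.0, 10.0]), pts)
print(center_depth, outlier_depth)
```

Central points have depth near $1/2$ while far outliers have depth near $0$, which is exactly what makes depth useful as a multivariate ordering for robust statistics.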
In joint work with Chi-Fang Chen, Joel Tropp, and Ramon van Handel, we developed a new method for establishing strong convergence. In this talk I will explain what strong convergence is and, as an application of our results, discuss a simple way of generating nearly optimal expanders using very little randomness.
We will discuss recent advances in the compression of pre-trained neural networks using both novel and existing computationally efficient algorithms. The approaches we consider leverage sparsity, low-rank approximations of weight matrices, and weight quantization to achieve significant reductions in model size, while maintaining performance. We provide rigorous theoretical error guarantees as well as numerical experiments.
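As a flavor of one of the ingredients mentioned above, here is a generic uniform symmetric weight-quantization sketch; this is a textbook scheme shown for illustration only, not the specific algorithms of the talk, and the matrix size and bit-width are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def quantize(W, bits=4):
    """Uniform symmetric quantization: snap each weight to one of ~2^bits levels."""
    levels = 2 ** (bits - 1) - 1            # e.g. 7 positive levels for 4 bits
    scale = np.abs(W).max() / levels
    return np.round(W / scale) * scale, scale

W = rng.standard_normal((64, 64)) / 8       # stand-in for a pretrained weight matrix
W_q, scale = quantize(W, bits=4)
rel_err = np.linalg.norm(W - W_q) / np.linalg.norm(W)
print("relative Frobenius error:", rel_err)
```

Each weight now needs only 4 bits plus one shared scale per matrix, illustrating why quantization shrinks model size; the theoretical work in the talk concerns controlling the resulting error rigorously.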
In this talk, we present our results for the 1-bit phase retrieval problem: the recovery of a possibly sparse signal $x$ from the observations $y=\mathrm{sign}(|Ax|-1)$. This problem is closely related to 1-bit compressed sensing, where one observes $y=\mathrm{sign}(Ax)$, and to phase retrieval, where one observes $y=|Ax|$. We show that the major findings of 1-bit compressed sensing theory, including hyperplane tessellation, the optimal rate, and a very recent efficient and optimal algorithm, can be developed in this phase retrieval setting. This is joint work with Ming Yuan: https://arxiv.org/abs/2405.04733.
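To fix notation, the three observation models can be generated side by side; the dimensions, sparsity level, and Gaussian design below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

m, d, s = 500, 100, 5                        # measurements, dimension, sparsity
A = rng.standard_normal((m, d))              # measurement matrix
x = np.zeros(d)                              # s-sparse signal
support = rng.choice(d, size=s, replace=False)
x[support] = rng.standard_normal(s)

# 1-bit phase retrieval: one bit per measurement, is |<a_i, x>| above 1?
y = np.sign(np.abs(A @ x) - 1.0)

# The two related models from the abstract:
y_1bit_cs = np.sign(A @ x)                   # 1-bit compressed sensing
y_phase = np.abs(A @ x)                      # (unquantized) phase retrieval
```

Note that $y$ loses both the phase/sign of $Ax$ and all magnitude information beyond the threshold comparison, which is what makes the recovery question delicate.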
In this talk, we present a new proof of the delocalization conjecture for random band matrices. The conjecture asserts that for an $N\times N$ random matrix with bandwidth $W$, the eigenvectors in the bulk of the spectrum are delocalized provided that $W \gg N^{1/2}$. Moreover, in this regime, the eigenvalue distribution aligns with that of the Gaussian ensembles (i.e., GOE or GUE). Our proof employs a novel loop hierarchy method and leverages the sum-zero property, a concept that was instrumental in previous work on high-dimensional random matrices.
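A small numerical illustration of the localization/delocalization contrast: the inverse participation ratio $\sum_i v_i^4$ of a unit eigenvector is of order $1/N$ when the vector is delocalized and of order one when it is localized. All sizes below are toy choices far from the asymptotic regime of the theorem, and the matrix normalization is our own.

```python
import numpy as np

rng = np.random.default_rng(4)

def band_matrix(N, W, rng):
    """Symmetric Gaussian band matrix: entries vanish for |i - j| > W."""
    G = rng.standard_normal((N, N))
    M = (G + G.T) / np.sqrt(2)
    mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= W
    return (M * mask) / np.sqrt(2 * W + 1)

def bulk_ipr(H, k=50):
    """Average inverse participation ratio over k eigenvectors from the bulk."""
    _, vecs = np.linalg.eigh(H)
    mid = H.shape[0] // 2
    v = vecs[:, mid - k // 2 : mid + k // 2]
    return float((v ** 4).sum(axis=0).mean())

N = 400
ipr_narrow = bulk_ipr(band_matrix(N, 2, rng))   # W << sqrt(N): localized
ipr_wide = bulk_ipr(band_matrix(N, 40, rng))    # W = 2 sqrt(N): delocalized
print(ipr_narrow, ipr_wide)
```

Even at this small scale, the narrow-band matrix shows a markedly larger inverse participation ratio than the wide-band one, mirroring the $W \gg N^{1/2}$ dichotomy.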
We characterize the power of constant-depth Boolean circuits in generating uniform symmetric distributions. Let $f:\{0,1\}^m \rightarrow \{0,1\}^n$ be a Boolean function where each output bit of $f$ depends only on $O(1)$ input bits. Assume the output distribution of $f$ on uniform input bits is close to a uniform distribution $D$ with a symmetric support. We show that $D$ is essentially one of the following six possibilities: (1) point distribution on $0^n$, (2) point distribution on $1^n$, (3) uniform over $\{0^n,1^n\}$, (4) uniform over strings with even Hamming weights, (5) uniform over strings with odd Hamming weights, and (6) uniform over all strings. This confirms a conjecture of Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023).
Joint work with Daniel M. Kane and Anthony Ostuni.
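Case (4) above is realizable by an extremely local map: $f(x)_i = x_i \oplus x_{i+1 \bmod m}$ reads only two input bits per output and pushes the uniform distribution forward to the uniform distribution on even-Hamming-weight strings. A quick exhaustive check for one small illustrative $m$:

```python
from itertools import product

# f(x)_i = x_i XOR x_{i+1 mod m}: each output bit depends on exactly 2 input bits.
m = 6
counts = {}
for x in product((0, 1), repeat=m):
    y = tuple(x[i] ^ x[(i + 1) % m] for i in range(m))
    counts[y] = counts.get(y, 0) + 1

# The image is exactly the even-Hamming-weight strings, each hit equally often:
# the map is 2-to-1, since x and its bitwise complement share an image.
print(len(counts), set(counts.values()))
```

The support has size $2^{m-1}$ with every string hit exactly twice, confirming that possibility (4) in the classification is attainable by an $O(1)$-local circuit.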
I will discuss adapted transport theory and the adapted Wasserstein distance, which extend classical transport theory from probability measures to stochastic processes by incorporating the temporal flow of information. This adaptation addresses key limitations of classical transport when dealing with time-dependent data.
I will highlight how, unlike other topologies for stochastic processes, the adapted Wasserstein distance ensures continuity for fundamental probabilistic operations, including the Doob decomposition, optimal stopping, and stochastic control. Additionally, I will explore how adapted transport preserves many desirable properties of classical transport theory, making it a powerful tool for analyzing stochastic systems.