Chemical distance for 2d critical percolation

Speaker: 

Lily Reeves

Institution: 

Caltech

Time: 

Wednesday, November 1, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Percolation clusters induce an intrinsic graph distance called the chemical distance. Besides its mathematical appeal, the chemical distance is connected to the study of random walks on critical percolation clusters. In this talk, I will begin with a brief survey of the chemical distance. Then, I will zoom in on the progress and challenges in the 2d critical regime. A portion of this talk is based on joint work with Philippe Sosoe.

Stochastic algorithms for quantizing neural networks

Speaker: 

Rayan Saab

Institution: 

UCSD

Time: 

Wednesday, December 6, 2023 - 2:00pm

Location: 

510R Rowland Hall

Neural networks are highly non-linear functions often parametrized by a staggering number of weights. Miniaturizing these networks and implementing them in hardware is a direction of research that is fueled by a practical need, and at the same time connects to interesting mathematical problems. For example, by quantizing, or replacing the weights of a neural network with quantized (e.g., binary) counterparts, massive savings in cost, computation time, memory, and power consumption can be attained. Of course, one wishes to attain these savings while preserving the action of the function on domains of interest. 
We discuss connections to problems in discrepancy theory, present data-driven and computationally efficient stochastic methods for quantizing the weights of already-trained neural networks, and prove that our methods have favorable error guarantees under a variety of assumptions.
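As a concrete illustration of the quantization step (a minimal sketch of generic unbiased stochastic rounding, not the speaker's method; the function name, alphabet, and data below are all made up for the example), the following snippet rounds trained weights onto a small fixed alphabet and measures how well the quantized layer preserves the original layer's action on sample inputs.

```python
import numpy as np

def stochastic_round(w, grid, rng):
    """Round each weight to one of its two nearest grid points, choosing at random
    with probabilities that make the rounding unbiased (E[q] = w inside the grid's range)."""
    grid = np.sort(np.asarray(grid, dtype=float))
    idx = np.clip(np.searchsorted(grid, w) - 1, 0, len(grid) - 2)
    lo, hi = grid[idx], grid[idx + 1]
    p_up = np.clip((w - lo) / (hi - lo), 0.0, 1.0)   # probability of rounding up
    return np.where(rng.random(np.shape(w)) < p_up, hi, lo)

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(64, 128))                 # stand-in for trained weights
alphabet = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])     # small fixed quantization alphabet
Q = stochastic_round(W, alphabet, rng)

X = rng.normal(size=(1000, 64))                      # inputs from a domain of interest
rel_err = np.linalg.norm(X @ W - X @ Q) / np.linalg.norm(X @ W)
print(f"relative error of the quantized layer on this data: {rel_err:.3f}")
```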

Spectrahedral Geometry of Graph Sparsifiers

Speaker: 

Catherine Babecki

Institution: 

Caltech

Time: 

Wednesday, October 25, 2023 - 2:00pm

Location: 

510R Rowland Hall

We propose an approach to graph sparsification based on the idea of preserving the smallest $k$ eigenvalues and eigenvectors of the Graph Laplacian. This is motivated by the fact that small eigenvalues and their associated eigenvectors tend to be more informative of the global structure and geometry of the graph than larger eigenvalues and their eigenvectors.  The set of all weighted subgraphs of a graph $G$ that have the same first $k$ eigenvalues (and eigenvectors) as $G$ is the intersection of a polyhedron with a cone of positive semidefinite matrices. We discuss the geometry of these sets and deduce the natural scale of $k$. Various families of graphs illustrate our construction.
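To make the objects concrete, here is a small numerical sketch (purely illustrative, not the construction from the talk; the example graph and reweighting are arbitrary): it builds a weighted graph Laplacian, extracts its smallest $k$ eigenpairs, and tests whether a reweighted subgraph preserves them, which is the membership condition defining the sets discussed above.

```python
import numpy as np

def laplacian(n, edges, weights):
    """Weighted graph Laplacian: L = sum over edges {i, j} of w_ij (e_i - e_j)(e_i - e_j)^T."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

rng = np.random.default_rng(1)
n, k = 40, 5
# Example graph: a cycle plus a few random chords.
edges = [(i, (i + 1) % n) for i in range(n)]
edges += [tuple(sorted(rng.choice(n, size=2, replace=False))) for _ in range(30)]

L = laplacian(n, edges, np.ones(len(edges)))
vals, vecs = np.linalg.eigh(L)
vals_k, vecs_k = vals[:k], vecs[:, :k]       # smallest k eigenvalues and eigenvectors

# A reweighted subgraph belongs to the set described in the abstract exactly when its
# Laplacian agrees with L on these k eigenpairs (and they remain its smallest ones).
w = np.ones(len(edges))
w[-3:] = 0.0                                  # drop a few edges: an arbitrary reweighting,
L_sub = laplacian(n, edges, w)                # which will typically fail the test below
print(np.allclose(L_sub @ vecs_k, vecs_k * vals_k))
```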


Structure preservation via the Wasserstein distance

Speaker: 

Shahar Mendelson

Institution: 

Australian National University

Time: 

Wednesday, October 4, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Consider an isotropic measure $\mu$ on $\mathbb R^d$ (i.e., centered and whose covariance is the identity) and let $X_1,...,X_m$ be independent, selected according to $\mu$. If $\Gamma$ is the random operator whose rows are $X_i/\sqrt{m}$, what does the image of the unit sphere under $\Gamma$ typically look like? For example, if the extremal singular values of $\Gamma$ are close to 1, then this random set is "well approximated" by a $d$-dimensional sphere, and vice versa. But is it possible to give a more accurate description of that set? I will show that under minimal assumptions on $\mu$, with high probability and uniformly over unit vectors $t$, each vector $\Gamma t$ inherits the structure of the one-dimensional marginal $\langle X,t\rangle$ in a strong sense. If time permits, I will also present a few generalisations of this fact to an arbitrary indexing set. (Joint work with D. Bartl.)
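A quick numerical illustration of the phenomenon (a sketch under an assumed isotropic product measure, not part of the talk): we compare the empirical distribution of the coordinates of $\sqrt{m}\,\Gamma t$ with fresh samples of the marginal $\langle X,t\rangle$ using the one-dimensional Wasserstein distance; the theorem asserts closeness of this kind with high probability, uniformly over unit vectors $t$.

```python
import numpy as np
from scipy.stats import wasserstein_distance    # one-dimensional Wasserstein distance

rng = np.random.default_rng(0)
d, m = 50, 5000

def sample(size):
    """An isotropic product measure with heavier-than-Gaussian coordinates:
    Laplace entries rescaled to mean 0 and unit variance."""
    return rng.laplace(size=(size, d)) / np.sqrt(2.0)

X = sample(m)
Gamma = X / np.sqrt(m)                          # random operator whose rows are X_i / sqrt(m)

t = rng.normal(size=d)
t /= np.linalg.norm(t)                          # a fixed unit vector

coords = np.sqrt(m) * (Gamma @ t)               # coordinates of Gamma t, rescaled to the marginal's scale
marginal = sample(100_000) @ t                  # fresh samples of <X, t>
print(wasserstein_distance(coords, marginal))   # small => the 1d marginal structure is inherited
```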

Refine Dimension Reduction using Probabilistic Surrogate Models

Speaker: 

Hengrui Luo

Institution: 

Lawrence Berkeley National Laboratory and UCB

Time: 

Wednesday, May 31, 2023 - 1:00pm

Location: 

440R Rowland Hall

We introduce an efficient and robust auto-tuning framework for hyperparameter selection in dimension reduction (DR) algorithms, focusing on large-scale datasets and arbitrary performance metrics. By leveraging Bayesian optimization with a surrogate model, our approach enables efficient hyperparameter selection with multi-objective trade-offs and allows us to perform data-driven sensitivity analysis. By incorporating normalization and subsampling, the proposed framework demonstrates versatility and efficiency, as shown in applications to visualization techniques such as t-SNE and UMAP. We evaluate our results on various synthetic and real-world datasets using multiple quality metrics, providing a robust and efficient solution for hyperparameter selection in DR algorithms.
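As a rough sketch of this kind of pipeline (my own toy version, not the speaker's framework; it assumes scikit-learn is available and uses trustworthiness as a stand-in for an arbitrary user-chosen metric), a Gaussian-process surrogate drives a small Bayesian-optimization loop over the t-SNE perplexity on a subsampled dataset.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)
X = X[::4]                                        # subsample for speed

def score(perplexity):
    """Embedding quality for one hyperparameter value; trustworthiness stands in
    for whatever performance metric the user cares about."""
    emb = TSNE(perplexity=float(perplexity), random_state=0).fit_transform(X)
    return trustworthiness(X, emb, n_neighbors=10)

# Tiny Bayesian-optimization loop: Gaussian-process surrogate + UCB acquisition.
candidates = np.linspace(5, 60, 56)
tried = [5.0, 30.0, 60.0]                         # initial design
vals = [score(p) for p in tried]
for _ in range(5):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(tried)[:, None], vals)
    mu, sd = gp.predict(candidates[:, None], return_std=True)
    nxt = float(candidates[np.argmax(mu + sd)])   # upper-confidence-bound acquisition
    tried.append(nxt)
    vals.append(score(nxt))

print("best perplexity found:", tried[int(np.argmax(vals))])
```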

The geometry of bilinear forms in extremal graph theory

Speaker: 

Sam Mattheus

Institution: 

UCSD

Time: 

Wednesday, May 17, 2023 - 1:00pm to 2:00pm

Location: 

440R Rowland Hall

Problems in extremal graph theory typically aim to maximize some graph parameter under local restrictions. In order to prove lower bounds for these kinds of problems, several techniques have been developed. The most popular one, initiated by Paul Erdős, is the probabilistic method. While this technique has enjoyed tremendous success, it does not always provide sharp lower bounds: frequently, algebraically and geometrically defined graphs outperform random graphs. I will show how, historically, geometry over finite fields has been a rich source of such graphs, and present a broad class of graphs defined from the geometry of finite fields that has found several recent applications in extremal graph theory. Often, certain interesting families of graphs had in fact already been discovered and studied years before their value in extremal graph theory was realized. I will demonstrate some instances of this phenomenon as well, which indicates that there might still be uncharted territory to explore.

On sharp isoperimetric inequalities on the hypercube

Speaker: 

Paata Ivanisvili

Institution: 

UCI

Time: 

Wednesday, April 26, 2023 - 11:30am to 12:30pm

Location: 

RH 340P

I will talk about sharp isoperimetric inequalities on the hypercube refining several results due to Kahn--Park and Talagrand. In the second half of the talk, I will present Talagrand's isoperimetric inequalities for functions taking values in a Banach space with finite cotype. In addition, several different proofs of the recently resolved Talagrand conjecture will be presented.
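For orientation, here is one inequality of the kind being refined, in a rough textbook form (not the sharpened statements of the talk): for $f:\{-1,1\}^n\to\{0,1\}$, let $h_f(x)$ denote the number of coordinates $i$ for which flipping the $i$-th bit of $x$ changes the value of $f$. Talagrand's isoperimetric inequality states that, for a universal constant $K>0$,

$$\mathbb{E}\,\sqrt{h_f}\;\geq\;K\,\mathrm{Var}(f).$$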

This is joint work with David Beltran and José Ramón Madrid Padilla.

Semidefinite programming on population clustering: a global analysis

Speaker: 

Shuheng Zhou

Institution: 

UC Riverside

Time: 

Wednesday, May 10, 2023 - 1:00pm to 2:00pm

Location: 

440R Rowland Hall

In this paper, we consider the problem of partitioning a small data sample of size $n$ drawn from a mixture of two sub-Gaussian distributions. Our work is motivated by the application of clustering individuals according to their population of origin using markers, when the divergence between the two populations is small. We are interested in the case where individual features are of low average quality $\gamma$, and we want to use as few of them as possible to correctly partition the sample. We consider a semidefinite relaxation of an integer quadratic program, formulated essentially as finding the maximum cut of a graph whose edge weights represent dissimilarity scores between nodes based on their features.
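To fix ideas, here is a minimal sketch of a max-cut-type semidefinite relaxation on a Gram matrix built from centered data (toy data and illustrative names, not necessarily the exact program analyzed in the paper; it assumes cvxpy with an SDP-capable solver is installed), rounded to a partition via the top eigenvector of the optimal matrix.

```python
import numpy as np
import cvxpy as cp                               # assumed available, with an SDP-capable solver

rng = np.random.default_rng(0)
n, p, gamma = 30, 40, 0.4                        # toy sizes; gamma mimics the low feature quality

# Two centered sub-Gaussian populations whose means differ by about gamma per feature.
labels_true = np.where(np.arange(n) < n // 2, 1, -1)
data = (gamma / 2) * labels_true[:, None] + rng.normal(size=(n, p))
data -= data.mean(axis=0)                        # center the data

A = data @ data.T / p                            # Gram matrix of the centered data

# Semidefinite relaxation of the max-cut-type integer quadratic program:
#   maximize <A, Z>  subject to  Z PSD,  diag(Z) = 1.
Z = cp.Variable((n, n), PSD=True)
problem = cp.Problem(cp.Maximize(cp.trace(A @ Z)), [cp.diag(Z) == 1])
problem.solve()

# Round the optimal Z to a partition via its top eigenvector.
_, vecs = np.linalg.eigh(Z.value)
labels_hat = np.sign(vecs[:, -1])
accuracy = max(np.mean(labels_hat == labels_true), np.mean(labels_hat == -labels_true))
print(f"fraction correctly classified: {accuracy:.2f}")
```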

A small simulation result in Blum, Coja-Oghlan, Frieze and Zhou (2007, 2009) shows that even when the sample size $n$ is small, by increasing $p$ so that $np = \Omega(1/\gamma^2)$, one can classify a mixture of two product populations using the spectral method therein with a success rate reaching an "oracle" curve. There the "oracle" was computed assuming that the distributions were known, and the success rate is the ratio between the number of correctly classified individuals and the sample size $n$. In this work, we show the theoretical underpinning of this observed concentration of measure phenomenon in high dimensions, simultaneously for the semidefinite optimization goal and the spectral method, where the input is based on the Gram matrix computed from centered data. We allow a full range of tradeoffs between the sample size and the number of features, such that their product is lower bounded by $1/\gamma^2$, so long as the number of features $p$ is lower bounded by $1/\gamma$.

Beyond the broken tetrahedron

Speaker: 

Bjarne Schuelke

Institution: 

Caltech

Time: 

Wednesday, April 12, 2023 - 1:00pm to 2:00pm

Location: 

440R Rowland Hall

Here we consider the hypergraph Tur\'an problem in uniformly dense hypergraphs as was suggested by Erd\H{o}s and S\'os. Given a $3$-graph $F$, the uniform Tur\'an density $\pi_u(F)$ of $F$ is defined as the supremum over all $d\in[0,1]$ for which there is an $F$-free uniformly $d$-dense $3$-graph, where uniformly $d$-dense means that every linearly sized subhypergraph has density at least $d$. Recently, Glebov, Kr\'al', and Volec and, independently, Reiher, R\"odl, and Schacht proved that $\pi_u(K_4^{(3)-})=\frac{1}{4}$, solving a conjecture by Erd\H{o}s and S\'os. There are very few hypergraphs for which the uniform Tur\'an density is known. In this work, we determine the uniform Tur\'an density of the $3$-graph on five vertices that is obtained from $K_4^{(3)-}$ by adding an additional vertex whose link forms a matching on the vertices of $K_4^{(3)-}$. Further, we point to two natural intermediate problems on the way to determining $\pi_u(K_4^{(3)})$ and solve the first of these.


This talk is based on joint work with August Chen.

Sum-of-squares proofs for the trace power method

Speaker: 

Jonathan Shi

Institution: 

UCSD

Time: 

Wednesday, May 3, 2023 - 1:00pm to 2:00pm

Location: 

440R Rowland Hall

The sum-of-squares framework converts proofs into algorithms, so long as those proofs can be formulated as sum-of-squares proofs: proofs that demonstrate their conclusions via the non-negativity of squares of polynomials. This is a powerful framework that captures many inequalities known in analysis. We formulate sum-of-squares proofs for the trace power method by giving sum-of-squares proofs of non-commutative (matrix) inequalities, including the Araki-Lieb-Thirring inequality and Hölder's inequality for Schatten norms, and sketch how one might use these proofs to obtain algorithms that constructively find solutions to random optimization problems whose upper bounds can be proven by techniques from random matrix and spin glass theory. Based on work in progress with Juspreet Singh Sandhu.
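For reference, the two matrix inequalities named above, in one of their textbook forms (statements only, as I understand them; the sum-of-squares certificates are the subject of the talk): for positive semidefinite $A, B$, $r \geq 1$ and $q > 0$, the Araki-Lieb-Thirring inequality states

$$\mathrm{Tr}\big[(BAB)^{rq}\big]\;\leq\;\mathrm{Tr}\big[(B^{r}A^{r}B^{r})^{q}\big],$$

and Hölder's inequality for Schatten norms states $\|AB\|_{S_1} \leq \|A\|_{S_p}\,\|B\|_{S_q}$ whenever $1/p + 1/q = 1$, where $\|M\|_{S_p} = (\mathrm{Tr}\,|M|^{p})^{1/p}$.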
