Set representation of sparse graphs and hypergraphs

Speaker: Marcelo Sales
Institution: UCI
Time: Wednesday, November 8, 2023 - 2:00pm to 3:00pm
Location: 510R Rowland Hall

Let $G$ be a graph on $n$ vertices. Given an integer $t$, we say that a family of sets $\{A_x\}_{x \in V}\subset 2^{[t]}$ is a set representation of $G$ if $$xy \in E(G) \iff A_x \cap A_y = \emptyset.$$ Let $\overline{\theta}_1(G)$ be the smallest integer $t$ for which such a representation exists. In this talk I will discuss bounds on $\overline{\theta}_1$ for sparse graphs, along with related problems. Joint work with Basu, Molnar, Rödl and Schacht.
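
The defining condition (edges correspond exactly to disjoint pairs of sets) can be checked mechanically. A minimal sketch, with an illustrative graph and family not taken from the talk:

```python
from itertools import combinations

def is_set_representation(vertices, edges, family):
    """Check that xy is an edge iff A_x and A_y are disjoint."""
    edge_set = {frozenset(e) for e in edges}
    return all(
        (frozenset((x, y)) in edge_set) == family[x].isdisjoint(family[y])
        for x, y in combinations(vertices, 2)
    )

# Path a-b-c: edges ab and bc; the non-edge ac forces A_a and A_c to meet.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c")]
A = {"a": {1}, "b": {2}, "c": {1}}   # ground set [t] with t = 2
print(is_set_representation(V, E, A))  # True
```

Here $\overline{\theta}_1$ of the path is at most 2, since the family above uses ground set $\{1, 2\}$.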

Probabilistic laws on infinite groups

Speaker: Gil Goffer
Institution: UCSD
Time: Wednesday, November 15, 2023 - 2:00pm to 3:00pm
Location: 510R Rowland Hall

In various cases, a law that holds in a group with high probability must actually hold for all elements. For instance, a finite group in which the commutator law [x,y]=1 holds with probability at least 5/8 must be abelian. For infinite groups, one needs to work a bit harder to define the probability that a given law holds. One natural way is to sample a random element uniformly from the r-ball in the Cayley graph and let r tend to infinity; another is to sample elements using random walks. Amir, Blachar, Gerasimova, and Kozma asked whether a law that holds with probability 1 must actually hold globally, for all elements. In recent joint work with Be’eri Greenfeld, we give a negative answer to their question.

In this talk I will give an introduction to probabilistic group laws and present a finitely generated group that satisfies the law x^p=1 with probability 1 and yet admits no group law that holds for all elements.
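
The 5/8 threshold mentioned above is easy to probe computationally for small finite groups. A quick sketch computing the commuting probability of the symmetric group S_3 (the smallest non-abelian group), which indeed falls below 5/8:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations of {0, ..., n-1} stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def commuting_probability(group):
    """Fraction of ordered pairs (g, h) with gh = hg."""
    pairs = [(g, h) for g in group for h in group]
    return sum(compose(g, h) == compose(h, g) for g, h in pairs) / len(pairs)

S3 = list(permutations(range(3)))
print(commuting_probability(S3))  # 0.5 -- below 5/8, and S3 is non-abelian
```

In general the commuting probability equals (number of conjugacy classes)/|G|, which for S_3 is 3/6 = 1/2.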

Deep Generative Models in Infinite-Dimensional Spaces

Speaker: Gavin Kerrigan
Institution: UCI
Time: Wednesday, November 29, 2023 - 2:00pm to 3:00pm
Location: 510R Rowland Hall

Deep generative models have seen a meteoric rise in capabilities across a wide array of domains, ranging from natural language and vision to scientific applications such as precipitation forecasting and molecular generation. However, a number of important applications focus on data which is inherently infinite-dimensional, such as time-series, solutions to partial differential equations, and audio signals. This relatively under-explored class of problems poses unique theoretical and practical challenges for generative modeling. In this talk, we will explore recent developments for infinite-dimensional generative models, with a focus on diffusion-based methodologies. 

Chemical distance for 2d critical percolation

Speaker: Lily Reeves
Institution: Caltech
Time: Wednesday, November 1, 2023 - 2:00pm to 3:00pm
Location: 510R Rowland Hall

Percolation clusters induce an intrinsic graph distance called the chemical distance. Besides its mathematical appeal, the chemical distance is connected to the study of random walks on critical percolation clusters. In this talk, I will begin with a brief survey on the chemical distance. Then, I will zoom in to the progress and challenges in the 2d critical regime. A portion of this talk is based on joint work with Philippe Sosoe.
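The chemical distance is simply graph distance inside the open cluster. A minimal sketch (a small grid near the critical bond-percolation density p = 1/2 on Z^2, not a simulation from the talk) that builds a percolation configuration and computes intrinsic distances from the origin by BFS; the chemical distance always dominates the lattice (L1) distance:

```python
import random
from collections import deque

def chemical_distances(n, p, seed=0):
    """Bond percolation on the n x n grid: keep each edge independently
    with probability p, then BFS from (0, 0) for intrinsic distances."""
    rng = random.Random(seed)
    open_edge = {}
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                open_edge[((x, y), (x + 1, y))] = rng.random() < p
            if y + 1 < n:
                open_edge[((x, y), (x, y + 1))] = rng.random() < p

    def is_open(u, v):
        return bool(open_edge.get((u, v)) or open_edge.get((v, u)))

    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n and v not in dist \
                    and is_open((x, y), v):
                dist[v] = dist[(x, y)] + 1
                queue.append(v)
    return dist

d = chemical_distances(40, 0.55, seed=1)
# Chemical distance >= L1 distance for every site in the cluster.
print(all(d[(x, y)] >= x + y for (x, y) in d))  # True
```

At criticality the interesting question, discussed in the talk, is how much longer the chemical distance typically is than the Euclidean one.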

Stochastic algorithms for quantizing neural networks

Speaker: Rayan Saab
Institution: UCSD
Time: Wednesday, December 6, 2023 - 2:00pm
Location: 510R Rowland Hall

Neural networks are highly non-linear functions often parametrized by a staggering number of weights. Miniaturizing these networks and implementing them in hardware is a direction of research that is fueled by a practical need, and at the same time connects to interesting mathematical problems. For example, by quantizing, or replacing the weights of a neural network with quantized (e.g., binary) counterparts, massive savings in cost, computation time, memory, and power consumption can be attained. Of course, one wishes to attain these savings while preserving the action of the function on domains of interest. 
We discuss connections to problems in discrepancy theory, present data-driven and computationally efficient stochastic methods for quantizing the weights of already trained neural networks, and prove that our methods enjoy favorable error guarantees under a variety of assumptions.
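
The basic primitive underlying stochastic quantization is unbiased stochastic rounding: round each weight to a grid point randomly so that the quantized value equals the original in expectation. This sketch shows only that primitive, not the speaker's data-driven algorithm:

```python
import math
import random

def stochastic_round(w, delta=1.0, rng=random):
    """Round w to the grid delta * Z, rounding up with probability equal
    to the fractional part, so that E[quantized] = w (unbiasedness)."""
    scaled = w / delta
    low = math.floor(scaled)
    return delta * (low + (rng.random() < scaled - low))

random.seed(0)
w = 0.3
samples = [stochastic_round(w) for _ in range(100_000)]
print(abs(sum(samples) / len(samples) - w) < 0.02)  # True: mean is ≈ w
```

Unbiasedness is what lets rounding errors cancel rather than accumulate as one quantizes weight after weight, which is where the connection to discrepancy theory enters.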

Spectrahedral Geometry of Graph Sparsifiers

Speaker: Catherine Babecki
Institution: Caltech
Time: Wednesday, October 25, 2023 - 2:00pm
Location: 510R Rowland Hall

We propose an approach to graph sparsification based on the idea of preserving the smallest $k$ eigenvalues and eigenvectors of the graph Laplacian. This is motivated by the fact that small eigenvalues and their associated eigenvectors tend to be more informative about the global structure and geometry of the graph than larger eigenvalues and their eigenvectors. The set of all weighted subgraphs of a graph $G$ that have the same first $k$ eigenvalues (and eigenvectors) as $G$ is the intersection of a polyhedron with a cone of positive semidefinite matrices. We discuss the geometry of these sets and deduce the natural scale of $k$. Various families of graphs illustrate our construction.
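
The object a spectral sparsifier tries to preserve is the Laplacian quadratic form $x^T L x = \sum_{ij \in E} w_{ij}(x_i - x_j)^2$. A minimal sketch (illustrative, not the authors' construction) that builds $L = D - W$ and verifies this identity on a small weighted graph:

```python
def laplacian(n, weighted_edges):
    """Dense graph Laplacian L = D - W as a list of lists."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in weighted_edges:
        L[i][i] += w
        L[j][j] += w
        L[i][j] -= w
        L[j][i] -= w
    return L

def quadratic_form(L, x):
    return sum(L[i][j] * x[i] * x[j]
               for i in range(len(x)) for j in range(len(x)))

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]  # weighted triangle
x = [1.0, -1.0, 2.0]
L = laplacian(3, edges)
lhs = quadratic_form(L, x)
rhs = sum(w * (x[i] - x[j]) ** 2 for i, j, w in edges)
print(abs(lhs - rhs) < 1e-9)  # True
```

Matching the first $k$ eigenpairs of $L$ amounts to matching this quadratic form on the span of the corresponding "low-frequency" eigenvectors.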


Structure preservation via the Wasserstein distance

Speaker: Shahar Mendelson
Institution: Australian National University
Time: Wednesday, October 4, 2023 - 2:00pm to 3:00pm
Location: 510R Rowland Hall

Consider an isotropic measure $\mu$ on $\mathbb R^d$ (i.e., centered and with identity covariance) and let $X_1,...,X_m$ be independent random vectors distributed according to $\mu$. If $\Gamma$ is the random operator whose rows are $X_i/\sqrt{m}$, what does the image of the unit sphere under $\Gamma$ typically look like? For example, if the extremal singular values of $\Gamma$ are close to 1, then this random set is "well approximated" by a $d$-dimensional sphere, and vice versa. But is it possible to give a more accurate description of that set? I will show that, under minimal assumptions on $\mu$, with high probability and uniformly over unit vectors $t$, each vector $\Gamma t$ inherits the structure of the one-dimensional marginal $\langle X,t\rangle$ in a strong sense. If time permits I will also present a few generalisations of this fact to an arbitrary indexing set. (Joint work with D. Bartl.)
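
A quick sanity check of the setup, for the special case $\mu = N(0, \mathrm{Id})$ (an assumption made here for illustration): since $\langle X, t\rangle$ is standard Gaussian for any unit vector $t$, the norm $\|\Gamma t\|$ should concentrate around 1 as $m$ grows.

```python
import math
import random

def gamma_t_norm(d, m, t, seed=0):
    """Sample m standard Gaussian vectors X_i in R^d (an isotropic
    measure) and return ||Gamma t||, where the rows of Gamma are
    X_i / sqrt(m)."""
    rng = random.Random(seed)
    norm2 = 0.0
    for _ in range(m):
        X = [rng.gauss(0.0, 1.0) for _ in range(d)]
        inner = sum(Xj * tj for Xj, tj in zip(X, t))
        norm2 += inner * inner / m   # average of m samples of <X, t>^2
    return math.sqrt(norm2)

d, m = 20, 5000
t = [1.0 / math.sqrt(d)] * d          # a unit vector
print(round(gamma_t_norm(d, m, t), 1))  # concentrates near 1
```

The talk's point is much finer: not just the norm, but the whole empirical distribution of the coordinates of $\Gamma t$ resembles the law of $\langle X, t\rangle$.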

Refine Dimension Reduction using Probabilistic Surrogate Models

Speaker: Hengrui Luo
Institution: Lawrence Berkeley National Laboratory and UCB
Time: Wednesday, May 31, 2023 - 1:00pm
Location: 440R Rowland Hall

We introduce an efficient and robust auto-tuning framework for hyperparameter selection in dimension reduction (DR) algorithms, focusing on large-scale datasets and arbitrary performance metrics. By leveraging Bayesian optimization with a surrogate model, our approach enables efficient hyperparameter selection with multi-objective trade-offs and allows us to perform data-driven sensitivity analysis. By incorporating normalization and subsampling, the proposed framework demonstrates versatility and efficiency, as shown in applications to visualization techniques such as t-SNE and UMAP. We evaluate our results on various synthetic and real-world datasets using multiple quality metrics, providing a robust and efficient solution for hyperparameter selection in DR algorithms.
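
The overall loop (evaluate a DR quality metric at a few hyperparameter settings, then concentrate the budget near promising regions) can be caricatured as follows. This is a drastically simplified stand-in for the surrogate-based Bayesian optimization in the talk: the objective `toy_quality` is a made-up proxy for a metric such as trustworthiness as a function of t-SNE perplexity, and the "surrogate" is just local refinement around the incumbent.

```python
import random

def toy_quality(perplexity):
    """Hypothetical stand-in for a DR quality metric, peaked near 30."""
    return -((perplexity - 30.0) / 25.0) ** 2

def tune(objective, lo, hi, budget=30, seed=0):
    """Coarse random sampling, then local refinement around the best
    point found -- a toy sketch of surrogate-guided search."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(budget // 2)]
    evals = [(x, objective(x)) for x in xs]
    best_x, best_y = max(evals, key=lambda e: e[1])
    width = (hi - lo) / 10.0
    for _ in range(budget - budget // 2):
        x = min(hi, max(lo, rng.gauss(best_x, width)))
        y = objective(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x

best = tune(toy_quality, 2.0, 100.0)
print(round(best, 1))  # typically lands near the optimum at 30
```

A real Bayesian-optimization surrogate replaces the local-refinement step with a probabilistic model of the objective, which is what makes multi-objective trade-offs and sensitivity analysis possible.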

The geometry of bilinear forms in extremal graph theory

Speaker: Sam Mattheus
Institution: UCSD
Time: Wednesday, May 17, 2023 - 1:00pm to 2:00pm
Location: 440R Rowland Hall

Problems in extremal graph theory typically ask to maximize some graph parameter under local restrictions. To prove lower bounds for such problems, several techniques have been developed; the most popular, initiated by Paul Erdős, is the probabilistic method. While this technique has enjoyed tremendous success, it does not always provide sharp lower bounds: frequently, algebraically and geometrically defined graphs outperform random graphs. I will show how, historically, geometry over finite fields has been a rich source of such graphs, and present a broad class of graphs defined from the geometry of finite fields that has found several recent applications in extremal graph theory. Often, interesting families of graphs had in fact already been discovered and studied years before their value in extremal graph theory was realized. I will demonstrate some instances of this phenomenon as well, which indicates that there may still be uncharted territory to explore.

On sharp isoperimetric inequalities on the hypercube

Speaker: Paata Ivanisvili
Institution: UCI
Time: Wednesday, April 26, 2023 - 11:30am to 12:30pm
Location: RH 340P

I will talk about sharp isoperimetric inequalities on the hypercube refining several results due to Kahn--Park and Talagrand. In the second half of the talk I will present Talagrand's isoperimetric inequalities for functions taking values in a Banach space of finite cotype. In addition, several different proofs of a recently resolved conjecture of Talagrand will be presented.

This is joint work with David Beltran and José Ramón Madrid Padilla.
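
In tiny dimensions, sharp isoperimetry on the discrete cube can be verified by brute force. A sketch (an illustration of the classical edge-isoperimetric inequality, not of the talk's refinements) checking that among all subsets of the 3-cube of size 4, a subcube (a "dictator" set) minimizes the edge boundary:

```python
from itertools import combinations, product

def edge_boundary(subset, n):
    """Number of hypercube edges with exactly one endpoint in subset."""
    count = 0
    for x in subset:
        for i in range(n):
            neighbor = x[:i] + (1 - x[i],) + x[i + 1:]
            if neighbor not in subset:
                count += 1
    return count

n = 3
cube = list(product((0, 1), repeat=n))
sizes = [edge_boundary(frozenset(S), n) for S in combinations(cube, 4)]
dictator = frozenset(x for x in cube if x[0] == 0)  # subcube of density 1/2
print(min(sizes), edge_boundary(dictator, n))  # 4 4
```

Each point of the dictator set has exactly one neighbor outside it (flip the first coordinate), giving boundary 4, and the brute-force minimum over all 4-element subsets confirms this is optimal.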
