A quantum algorithm for learning a hidden graph of bounded degree

Speaker: 

Liam Hardiman

Institution: 

UCI

Time: 

Wednesday, February 14, 2024 - 2:00pm

Location: 

510R Rowland Hall

We are presented with a graph $G$ on $n$ vertices and $m$ edges whose edge set is unknown. Our goal is to learn the edges of $G$ with as few queries to an oracle as possible. When we submit a set $S$ of vertices to the oracle, it tells us whether or not $S$ induces at least one edge in $G$. This so-called OR-query model has been well studied, with Angluin and Chen giving an upper bound of $O(m \log n)$ on the number of queries needed for a general graph $G$ with $m$ edges.
   
When we allow ourselves to make *quantum* queries (we may query subsets in superposition), then we can achieve speedups over the best possible classical algorithms. In the case where $G$ has maximum degree $d$ and is $O(1)$-colorable, Montanaro and Shao presented an algorithm that learns the edges of $G$ in at most $\tilde{O}( d^2 m^{3/4} )$ quantum queries. This gives an upper bound of $\tilde{O}( m^{3/4} )$ quantum queries when $G$ is a matching or a Hamiltonian cycle, which is significantly larger than the lower bound of $\Omega( \sqrt{m} )$ queries given by Ambainis and Montanaro.

We improve on the work of Montanaro and Shao in the case where $G$ has bounded degree. In particular, we present a randomized algorithm that, with high probability, learns cycles and matchings in $\tilde{O}( \sqrt{m} )$ quantum queries, matching the theoretical lower bound up to logarithmic factors.
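As a concrete illustration of the classical OR-query primitive (not the quantum algorithm from the talk), here is a minimal Python sketch that finds a single edge using O(log n) OR-queries via two binary searches; the `oracle` and `find_edge` names are hypothetical, chosen for this example only.

```python
def find_edge(vertices, oracle):
    """Find one edge among `vertices` with O(log n) OR-queries, assuming
    oracle(S) = True iff S induces at least one edge of the hidden graph.
    Step 1: the smallest prefix that induces an edge pins down one endpoint.
    Step 2: a second search inside that prefix finds the other endpoint."""
    def smallest_true(pred, lo, hi):
        # smallest k in [lo, hi] with pred(k) True; pred is monotone
        # because the oracle answer can only flip from False to True
        # as the queried set grows.
        while lo < hi:
            mid = (lo + hi) // 2
            if pred(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    i = smallest_true(lambda k: oracle(set(vertices[:k])), 1, len(vertices))
    u = vertices[i - 1]  # endpoint: the prefix without u induces no edge
    j = smallest_true(lambda k: oracle(set(vertices[:k]) | {u}), 1, i - 1)
    return (vertices[j - 1], u)

# Hidden matching on 8 vertices (illustrative data)
edges = {frozenset({0, 5}), frozenset({2, 7})}
oracle = lambda S: any(e <= S for e in edges)
print(find_edge(list(range(8)), oracle))
```

Repeating this on the residual graph after deleting discovered edges recovers a matching classically; the quantum speedup in the talk comes from querying such subsets in superposition.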

Approximately Hadamard matrices and random frames

Speaker: 

Mark Rudelson

Institution: 

University of Michigan

Time: 

Wednesday, February 28, 2024 - 2:00pm

Location: 

510R Rowland Hall

We will discuss a problem concerning random frames that arises in signal processing. A frame is an overcomplete set of vectors in an n-dimensional linear space that allows a robust decomposition of any vector in this space as a linear combination of these vectors. Random frames are used in signal processing as a means of encoding, since the loss of a fraction of the coordinates does not prevent recovery. We will discuss the question of when a random frame contains a copy of a nice (almost orthogonal) basis.

Despite the probabilistic nature of this problem, it reduces to a completely deterministic question: the existence of approximately Hadamard matrices. An n by n matrix with plus-minus 1 entries is called Hadamard if it acts on the space as a scaled isometry. Such matrices exist in some, but not all, dimensions. Nevertheless, we will construct plus-minus 1 matrices of every size that act as approximate scaled isometries. This construction will bring us back to probability, as we will have to combine number-theoretic and probabilistic methods.

Joint work with Xiaoyu Dong.
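In dimensions that are powers of two, exact Hadamard matrices do exist, via the classical Sylvester construction. A minimal sketch of that construction and of the scaled-isometry property (the `sylvester` helper name is hypothetical, not from the talk):

```python
import numpy as np

def sylvester(k):
    """Sylvester construction: a 2^k x 2^k matrix with +-1 entries
    satisfying H H^T = n I, i.e. H / sqrt(n) is an isometry."""
    H = np.array([[1]])
    for _ in range(k):
        # Doubling step: [[H, H], [H, -H]] stays Hadamard.
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester(3)  # 8 x 8
n = H.shape[0]
# Hadamard property: rows are pairwise orthogonal with norm sqrt(n)
assert np.array_equal(H @ H.T, n * np.eye(n))
```

The talk's "approximately Hadamard" matrices relax the exact identity H H^T = n I to hold up to bounded distortion, which is what makes a construction possible in every dimension.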

A Theory of Neuronal Synaptic Balance

Speaker: 

Pierre Baldi

Institution: 

UCI

Time: 

Wednesday, February 7, 2024 - 2:00pm

Location: 

510R Rowland Hall

AI today can pass the Turing test and is in the process of transforming science, technology, society, humans, and beyond.
Surprisingly, modern AI is built out of two very simple and old ideas, rebranded as deep learning: neural networks and
gradient descent learning. When a typical feed-forward neural network is trained by gradient descent, with an L2 regularizer
to avoid overly large synaptic weights, a strange phenomenon occurs: at the optimum, each neuron becomes "balanced"
in the sense that the L2 norm of its incoming synaptic weights becomes equal to the L2 norm of its outgoing synaptic weights. We develop a theory that explains this phenomenon and exposes its generality. Balance emerges with a variety of activation functions, a variety of regularizers including all Lp regularizers, and a variety of networks including recurrent networks. A simple local balancing algorithm can be applied to any neuron at any time, rather than only at the optimum. Most remarkably, stochastic iterated application of the local balancing algorithm always converges to a unique, globally balanced state.
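The rescaling behind such a local balancing step can be sketched for a single bias-free ReLU neuron: since relu(c·x) = c·relu(x) for c > 0, scaling the incoming weights by c and the outgoing weights by 1/c leaves the network's function unchanged, and choosing c = sqrt(||w_out|| / ||w_in||) equalizes the two L2 norms. A hypothetical sketch under these assumptions (the theory in the talk covers more general activations and regularizers):

```python
import numpy as np

def balance_neuron(w_in, w_out):
    """Function-preserving rescaling of one bias-free ReLU neuron:
    relu(c*x) = c*relu(x) for c > 0, so (c*w_in, w_out/c) computes the
    same function. c = sqrt(||w_out|| / ||w_in||) equalizes the norms
    and minimizes the neuron's contribution to the L2 penalty."""
    c = np.sqrt(np.linalg.norm(w_out) / np.linalg.norm(w_in))
    return c * w_in, w_out / c

rng = np.random.default_rng(0)
w_in, w_out = rng.normal(size=5), rng.normal(size=3)
b_in, b_out = balance_neuron(w_in, w_out)
assert np.isclose(np.linalg.norm(b_in), np.linalg.norm(b_out))
```

Both rescaled norms equal sqrt(||w_in|| · ||w_out||), the geometric mean, which is why the balanced state minimizes ||w_in||^2 + ||w_out||^2 over all function-preserving rescalings.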

Homological Percolation on a Torus

Speaker: 

Paul Duncan

Institution: 

Hebrew University of Jerusalem

Time: 

Wednesday, January 31, 2024 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

Many well-studied properties of random graphs have interesting generalizations to higher dimensions in random simplicial complexes. We will discuss a version of percolation in which, instead of edges, we add two (or higher)-dimensional cubes to a subcomplex of a large torus at random. In this setting, we see a phase transition that marks the appearance of giant "sheets," analogous to the appearance of giant components in graph models. This phenomenon is most naturally described in the language of algebraic topology, but this talk will not assume any topological background. 

Based on joint work with Ben Schweinhart and Matt Kahle.

Erdős-Kac Central Limit Theorem

Speaker: 

Michael Cranston

Institution: 

UCI

Time: 

Wednesday, February 21, 2024 - 2:00pm

Location: 

510R Rowland Hall

The Erdős-Kac Central Limit Theorem says that if one selects an integer uniformly at random from 1 to N, then the number of distinct prime divisors of this number, suitably normalized, satisfies a central limit theorem. We (the speaker, in joint work with Tom Mountford) give a new proof of this result using the Riemann zeta distribution and a Tauberian theorem. The proof generalizes easily to other settings, such as polynomials over a finite field or ideals in a number field.
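The centering in the theorem is log log N: the mean of ω(n), the number of distinct prime divisors, over n ≤ N is log log N plus a constant (Mertens' constant, about 0.26). This is easy to check empirically with a small sieve; variable names here are illustrative, not from the talk.

```python
import math

N = 100_000
# Sieve for omega(n) = number of distinct prime divisors of n:
# when we reach p with omega[p] == 0, no smaller prime divides p,
# so p is prime; credit every multiple of p with one prime factor.
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:
        for multiple in range(p, N + 1, p):
            omega[multiple] += 1

mean = sum(omega[2:]) / (N - 1)
print(f"mean omega(n) for n <= {N}: {mean:.3f}")
print(f"log log N               : {math.log(math.log(N)):.3f}")
```

Erdős-Kac says more: the fluctuations (ω(n) - log log N) / sqrt(log log N) converge in distribution to a standard Gaussian as N grows.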

Asymmetry helps: Non-backtracking spectral methods for sparse matrices and tensors

Speaker: 

Yizhe Zhu

Institution: 

UCI

Time: 

Wednesday, October 18, 2023 - 2:00pm to 3:00pm

Location: 

510R Rowland Hall

The non-backtracking operator is a non-Hermitian matrix associated with an undirected graph. It has become a powerful tool in the spectral analysis of random graphs. Recently, many new results for sparse Hermitian random matrices have been proved for the corresponding non-backtracking operator and translated into statements about the original matrices through the Ihara-Bass formula. In another direction, efficient algorithms based on the non-backtracking matrix have achieved optimal sample complexity in many sparse low-rank estimation problems. I will discuss my recent work using the non-backtracking operator, including the Bai-Yin law for sparse random rectangular matrices, hypergraph community detection, and tensor completion.
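The Ihara-Bass formula mentioned above, det(I - uB) = (1 - u^2)^(m-n) det(I - uA + u^2 (D - I)), relates the non-backtracking matrix B, indexed by the 2m directed edges, to the n x n adjacency matrix A and degree matrix D. A small numerical check, with illustrative names not taken from the talk:

```python
import numpy as np

def nonbacktracking(A):
    """Non-backtracking matrix B of an undirected graph with adjacency A,
    indexed by directed edges (darts): B[(u,v),(v,w)] = 1 iff w != u,
    i.e. the walk may continue anywhere except straight back."""
    n = A.shape[0]
    darts = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
    B = np.zeros((len(darts), len(darts)))
    for i, (u, v) in enumerate(darts):
        for j, (v2, w) in enumerate(darts):
            if v2 == v and w != u:
                B[i, j] = 1
    return B

# 4-cycle with one chord: n = 4 vertices, m = 5 edges
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
n, m = 4, int(A.sum() // 2)
B = nonbacktracking(A)
D = np.diag(A.sum(axis=1))

u = 0.3  # arbitrary evaluation point
lhs = np.linalg.det(np.eye(2 * m) - u * B)
rhs = (1 - u**2)**(m - n) * np.linalg.det(
    np.eye(n) - u * A + u**2 * (D - np.eye(n)))
assert np.isclose(lhs, rhs)  # Ihara-Bass identity
```

Because the identity holds for all u, the 2m eigenvalues of B are determined by an n x n quadratic eigenvalue problem in A and D, which is how non-backtracking results translate back to the original matrix.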
