# A quantum algorithm for learning a hidden graph of bounded degree

## Speaker:

## Institution:

## Time:

## Location:

We are presented with a graph, $G$, on $n$ vertices and $m$ edges whose edge set is unknown. Our goal is to learn the edges of $G$ with as few queries to an oracle as possible. When we submit a set $S$ of vertices to the oracle, it tells us whether or not $S$ induces at least one edge in $G$. This so-called OR-query model has been well studied, with Angluin and Chen giving an upper bound of $O(m \log n)$ on the number of queries needed for a general graph $G$ with $m$ edges.
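As a concrete illustration of the query model (a minimal classical sketch only; the function and variable names are ours, not from the talk), an OR-query simply reports whether the submitted vertex set contains both endpoints of some edge:

```python
def or_query(edges, S):
    """OR-query oracle: does the vertex set S induce at least one edge of G?"""
    S = set(S)
    return any(u in S and v in S for (u, v) in edges)

# Example: G is the path 0-1-2 on three vertices.
edges = {(0, 1), (1, 2)}
or_query(edges, {0, 2})  # False: 0 and 2 are not adjacent
or_query(edges, {0, 1})  # True: S induces the edge (0, 1)
```

The quantum version of the model allows such subset queries to be made in superposition, which is where the speedups below come from.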

When we allow ourselves to make *quantum* queries (we may query subsets in superposition), then we can achieve speedups over the best possible classical algorithms. In the case where $G$ has maximum degree $d$ and is $O(1)$-colorable, Montanaro and Shao presented an algorithm that learns the edges of $G$ in at most $\tilde{O}( d^2 m^{3/4} )$ quantum queries. This gives an upper bound of $\tilde{O}( m^{3/4} )$ quantum queries when $G$ is a matching or a Hamiltonian cycle, which is significantly larger than the lower bound of $\Omega( \sqrt{m} )$ queries given by Ambainis and Montanaro.

We improve on the work of Montanaro and Shao in the case where $G$ has bounded degree. In particular, we present a randomized algorithm that, with high probability, learns cycles and matchings in $\tilde{O}( \sqrt{m} )$ quantum queries, matching the theoretical lower bound up to logarithmic factors.

# TBA

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

# Approximately Hadamard matrices and random frames

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

We will discuss a problem concerning random frames which arises in signal processing. A frame is an overcomplete set of vectors in an n-dimensional linear space that allows a robust decomposition of any vector in the space as a linear combination of these vectors. Random frames are used in signal processing as a means of encoding, since the loss of a fraction of the coordinates does not prevent recovery. We will discuss the question of when a random frame contains a copy of a nice (almost orthogonal) basis.

Despite the probabilistic nature of this problem, it reduces to a completely deterministic question about the existence of approximately Hadamard matrices. An n by n matrix with plus-minus 1 entries is called Hadamard if it acts on the space as a scaled isometry. Such matrices exist in some, but not all, dimensions. Nevertheless, we will construct plus-minus 1 matrices of every size which act as approximate scaled isometries. This construction will bring us back to probability, as we will have to combine number-theoretic and probabilistic methods.

Joint work with Xiaoyu Dong.

# TBA

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

# A Theory of Neuronal Synaptic Balance

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

AI today can pass the Turing test and is in the process of transforming science, technology, society, humans, and beyond. Surprisingly, modern AI is built out of two very simple and old ideas, rebranded as deep learning: neural networks and gradient descent learning. When a typical feed-forward neural network is trained by gradient descent, with an L2 regularizer to avoid overly large synaptic weights, a strange phenomenon occurs: at the optimum, each neuron becomes "balanced" in the sense that the L2 norm of its incoming synaptic weights becomes equal to the L2 norm of its outgoing synaptic weights. We develop a theory that explains this phenomenon and exposes its generality. Balance emerges with a variety of activation functions, a variety of regularizers including all Lp regularizers, and a variety of networks including recurrent networks. A simple local balancing algorithm can be applied to any neuron at any time, instead of just at the optimum. Most remarkably, stochastic iterated application of the local balancing algorithm always converges to a unique, globally balanced state.
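A local balancing step can be sketched for a single ReLU neuron (a minimal illustration under the assumption of a positively homogeneous activation; the function names are ours, and this is not the authors' implementation). Scaling the incoming weights by $\lambda$ and the outgoing weights by $1/\lambda$ leaves the network function unchanged, since $\mathrm{ReLU}(\lambda x) = \lambda\,\mathrm{ReLU}(x)$ for $\lambda > 0$, and choosing $\lambda$ appropriately equalizes the two L2 norms:

```python
import math

def balance_neuron(w_in, w_out):
    """One local L2 balancing step for a ReLU neuron: rescale incoming weights
    by lam and outgoing weights by 1/lam so that both vectors end up with the
    same L2 norm, without changing the function the network computes."""
    n_in = math.sqrt(sum(w * w for w in w_in))
    n_out = math.sqrt(sum(w * w for w in w_out))
    lam = math.sqrt(n_out / n_in)  # equalizes the two norms at sqrt(n_in * n_out)
    return [lam * w for w in w_in], [w / lam for w in w_out]

w_in, w_out = [3.0, 4.0], [1.0, 2.0, 2.0]    # L2 norms 5 and 3
w_in2, w_out2 = balance_neuron(w_in, w_out)  # both norms become sqrt(15)
```

The talk's result concerns what happens when such local steps are applied stochastically across all neurons of a network.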

# Homological Percolation on a Torus

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

Many well-studied properties of random graphs have interesting generalizations to higher dimensions in random simplicial complexes. We will discuss a version of percolation in which, instead of edges, we add two-dimensional (or higher-dimensional) cubes to a subcomplex of a large torus at random. In this setting, we see a phase transition that marks the appearance of giant "sheets," analogous to the appearance of giant components in graph models. This phenomenon is most naturally described in the language of algebraic topology, but this talk will not assume any topological background.

Based on joint work with Ben Schweinhart and Matt Kahle.

# Erdős–Kac Central Limit Theorem

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

The Erdős–Kac Central Limit Theorem says that if one selects an integer uniformly at random from 1 to N, then the number of distinct prime divisors of this integer satisfies a central limit theorem. We (the speaker, in joint work with Tom Mountford) give a new proof of this result using the Riemann zeta distribution and a Tauberian theorem. The proof generalizes easily to other settings, such as polynomials over a finite field or ideals in a number field.
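The statement is easy to probe empirically (a quick numerical illustration, not part of the proof): writing $\omega(n)$ for the number of distinct prime divisors of $n$, the theorem says $(\omega(n) - \log\log N)/\sqrt{\log\log N}$ is approximately standard normal for a uniform $n \le N$, so in particular the empirical mean of $\omega$ should sit near $\log\log N$:

```python
import math
import random

def omega(n):
    """Number of distinct prime divisors of n, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

random.seed(0)
N = 10**6
sample = [omega(random.randint(2, N)) for _ in range(2000)]
mean = sum(sample) / len(sample)
# loglog(10**6) is about 2.63; the sample mean lands nearby (the exact mean
# is loglog N plus a bounded constant, so convergence is very slow).
```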

# TBA

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

# Asymmetry helps: Non-backtracking spectral methods for sparse matrices and tensors

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Location:

The non-backtracking operator is a non-Hermitian matrix associated with an undirected graph. It has become a powerful tool in the spectral analysis of random graphs. Recently, many new results for sparse Hermitian random matrices have been proved via the corresponding non-backtracking operator and translated into statements about the original matrices through the Ihara–Bass formula. In another direction, efficient algorithms based on the non-backtracking matrix have achieved optimal sample complexity in many sparse low-rank estimation problems. I will talk about my recent work using the non-backtracking operator, including the Bai–Yin law for sparse random rectangular matrices, hypergraph community detection, and tensor completion.
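For readers new to the operator, here is its standard definition in code (a small sketch of the textbook construction; names are ours): the non-backtracking matrix $B$ is indexed by the directed edges of the graph, with $B_{(u \to v),(v \to w)} = 1$ whenever the second edge continues the first without immediately reversing, i.e. $w \neq u$.

```python
def non_backtracking(edges):
    """Non-backtracking matrix B of an undirected graph, indexed by directed
    edges: B[(u,v),(x,w)] = 1 iff x == v and w != u, i.e. the second edge
    continues the walk from v without stepping back to u."""
    directed = [(u, v) for (u, v) in edges] + [(v, u) for (u, v) in edges]
    idx = {e: i for i, e in enumerate(directed)}
    m = len(directed)
    B = [[0] * m for _ in range(m)]
    for (u, v) in directed:
        for (x, w) in directed:
            if x == v and w != u:
                B[idx[(u, v)]][idx[(x, w)]] = 1
    return B, directed

# Triangle on vertices 0, 1, 2: six directed edges, and each has exactly one
# non-backtracking continuation, so every row of B sums to 1.
B, directed = non_backtracking([(0, 1), (1, 2), (2, 0)])
```

Note that $B$ is non-symmetric even though the graph is undirected, which is the "asymmetry" of the title.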