We are presented with a graph $G$ on $n$ vertices and $m$ edges whose edge set is unknown. Our goal is to learn the edges of $G$ with as few queries to an oracle as possible. When we submit a set $S$ of vertices to the oracle, it tells us whether or not $S$ induces at least one edge of $G$. This so-called OR-query model has been well studied; in particular, Angluin and Chen showed that $O(m \log n)$ queries suffice for a general graph $G$ with $m$ edges.
When we allow ourselves to make *quantum* queries (we may query subsets in superposition), then we can achieve speedups over the best possible classical algorithms. In the case where $G$ has maximum degree $d$ and is $O(1)$-colorable, Montanaro and Shao presented an algorithm that learns the edges of $G$ in at most $\tilde{O}( d^2 m^{3/4} )$ quantum queries. This gives an upper bound of $\tilde{O}( m^{3/4} )$ quantum queries when $G$ is a matching or a Hamiltonian cycle, which is significantly larger than the lower bound of $\Omega( \sqrt{m} )$ queries given by Ambainis and Montanaro.
We improve on the work of Montanaro and Shao in the case where $G$ has bounded degree. In particular, we present a randomized algorithm that, with high probability, learns cycles and matchings in $\tilde{O}( \sqrt{m} )$ quantum queries, matching the theoretical lower bound up to logarithmic factors.
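As a classical point of reference, the OR-query oracle and the halving trick that underlies such learning algorithms are easy to simulate. The sketch below (illustrative function names, not the quantum algorithm of the talk) finds one edge induced by a set $S$ with $O(\log |S|)$ OR-queries:

```python
def oracle(S, edges):
    """OR-query: does the vertex set S induce at least one edge of G?"""
    S = set(S)
    return any(u in S and v in S for u, v in edges)

def bisect(A, B, edges):
    """Shrink A to a single vertex with a neighbour in B.  Assumes A u B
    induces an edge while neither A nor B induces one internally."""
    while len(A) > 1:
        half = A[:len(A) // 2]
        A = half if oracle(half + B, edges) else A[len(A) // 2:]
    return A[0]

def find_edge(S, edges):
    """Return one edge induced by the list S (assumed to exist),
    using O(log |S|) OR-queries."""
    if len(S) == 2:
        return tuple(S)
    S1, S2 = S[:len(S) // 2], S[len(S) // 2:]
    if oracle(S1, edges):
        return find_edge(S1, edges)
    if oracle(S2, edges):
        return find_edge(S2, edges)
    u = bisect(S1, S2, edges)   # the induced edge must cross S1 <-> S2
    v = bisect(S2, [u], edges)
    return (u, v)
```

Repeating this search while removing discovered edges recovers all of $G$, one edge per $O(\log n)$ queries, which is the shape of the classical $O(m \log n)$ bound.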
Large-scale linear systems, Ax=b, frequently arise in data science and scientific computing, demanding effective iterative methods to solve them. Often, these systems are noisy due to operational errors or faulty data-collection processes. In the past decade, the randomized Kaczmarz algorithm (RK) has been studied extensively as an efficient iterative solver for such systems. However, the convergence analysis of RK in the noisy regime is limited and considers only measurement noise in the right-hand-side vector, b. In practice, the coefficient matrix A can also be noisy. In this talk, we motivate and discuss doubly noisy linear systems and the performance of the Kaczmarz algorithm applied to such systems.
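For concreteness, here is a minimal NumPy sketch of the standard (noiseless) randomized Kaczmarz update of Strohmer and Vershynin, not the doubly-noisy analysis of the talk:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz: at each step, pick row i with probability
    proportional to ||a_i||^2 and project the current iterate onto the
    hyperplane {x : a_i x = b_i}."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    sq_norms = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = sq_norms / sq_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += ((b[i] - A[i] @ x) / sq_norms[i]) * A[i]
    return x
```

On a consistent system the iterates converge linearly in expectation; when b is corrupted by noise (and all the more so when A is also noisy, the doubly noisy setting of the talk) the iterates can only be expected to reach a neighborhood of the underlying solution.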
We will discuss a problem concerning random frames which arises in signal processing. A frame is an overcomplete set of vectors in an n-dimensional linear space which allows a robust decomposition of any vector in this space as a linear combination of these vectors. Random frames are used in signal processing as a means of encoding, since the loss of a fraction of the coordinates does not prevent recovery. We will discuss the question of when a random frame contains a copy of a nice (almost orthogonal) basis.
Despite the probabilistic nature of this problem, it reduces to a completely deterministic question about the existence of approximately Hadamard matrices. An n by n matrix with plus-minus 1 entries is called Hadamard if it acts on the space as a scaled isometry. Such matrices exist in some, but not all, dimensions. Nevertheless, we will construct plus-minus 1 matrices of every size which act as approximate scaled isometries. This construction will bring us back to probability, as we will have to combine number-theoretic and probabilistic methods.
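The exact case is easy to illustrate in dimensions that are powers of 2, where Sylvester's classical doubling construction produces Hadamard matrices; this is only the textbook special case, not the every-size construction of the talk:

```python
import numpy as np

def sylvester(k):
    """Sylvester's doubling construction: a 2^k x 2^k Hadamard matrix
    with +/-1 entries."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester(3)
# H^T H = 8 I, so H acts on R^8 as a scaled isometry: ||Hx|| = sqrt(8) ||x||.
```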
The famous no-three-in-line problem, posed by Dudeney more than a century ago, asks whether one can select $2n$ points from the grid $[n]^2$ such that no three are collinear. We present two results related to this problem. First, we give a non-trivial upper bound on the maximum size of a set of points in $[n]^4$ such that no four are coplanar. Second, we characterize the behavior of the maximum size of a subset with no three collinear points in a random subset of $\mathbb{F}_q^2$, that is, the plane over the finite field of order $q$. We discuss the proofs and related open problems.
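A small illustration of the collinearity condition (not from the talk): the classical Erdos construction places, for a prime $p$, the $p$ points $(x, x^2 \bmod p)$ in the grid, and no three of them are collinear, already half of Dudeney's target of $2p$ points.

```python
from itertools import combinations

def no_three_collinear(points):
    """Check, via the cross product, that no three of the points are collinear."""
    for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3):
        if (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1):
            return False
    return True

# Erdos's parabola construction for a prime p: p points, no three collinear.
p = 13
parabola = [(x, x * x % p) for x in range(p)]
```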
AI today can pass the Turing test and is in the process of transforming science, technology, society, humans, and beyond. Surprisingly, modern AI is built out of two very simple and old ideas, rebranded as deep learning: neural networks and gradient descent learning. When a typical feed-forward neural network is trained by gradient descent, with an L2 regularizer to avoid overly large synaptic weights, a strange phenomenon occurs: at the optimum, each neuron becomes "balanced" in the sense that the L2 norm of its incoming synaptic weights becomes equal to the L2 norm of its outgoing synaptic weights. We develop a theory that explains this phenomenon and exposes its generality. Balance emerges with a variety of activation functions, a variety of regularizers including all Lp regularizers, and a variety of networks including recurrent networks. A simple local balancing algorithm can be applied to any neuron and at any time, instead of just at the optimum. Most remarkably, stochastic iterated application of the local balancing algorithm always converges to a unique, globally balanced, state.
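A minimal sketch of such a local balancing step, for the special case of a single ReLU hidden layer (the talk's theory covers far more general activations, regularizers, and architectures): because ReLU is positively homogeneous, scaling a unit's incoming weights by lam and its outgoing weights by 1/lam preserves the network's function, and choosing lam to equalize the two norms balances the unit.

```python
import numpy as np

def balance_neuron(W_in, W_out, j):
    """Rescale hidden unit j so its incoming and outgoing L2 norms match.
    For positively homogeneous activations (e.g. ReLU) this leaves the
    network's function unchanged, since relu(lam * z) == lam * relu(z)."""
    n_in = np.linalg.norm(W_in[j, :])    # L2 norm of incoming weights
    n_out = np.linalg.norm(W_out[:, j])  # L2 norm of outgoing weights
    lam = np.sqrt(n_out / n_in)
    W_in[j, :] *= lam
    W_out[:, j] /= lam

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))   # input -> hidden
W2 = rng.standard_normal((2, 5))   # hidden -> output
x = rng.standard_normal(3)
y_before = W2 @ np.maximum(W1 @ x, 0.0)

for j in range(5):                 # one balancing pass over the hidden layer
    balance_neuron(W1, W2, j)
y_after = W2 @ np.maximum(W1 @ x, 0.0)
# The output is unchanged, and every hidden unit is now balanced.
```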
Many well-studied properties of random graphs have interesting generalizations to higher dimensions in random simplicial complexes. We will discuss a version of percolation in which, instead of edges, we add two (or higher)-dimensional cubes to a subcomplex of a large torus at random. In this setting, we see a phase transition that marks the appearance of giant "sheets," analogous to the appearance of giant components in graph models. This phenomenon is most naturally described in the language of algebraic topology, but this talk will not assume any topological background.
Based on joint work with Ben Schweinhart and Matt Kahle.
The Erdos-Kac Central Limit Theorem says that if one selects an integer at random from 1 to N, then the number of distinct prime divisors of this number satisfies a Central Limit Theorem. We (the speaker, in joint work with Tom Mountford) give a new proof of this result using the Riemann zeta distribution and a Tauberian theorem. The proof generalizes easily to other settings, such as polynomials over a finite field or ideals in a number field.
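The theorem is easy to see empirically. The quick simulation below (illustrative only) samples integers up to N and checks that the mean number of distinct prime factors is close to loglog N, around which the normalized count is approximately Gaussian:

```python
import math
import random

def omega(n):
    """Number of distinct prime divisors of n (trial division)."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

# Erdos-Kac: for uniform n in [2, N], (omega(n) - loglog N) / sqrt(loglog N)
# is approximately standard normal.  Sanity check on the mean, which is
# loglog N plus a Mertens-type constant (~0.2615):
N = 10**6
random.seed(0)
sample = [omega(random.randrange(2, N)) for _ in range(20000)]
mean = sum(sample) / len(sample)
```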
A 0/1 polytope is the convex hull of a set of 0/1 d-dimensional vectors. A conjecture of Milena Mihail and Umesh Vazirani says that the graph of vertices and edges of every 0/1 polytope is highly connected. Specifically, it states that the edge expansion of the graph of every 0/1 polytope is at least one. Any lower bound on the edge expansion gives an upper bound for the mixing time of a random walk on the graph of the polytope. Such random walks are important because they can be used to generate an element from a set of combinatorial objects uniformly at random. A weaker form of the conjecture of Mihail and Vazirani says that the edge expansion of the graph of a 0/1 polytope in R^d is greater than 1 over some polynomial function of d. This weaker version of the conjecture would suffice for all applications. Our main result is that the edge expansion of the graph of a random 0/1 polytope in R^d is at least 1/(12d) with high probability. This is joint work with Brett Leroux.
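To make the quantity concrete (this example is not from the talk): the 0/1 cube itself is a 0/1 polytope whose graph is the hypercube, and it attains the conjectured bound exactly. A brute-force computation of edge expansion for the 3-cube:

```python
from itertools import combinations

def cube_graph_edges(d):
    """Graph of the 0/1 cube in R^d: vertices are the 2^d bit strings,
    adjacent iff they differ in exactly one coordinate."""
    return [(u, u ^ (1 << i)) for u in range(2 ** d)
            for i in range(d) if u < (u ^ (1 << i))]

def edge_expansion(n_vertices, edges):
    """Brute-force edge expansion: minimum over sets S with 1 <= |S| <= n/2
    of (number of edges leaving S) / |S|."""
    best = float("inf")
    for k in range(1, n_vertices // 2 + 1):
        for S in combinations(range(n_vertices), k):
            S = set(S)
            cut = sum((u in S) != (v in S) for u, v in edges)
            best = min(best, cut / len(S))
    return best

# The 3-cube meets the Mihail-Vazirani bound with equality: expansion 1,
# achieved by a half-cube (4 vertices with 4 edges leaving).
```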
The non-backtracking operator is a non-Hermitian matrix associated with an undirected graph. It has become a powerful tool in the spectral analysis of random graphs. Recently, many new results for sparse Hermitian random matrices have been proved for the corresponding non-backtracking operator and then translated into statements about the original matrices through the Ihara-Bass formula. In another direction, efficient algorithms based on the non-backtracking matrix have successfully reached optimal sample complexity in many sparse low-rank estimation problems. I will talk about my recent work that uses the non-backtracking operator, including the Bai-Yin law for sparse random rectangular matrices, hypergraph community detection, and tensor completion.
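The operator itself is simple to write down explicitly; the sketch below (illustrative, not from the talk) builds it for a small graph:

```python
import numpy as np

def non_backtracking(edges):
    """Non-backtracking matrix of an undirected graph.  Rows and columns
    are indexed by directed edges (darts); B[(u,v),(x,y)] = 1 iff v == x
    and y != u, i.e. the walk continues from v without immediately
    returning to u."""
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    B = np.zeros((len(darts), len(darts)))
    for u, v in darts:
        for x, y in darts:
            if v == x and y != u:
                B[idx[(u, v)], idx[(x, y)]] = 1.0
    return B

# On the triangle, every dart has exactly one non-backtracking continuation,
# so B is a permutation matrix and its spectrum lies on the unit circle.
B = non_backtracking([(0, 1), (1, 2), (2, 0)])
```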
In this talk, I will outline the proof that $r(4,t) = t^{3 - o(1)}$, which solves a long-standing open problem of Erdos. The proof combines a variety of techniques and lends itself to improving bounds on various other Ramsey numbers $r(F,t)$, where $F$ is a fixed graph, as well as to applications in a variety of extremal and Ramsey problems.