# Invertibility of adjacency matrices of random graphs

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

Consider the adjacency matrix of a bipartite, directed, or undirected Erdős–Rényi random graph. If the average degree of a vertex is large enough, then such a matrix is invertible with high probability. As the average degree decreases, the probability that the matrix is singular increases, and for a sufficiently small average degree the matrix is singular with probability close to 1. We will discuss when this transition occurs, and what the main reason for the singularity of the adjacency matrix is. This is joint work with Anirban Basak.
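A minimal Monte Carlo sketch of the phenomenon (illustrative parameters, not from the talk): for a very sparse Erdős–Rényi graph the adjacency matrix is almost always singular, and a zero row from an isolated vertex is one obvious cause, while at large average degree singularity becomes rare.

```python
import numpy as np

def sample_adjacency(n, p, rng):
    """Adjacency matrix of an undirected Erdos-Renyi graph G(n, p)."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper + upper.T).astype(float)

def singular_and_isolated_fractions(n, p, trials, rng):
    """Estimate P(adjacency is singular) and P(graph has an isolated vertex)."""
    singular = isolated = 0
    for _ in range(trials):
        A = sample_adjacency(n, p, rng)
        if np.linalg.matrix_rank(A) < n:
            singular += 1
        if (A.sum(axis=1) == 0).any():  # a zero row forces singularity
            isolated += 1
    return singular / trials, isolated / trials

rng = np.random.default_rng(0)
n, trials = 30, 200
sparse = singular_and_isolated_fractions(n, 0.5 / n, trials, rng)  # avg degree ~ 0.5
dense = singular_and_isolated_fractions(n, 0.5, trials, rng)       # avg degree ~ n/2
print("sparse (singular, isolated):", sparse)
print("dense  (singular, isolated):", dense)
```

In each regime the singular fraction is at least the isolated-vertex fraction, since a zero row already makes the determinant vanish; the talk concerns the sharper question of where the transition happens and whether such local obstructions are the dominant cause.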

# Hashing II. Applications.

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Location:

This talk will focus on applications of hashing. We use the Leftover Hash Lemma to count linearly independent polynomials defined on a given set. From this we will derive a recent result of Abbe, Shpilka and Wigderson on the linear independence of random tensors. Unfortunately, methods based on hashing work only over finite fields. Pierre Baldi and I found a totally different approach to random tensors, which I will explain in detail.
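As a toy analogue of the finite-field setting where these methods apply (illustrative only, not the polynomial or tensor result from the talk), one can check numerically that a few random vectors over GF(2) are linearly independent with probability $\prod_{i=0}^{k-1}(1 - 2^{i-n})$:

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors, via Gaussian elimination
    keyed on the highest set bit of each basis element."""
    basis = {}  # pivot bit position -> basis vector
    for v in vectors:
        while v:
            p = v.bit_length() - 1
            if p in basis:
                v ^= basis[p]  # eliminate the leading bit
            else:
                basis[p] = v
                break
    return len(basis)

n, k, trials = 20, 5, 500
rng = random.Random(1)
indep = sum(
    gf2_rank([rng.getrandbits(n) for _ in range(k)]) == k
    for _ in range(trials)
)
est = indep / trials
exact = 1.0
for i in range(k):
    exact *= 1 - 2.0 ** (i - n)  # P(k uniform vectors in GF(2)^n independent)
print("estimate:", est, "exact:", exact)
```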

# Hashing

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Location:

Hashing is a technique widely used in coding theory (an area of computer science) and in cryptography. Although hashing is an interesting mathematical object, it is surprisingly little known to "mainstream" mathematicians. I will focus on one specific result on hashing, namely the Leftover Hash Lemma. We will state it as a result in extremal combinatorics, give a probabilistic proof of it, and relate it to another fundamental result in extremal combinatorics, the Sauer-Shelah Lemma.
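A small exhaustive sketch of what the Leftover Hash Lemma guarantees (toy parameters chosen here for illustration): hashing a source of min-entropy $k$ with a random member of a 2-universal family (random linear maps over GF(2)) yields an output whose average statistical distance from uniform is at most $\frac{1}{2}\sqrt{2^m/2^k}$.

```python
from itertools import product

def parity(x):
    return bin(x).count("1") % 2

def hash_apply(rows, x):
    """Linear hash over GF(2): each row is a 4-bit mask, output is m parity bits."""
    return tuple(parity(r & x) for r in rows)

# A source X uniform on 8 points of {0,1}^4, so min-entropy k = 3.
S = [0, 1, 2, 3, 4, 5, 6, 9]
m = 2  # number of output bits
lhl_bound = 0.5 * (2**m / 2**3) ** 0.5

total = 0.0
families = list(product(range(16), repeat=2))  # all 2x4 matrices over GF(2)
for rows in families:
    counts = {}
    for x in S:
        y = hash_apply(rows, x)
        counts[y] = counts.get(y, 0) + 1
    # statistical distance between h(X) and the uniform distribution on {0,1}^m
    dist = 0.5 * sum(
        abs(counts.get(y, 0) / len(S) - 1 / 2**m)
        for y in product((0, 1), repeat=m)
    )
    total += dist
avg = total / len(families)
print("average distance:", avg, "LHL bound:", lhl_bound)
```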

# The conjugate gradient algorithm on random matrices

## Speaker:

## Institution:

## Time:

## Location:

Numerical analysis and random matrix theory have long been coupled, going (at least) back to the seminal work of Goldstine and von Neumann (1951) on the condition number of random matrices. The works of Trotter (1984) and Silverstein (1985) incorporate "numerical techniques" to assist in the analysis of random matrices. One can also consider the problem of computing distributions (e.g. Tracy-Widom) from random matrix theory. In this talk, I will discuss a different numerical analysis problem: using random matrices to analyze the runtime (or halting time) of numerical algorithms. I will focus on recent results for the conjugate gradient algorithm including a proof that the halting time is almost deterministic.
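A minimal numerical sketch of the concentration phenomenon (illustrative parameters; not the speaker's proof): running conjugate gradient on independent Wishart-type sample covariance matrices of the same size produces halting times that barely fluctuate from trial to trial.

```python
import numpy as np

def cg_halting_time(W, b, tol=1e-10, max_iter=None):
    """Conjugate gradient for Wx = b; returns the iteration count
    at which the residual norm first drops below tol."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()          # residual b - Wx with x = 0
    p = r.copy()
    rs = r @ r
    for k in range(max_iter or n):
        Wp = W @ p
        alpha = rs / (p @ Wp)
        x += alpha * p
        r -= alpha * Wp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            return k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter or n

rng = np.random.default_rng(2)
n, m = 200, 400  # sample covariance of m Gaussian samples in dimension n
halts = []
for _ in range(5):
    X = rng.standard_normal((n, m))
    W = (X @ X.T) / m            # positive definite Wishart-type matrix
    b = rng.standard_normal(n)
    b /= np.linalg.norm(b)
    halts.append(cg_halting_time(W, b))
print("halting times across trials:", halts)
```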

# Phase transitions and conic geometry

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

A phase transition is a sharp change in the behavior of a mathematical model as one of its parameters varies. This talk describes a striking phase transition that takes place in conic geometry. First, we will explain how to assign a notion of "dimension" to a convex cone. Then we will use this notion of "dimension" to see that two randomly oriented convex cones share a ray with probability close to zero or close to one. This fact has implications for many questions in signal processing. In particular, it yields a complete solution of the "compressed sensing" problem of when we can recover a sparse signal from random measurements. Based on joint work with Dennis Amelunxen, Martin Lotz, Mike McCoy, and Samet Oymak.
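A small self-contained sketch of the compressed sensing problem mentioned above (illustrative parameters, assuming `scipy` is available): recover a sparse vector from a few random Gaussian measurements by basis pursuit, i.e. $\ell_1$ minimization subject to $Ax = b$, written as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 20, 12, 2             # ambient dimension, measurements, sparsity
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))  # Gaussian measurement matrix
b = A @ x0

# Basis pursuit: min ||x||_1 s.t. Ax = b, via the standard LP lift
# with variables z = [x; t]: minimize sum(t), -t <= x <= t, Ax = b.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n), method="highs")
x_hat = res.x[:n]
print("l1 of recovery:", np.abs(x_hat).sum(), "l1 of truth:", np.abs(x0).sum())
print("recovery error:", np.linalg.norm(x_hat - x0))
```

Whether this succeeds for given $(n, m, k)$ is exactly the phase transition of the talk: success is governed by whether the number of measurements exceeds the statistical dimension of the descent cone of the $\ell_1$ norm at the signal.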

# Dynamic embedding of motifs into networks

## Speaker:

## Institution:

## Time:

## Host:

## Location:

We study structural properties of a large network $G$ by randomly embedding a small motif $F$ of choice. We propose two randomized algorithms to effectively sample such a random embedding by a Markov chain Monte Carlo method. Time averages of various functionals of these chains give structural information about $G$ via conditional homomorphism densities and density profiles of its filtration. We show that such observables are stable with respect to various notions of network distance. Our efficient sampling algorithms and stability inequalities allow us to use these techniques for hypothesis testing on, and hierarchical clustering of, large networks. We demonstrate this by analyzing both synthetic and real-world network data. Joint work with Facundo Mémoli and David Sivakoff.
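A toy sketch in the spirit of the abstract (not the authors' algorithms): a Glauber-type Markov chain on homomorphisms of a 3-vertex path into a host graph, whose time averages estimate a conditional homomorphism density. Here the host graph is $K_4$, where the chance that a uniformly random embedded path closes into a triangle is exactly 2/3.

```python
import random

# Host network G: the complete graph K4 as an adjacency list.
G = {v: [u for u in range(4) if u != v] for v in range(4)}

def glauber_step(state, G, rng):
    """One Glauber update for homomorphisms of the path v1-v2-v3 into G:
    pick a path vertex uniformly and resample its image uniformly among
    choices consistent with the images of its path-neighbors. This chain
    has the uniform distribution on homomorphisms as its stationary law."""
    a, b, c = state
    i = rng.randrange(3)
    if i == 0:
        a = rng.choice(G[b])
    elif i == 2:
        c = rng.choice(G[b])
    else:
        common = [u for u in G[a] if u in set(G[c])]
        if common:          # in K4 this set is never empty
            b = rng.choice(common)
    return (a, b, c)

rng = random.Random(4)
state = (0, 1, 0)  # a valid homomorphism of the path into K4
closed = 0
steps = 20000
for _ in range(steps):
    state = glauber_step(state, G, rng)
    a, b, c = state
    closed += c in G[a]  # does the embedded path close into a triangle?
est = closed / steps
print("P(path closes into a triangle):", est)  # stationary value on K4: 2/3
```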

# Random matrix perturbations

## Speaker:

## Speaker Link:

## Institution:

## Time:

## Host:

## Location:

Computing the eigenvalues and eigenvectors of a large matrix is a basic task in high dimensional data analysis with many applications in computer science and statistics. In practice, however, data is often perturbed by noise. A natural question is the following: How much does a small perturbation to the matrix change the eigenvalues and eigenvectors? In this talk, I will consider the case where the perturbation is random. I will discuss perturbation results for the eigenvalues and eigenvectors as well as for the singular values and singular vectors. This talk is based on joint work with Van Vu, Ke Wang, and Philip Matchett Wood.
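As a baseline for the random-perturbation results of the talk, here is a minimal numerical check (illustrative sizes) of the deterministic worst-case bound, Weyl's inequality: for symmetric matrices, each eigenvalue moves by at most the operator norm of the perturbation. The talk concerns how much better one typically does when the perturbation is random.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2               # symmetric "data" matrix
E = rng.standard_normal((n, n))
E = 0.01 * (E + E.T) / 2        # small symmetric random perturbation

eig_A = np.linalg.eigvalsh(A)        # eigenvalues sorted ascending
eig_AE = np.linalg.eigvalsh(A + E)
shift = np.max(np.abs(eig_AE - eig_A))
op_norm_E = np.linalg.norm(E, 2)     # spectral norm of the perturbation
print("max eigenvalue shift:", shift, "<= ||E|| =", op_norm_E)
```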