Eigenvalue gaps of sparse random matrices

Speaker: 

Kyle Luh

Institution: 

Harvard University

Time: 

Tuesday, October 15, 2019 - 11:00am to 12:00pm

Location: 

RH 340N

We will discuss some recent work on quantifying the gaps between eigenvalues of sparse random matrices.  Before these results, it was not even known if the eigenvalues were distinct for this class of random matrices.  We will also touch upon how these results relate to random graphs, the graph isomorphism problem and nodal domains.  This is joint work with Van Vu and Patrick Lopatto.
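
As an illustrative aside (not from the talk): the sketch below samples sparse symmetric Bernoulli random matrices and records the smallest gap between consecutive eigenvalues. The model and the sparsity choice p ~ C log(n)/n are common conventions and are assumptions here, not the speaker's exact setting.

```python
import numpy as np

def min_eigenvalue_gap(n, p, rng):
    """Smallest consecutive-eigenvalue gap of a sparse symmetric sign matrix.

    Each entry is nonzero with probability p and carries a random +/-1 sign;
    the matrix is symmetrized. This is one common sparse random matrix model.
    """
    upper = np.triu((rng.random((n, n)) < p) * rng.choice([-1.0, 1.0], (n, n)), 1)
    A = upper + upper.T
    eigs = np.linalg.eigvalsh(A)            # eigenvalues in sorted order
    return np.min(np.diff(eigs))

rng = np.random.default_rng(0)
n = 500
p = 5 * np.log(n) / n                       # sparsity p ~ C log(n)/n (assumed)
gaps = [min_eigenvalue_gap(n, p, rng) for _ in range(20)]
print(f"smallest gap over 20 trials: {min(gaps):.3e}; "
      f"all eigenvalues distinct: {min(gaps) > 0}")
```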

Analyzing Hybrid Randomized and Greedy Projection Methods

Speaker: 

Jamie Haddock

Institution: 

UCLA

Time: 

Tuesday, October 1, 2019 - 11:00am to 12:00pm

Location: 

RH 340N

Stochastic iterative algorithms have gained recent interest for solving large-scale systems of equations, Ax = y. One such example is the Randomized Kaczmarz (RK) algorithm, which acts on only a single row of the matrix A at a time. While RK selects its row at random, Motzkin's algorithm employs a greedy row selection; the Sampling Kaczmarz-Motzkin (SKM) algorithm combines these two strategies. In this talk, we present an improved convergence analysis for SKM which interpolates between RK and Motzkin's algorithm.
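
For readers unfamiliar with the three methods, here is a minimal sketch of the three row-selection rules on a consistent system; the function names and the sample-size parameter beta are illustrative choices, not from the paper.

```python
import numpy as np

def rk_step(A, y, x, rng):
    """Randomized Kaczmarz: project onto a uniformly random row's hyperplane."""
    i = rng.integers(A.shape[0])
    a = A[i]
    return x + (y[i] - a @ x) / (a @ a) * a

def motzkin_step(A, y, x):
    """Motzkin's rule: project onto the row with the largest residual."""
    i = np.argmax(np.abs(A @ x - y))
    a = A[i]
    return x + (y[i] - a @ x) / (a @ a) * a

def skm_step(A, y, x, beta, rng):
    """Sampling Kaczmarz-Motzkin: sample beta rows, then act greedily on them."""
    S = rng.choice(A.shape[0], size=beta, replace=False)
    i = S[np.argmax(np.abs(A[S] @ x - y[S]))]
    a = A[i]
    return x + (y[i] - a @ x) / (a @ a) * a

rng = np.random.default_rng(1)
m, n = 200, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true
x = np.zeros(n)
for _ in range(500):
    x = skm_step(A, y, x, beta=10, rng=rng)
print("SKM error:", np.linalg.norm(x - x_true))
```

Note that beta = 1 recovers RK and beta = m recovers Motzkin's rule, matching the interpolation described in the abstract.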

Invertibility of adjacency matrices of random graphs

Speaker: 

Mark Rudelson

Institution: 

University of Michigan

Time: 

Tuesday, May 7, 2019 - 11:00am

Location: 

RH 510M

Consider the adjacency matrix of a bipartite, directed, or undirected Erdős-Rényi random graph. If the average degree of a vertex is large enough, then such a matrix is invertible with high probability. As the average degree decreases, the probability that the matrix is singular increases, and for a sufficiently small average degree, the matrix becomes singular with probability close to 1. We will discuss when this transition occurs and what the main reason for the singularity of the adjacency matrix is. This is joint work with Anirban Basak.
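
As a quick empirical companion (an illustrative simulation, not from the paper): estimate how often the adjacency matrix of an undirected G(n, d/n) graph is singular as the average degree d varies.

```python
import numpy as np

def singular_fraction(n, d, trials, rng):
    """Fraction of undirected Erdos-Renyi adjacency matrices that are singular."""
    p = d / n
    count = 0
    for _ in range(trials):
        U = rng.random((n, n)) < p
        A = np.triu(U, 1)
        A = (A + A.T).astype(float)      # undirected: symmetric, zero diagonal
        # A rank test via SVD is more robust than a determinant for 0/1 matrices.
        if np.linalg.matrix_rank(A) < n:
            count += 1
    return count / trials

rng = np.random.default_rng(2)
n = 100
for d in [1.0, 2.0, 4.0, 8.0]:
    frac = singular_fraction(n, d, 50, rng)
    print(f"avg degree {d}: singular fraction ~ {frac:.2f}")
```

For small d the simulation readily exhibits the simplest cause of singularity, e.g. an isolated vertex producing a zero row.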

Hashing II. Applications.

Speaker: 

Roman Vershynin

Institution: 

UCI

Time: 

Tuesday, February 26, 2019 - 11:00am to 11:50am

Location: 

306 RH

This talk will focus on applications of hashing. We use the Leftover Hash Lemma to count linearly independent polynomials defined on a given set. From this we will derive a recent result of Abbe, Shpilka and Wigderson on linear independence of random tensors. Unfortunately, methods based on hashing only work over finite fields. Pierre Baldi and I found a totally different approach to random tensors, which I will explain in detail.
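
As a toy illustration of the finite-field phenomenon (an assumption-laden sketch, not the Abbe-Shpilka-Wigderson argument): sample random vectors over F_2, form the flattened tensors x (tensor) x, and check their linear independence by Gaussian elimination mod 2.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a list of int bitmasks, by Gaussian elimination."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot             # pivot on the lowest set bit
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def tensor_square(x, n):
    """Flatten x (tensor) x over F_2: bit i*n+j is x_i * x_j."""
    t = 0
    for i in range(n):
        if (x >> i) & 1:
            for j in range(n):
                if (x >> j) & 1:
                    t |= 1 << (i * n + j)
    return t

random.seed(7)
n, m = 12, 40                                # 40 tensors in a 144-dim space
xs = [random.getrandbits(n) for _ in range(m)]
tensors = [tensor_square(x, n) for x in xs]
print("tensors:", m, " GF(2)-rank of their span:", gf2_rank(tensors))
```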

Hashing

Speaker: 

Roman Vershynin

Institution: 

UCI

Time: 

Tuesday, February 19, 2019 - 11:00am to 11:50am

Location: 

306 RH

Hashing is a technique widely used in coding theory (an area of computer science) and in cryptography. Although hashing is an interesting mathematical object, it is surprisingly little known to "mainstream" mathematicians. I will focus on one specific result on hashing, namely the Leftover Hash Lemma. We will state it as a result in extremal combinatorics, give a probabilistic proof of it, and relate it to another fundamental result in extremal combinatorics, the Sauer-Shelah Lemma.
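
As a small empirical aside (illustrative, not part of the talk): the standard 2-universal family h_{a,b}(x) = ((ax + b) mod p) mod m has pairwise collision probability roughly 1/m, which is the kind of hypothesis under which the Leftover Hash Lemma applies.

```python
import random

p, m = 10_007, 64                  # p prime, m the size of the hash range

def h(a, b, x):
    """A standard 2-universal hash: ((a*x + b) mod p) mod m."""
    return ((a * x + b) % p) % m

random.seed(0)
x1, x2 = 123, 456                  # any fixed pair of distinct inputs
trials, collisions = 100_000, 0
for _ in range(trials):
    a = random.randrange(1, p)     # a drawn uniformly from {1, ..., p-1}
    b = random.randrange(p)        # b drawn uniformly from {0, ..., p-1}
    collisions += h(a, b, x1) == h(a, b, x2)
print(f"empirical Pr[collision] = {collisions / trials:.4f}, 1/m = {1/m:.4f}")
```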

The conjugate gradient algorithm on random matrices

Speaker: 

Tom Trogdon

Institution: 

UC Irvine

Time: 

Tuesday, April 30, 2019 - 11:00am

Location: 

RH 510M

Numerical analysis and random matrix theory have long been coupled, going (at least) back to the seminal work of Goldstine and von Neumann (1951) on the condition number of random matrices. The works of Trotter (1984) and Silverstein (1985) incorporate "numerical techniques" to assist in the analysis of random matrices. One can also consider the problem of computing distributions (e.g. Tracy-Widom) from random matrix theory. In this talk, I will discuss a different numerical analysis problem: using random matrices to analyze the runtime (or halting time) of numerical algorithms.  I will focus on recent results for the conjugate gradient algorithm including a proof that the halting time is almost deterministic.
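
An experiment in the spirit of the talk (the tolerance and aspect ratio are assumed choices, not the speaker's): run conjugate gradient on Wishart-type matrices W = XX^T/m and watch the halting time concentrate as n grows.

```python
import numpy as np

def cg_iterations(W, b, tol=1e-10, max_iter=10_000):
    """Standard conjugate gradient on W x = b; returns the halting time,
    i.e., the first iteration at which the residual norm drops below tol."""
    x = np.zeros_like(b)
    r = b - W @ x
    d = r.copy()
    for k in range(1, max_iter + 1):
        Wd = W @ d
        alpha = (r @ r) / (d @ Wd)
        x = x + alpha * d
        r_new = r - alpha * Wd
        if np.linalg.norm(r_new) < tol:
            return k
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return max_iter

rng = np.random.default_rng(3)
for n in [100, 200, 400]:
    m = 2 * n                              # aspect ratio n/m = 1/2 (assumed)
    halts = []
    for _ in range(10):
        X = rng.standard_normal((n, m))
        W = X @ X.T / m                    # Wishart-type random matrix
        b = rng.standard_normal(n)
        halts.append(cg_iterations(W, b / np.linalg.norm(b)))
    print(f"n = {n}: halting times {sorted(halts)}")
```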

Phase transitions and conic geometry

Speaker: 

Joel Tropp

Institution: 

Caltech

Time: 

Thursday, January 31, 2019 - 12:00pm to 12:50pm

Location: 

RH 340N

A phase transition is a sharp change in the behavior of a mathematical model as one of its parameters varies. This talk describes a striking phase transition that takes place in conic geometry. First, we will explain how to assign a notion of "dimension" to a convex cone. Then we will use this notion of "dimension" to see that two randomly oriented convex cones share a ray with probability close to zero or close to one. This fact has implications for many questions in signal processing. In particular, it yields a complete solution of the "compressed sensing" problem about when we can recover a sparse signal from random measurements. Based on joint works with Dennis Amelunxen, Martin Lotz, Mike McCoy, and Samet Oymak.
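
A standard way to see such a phase transition numerically (an illustrative sketch, not the cited works): attempt to recover an s-sparse vector from m Gaussian measurements by basis pursuit, min ||x||_1 subject to Ax = y, solved here as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min sum(t) s.t. -t <= x <= t and A x = y, with variables z = [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])   #  x - t <= 0  and  -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(4)
n, s = 60, 5
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
for m in [10, 20, 30, 40]:
    A = rng.standard_normal((m, n))
    x_hat = basis_pursuit(A, A @ x0)
    print(f"m = {m}: recovery error {np.linalg.norm(x_hat - x0):.2e}")
```

As m crosses a threshold, the recovery error drops abruptly from order one to numerical zero, which is the phase-transition behavior the talk makes precise.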

Dynamic embedding of motifs into networks

Speaker: 

Hanbaek Lyu

Institution: 

UCLA

Time: 

Tuesday, December 11, 2018 - 11:30am to 12:20pm

Location: 

RH 306

We study various structural information of a large network $G$ by randomly embedding a small motif $F$ of choice. We propose two randomized algorithms to effectively sample such a random embedding by a Markov chain Monte Carlo method. Time averages of various functionals of these chains give structural information on $G$ via conditional homomorphism densities and density profiles of its filtration. We show that such observables are stable with respect to various notions of network distance. Our efficient sampling algorithms and stability inequalities allow us to use these techniques for hypothesis testing on, and hierarchical clustering of, large networks. We demonstrate this by analyzing both synthetic and real-world network data. This is joint work with Facundo Memoli and David Sivakoff.
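
A cartoon of the sampling idea (a toy Glauber-type chain for a path motif, not the authors' exact algorithms): resample the image of one motif vertex at a time, keeping the map a homomorphism, and time-average an observable of the embedding.

```python
import random
import networkx as nx

def glauber_path_step(G, emb, rng):
    """Resample the image of one uniformly chosen vertex of a path motif,
    keeping the map a graph homomorphism (consecutive images stay adjacent)."""
    k = len(emb)
    i = rng.randrange(k)
    cands = set(G.nodes)
    if i > 0:
        cands &= set(G[emb[i - 1]])
    if i < k - 1:
        cands &= set(G[emb[i + 1]])
    if cands:                                 # lazy step if no valid move
        emb[i] = rng.choice(sorted(cands))
    return emb

rng = random.Random(5)
G = nx.erdos_renyi_graph(200, 0.05, seed=5)
start = max(G.degree, key=lambda nd: nd[1])[0]
emb = [start]                                 # initialize with a 4-vertex walk
for _ in range(3):
    emb.append(rng.choice(sorted(G[emb[-1]])))

hits, steps = 0, 5000
for _ in range(steps):
    emb = glauber_path_step(G, emb, rng)
    hits += G.has_edge(emb[0], emb[-1])       # closes the path into a 4-cycle
print("conditional density of C4 given the 4-path ~", hits / steps)
```

The time average here is one example of the conditional homomorphism-density observables the abstract mentions.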

Random matrix perturbations

Speaker: 

Sean O'Rourke

Institution: 

University of Colorado, Boulder

Time: 

Tuesday, May 14, 2019 - 11:00am to 11:50am

Location: 

RH 510M

Computing the eigenvalues and eigenvectors of a large matrix is a basic task in high dimensional data analysis with many applications in computer science and statistics. In practice, however, data is often perturbed by noise. A natural question is the following: How much does a small perturbation to the matrix change the eigenvalues and eigenvectors? In this talk, I will consider the case where the perturbation is random. I will discuss perturbation results for the eigenvalues and eigenvectors as well as for the singular values and singular vectors.  This talk is based on joint work with Van Vu, Ke Wang, and Philip Matchett Wood.
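
A quick numerical illustration (an independent sketch, not the paper's results): compare the actual eigenvalue movement under a small random symmetric perturbation with the worst-case Weyl bound max_i |lambda_i(A+E) - lambda_i(A)| <= ||E||.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                            # symmetric "data" matrix
E = rng.standard_normal((n, n))
E = (E + E.T) / 2
E *= 1e-2 / np.linalg.norm(E, 2)             # noise with spectral norm 1e-2

shift = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A))
print(f"largest eigenvalue shift: {shift.max():.2e} (Weyl bound: 1.00e-02)")
print(f"median eigenvalue shift:  {np.median(shift):.2e}")
```

Random perturbations typically move the eigenvalues far less than the worst-case bound allows, which is the kind of gap the talk's results quantify.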

Several open problems on the Hamming cube II.

Speaker: 

Paata Ivanisvili

Institution: 

UCI

Time: 

Tuesday, February 5, 2019 - 11:00am to 12:00pm

Location: 

306 RH

The Hamming cube of dimension n consists of vectors of length n with coordinates +1 or -1. Real-valued functions on the Hamming cube, equipped with the uniform counting measure, can be expressed as Fourier-Walsh series, i.e., multivariate polynomials in n variables each taking the values +1 or -1. The degree of the function is defined as the degree of its multivariate polynomial representation. Pick a function whose Lp norm is 1. How large can the Lp norm of the discrete (graph) Laplacian of the function be if its degree is at most d, i.e., if it lives on "low frequencies"? And how small can it be if the function lives on high frequencies, i.e., if all terms of degree lower than d vanish? I will sketch some proofs based on joint works (some in progress) with Alexandros Eskenazis.
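
A small sketch of the objects involved (illustrative, with assumed normalizations): expand a function on {-1,1}^n in the Walsh basis and apply the discrete Laplacian, which multiplies the coefficient of w_S by |S|.

```python
import itertools
import numpy as np

n = 4
cube = np.array(list(itertools.product([-1, 1], repeat=n)))   # the 2^n points

def walsh(S, x):
    """Walsh character w_S(x) = prod_{i in S} x_i (empty product = 1)."""
    return np.prod(x[:, list(S)], axis=1)

subsets = [S for k in range(n + 1) for S in itertools.combinations(range(n), k)]

def fourier_coeffs(f_vals):
    """hat f(S) = E[f(x) w_S(x)] under the uniform measure on the cube."""
    return {S: float(np.mean(f_vals * walsh(S, cube))) for S in subsets}

def laplacian(f_vals):
    """Discrete Laplacian: multiplies the coefficient of w_S by |S|."""
    out = np.zeros(len(cube))
    for S, c in fourier_coeffs(f_vals).items():
        out += len(S) * c * walsh(S, cube)
    return out

# Degree-2 example: f(x) = x_1 x_2, so Lf = 2f and ||Lf||_p = 2 ||f||_p.
f = (cube[:, 0] * cube[:, 1]).astype(float)
p = 3.0
norm = lambda g: np.mean(np.abs(g) ** p) ** (1 / p)
print(f"||Lf||_{p} = {norm(laplacian(f)):.3f}, 2 * ||f||_{p} = {2 * norm(f):.3f}")
```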
