Eigenvalues of multivariate variance components estimates

Speaker: 

Zhou Fan

Institution: 

Stanford

Time: 

Friday, November 3, 2017 - 10:00am

Location: 

NS 1201

Variance components (a.k.a. random/mixed effects) models are commonly used to determine genetic variance-covariance matrices of quantitative phenotypic traits in a population. The eigenvalue spectra of such matrices describe the evolutionary response to selection, but may be difficult to estimate from limited samples when the number of traits is large. In this talk, I will discuss the eigenvalues of classical MANOVA estimators of these matrices, including a characterization of the bulk empirical eigenvalue distribution, Tracy-Widom fluctuations at the spectral edges under a ``sphericity'' null hypothesis, the behavior of outlier eigenvalues under spiked alternatives, and a statistical procedure for estimating true population spike eigenvalues from the sample. These results are established using tools of random matrix theory and free probability.
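As a rough illustration of the setting (not part of the talk), here is a minimal numpy sketch of a balanced one-way random-effects MANOVA under the "sphericity" null: even though the true group covariance is zero, the classical MANOVA estimate already shows a spread-out bulk of eigenvalues when the number of traits is large. All sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-way random-effects MANOVA: y_ij = u_i + e_ij, with i = 1..I groups,
# j = 1..J replicates per group, and p traits.  Under the sphericity null the
# group effect is absent (Sigma_A = 0) and the noise covariance is the identity.
I, J, p = 300, 4, 200          # hypothetical sizes; the talk considers large p
u = np.zeros((I, p))           # group effects: zero under the null
e = rng.standard_normal((I, J, p))
y = u[:, None, :] + e          # shape (I, J, p)

ybar_i = y.mean(axis=1)        # group means
ybar = y.mean(axis=(0, 1))     # grand mean

# Between- and within-group mean squares (classical MANOVA sums of squares).
B = J * (ybar_i - ybar).T @ (ybar_i - ybar) / (I - 1)
W = sum((y[i] - ybar_i[i]).T @ (y[i] - ybar_i[i]) for i in range(I)) / (I * (J - 1))

# Classical MANOVA estimator of the group covariance Sigma_A.
Sigma_A_hat = (B - W) / J

eig = np.linalg.eigvalsh(Sigma_A_hat)
print("largest eigenvalues: ", np.round(eig[-3:], 3))
print("smallest eigenvalues:", np.round(eig[:3], 3))
# Even though the true Sigma_A is 0, the estimated spectrum spreads into a
# nontrivial bulk around 0 when p is comparable to the number of groups.
```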

On loop soups, gradient fields, and critical phenomena

Speaker: 

Pierre-François Rodriguez

Institution: 

UCLA

Time: 

Tuesday, October 24, 2017 - 11:00am

Location: 

RH 306

We will discuss recent developments relating Poissonian soups of random walks à la Lawler-Werner, Le Jan and Sznitman, with Gaussian free fields. We will show how the underlying correspondence, which can be traced back to early work of Symanzik in constructive field theory, can be used to effectively study phase transitions in such systems.

Cleaning large correlation matrices: Eigenvector overlaps, rotationally invariant estimators and financial applications

Speaker: 

Marc Potters

Institution: 

Capital Fund Management (Paris) and UCLA Applied Mathematics

Time: 

Tuesday, October 17, 2017 - 11:00am

Location: 

RH 306

Modern financial portfolio construction uses mean-variance optimisation, which requires knowledge of a very large covariance matrix. Replacing the unknown covariance matrix by the sample covariance matrix (SCM) leads to disastrous out-of-sample results, which can be explained by properties of large SCMs understood since Marčenko and Pastur. A better estimate of the true covariance can be built by studying the eigenvectors of the SCM via the average matrix resolvent. This object can be computed using a matrix generalisation of Voiculescu's addition and multiplication of free matrices. The original result of Ledoit and Péché on the SCM can be generalised to estimate any rotationally invariant matrix corrupted by additive or multiplicative noise. Note that the level of rigor of the seminar will be that of statistical physics.

This is a joint probability/applied math seminar.
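For readers unfamiliar with the problem, the numpy sketch below (not the estimator from the talk) contrasts the raw SCM with a cruder eigenvalue-clipping scheme at the Marchenko-Pastur edge, in the spirit of the cleaning methods discussed. All sizes, and the identity "true" covariance, are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of why the sample covariance matrix (SCM) is a poor input to
# mean-variance optimisation, and of a simple eigenvalue-cleaning scheme.  This
# is NOT the rotationally invariant estimator of the talk, only a crude
# "clipping" of eigenvalues at the Marchenko-Pastur edge.
N, T = 200, 400                       # assets and observations (q = N/T = 0.5)
true_cov = np.eye(N)                  # true covariance: identity, for simplicity
X = rng.standard_normal((T, N))       # i.i.d. returns with covariance true_cov
scm = X.T @ X / T

q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2      # upper Marchenko-Pastur edge for the identity

eigval, eigvec = np.linalg.eigh(scm)
cleaned_val = eigval.copy()
noise = eigval < lam_plus             # eigenvalues indistinguishable from noise
cleaned_val[noise] = eigval[noise].mean()   # flatten the noisy bulk
cleaned = (eigvec * cleaned_val) @ eigvec.T

def min_var_weights(cov):
    """Minimum-variance portfolio weights for a given covariance estimate."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# True (out-of-sample) variance of the minimum-variance portfolio built from each estimate.
for name, est in [("SCM", scm), ("clipped", cleaned), ("true", true_cov)]:
    w = min_var_weights(est)
    print(f"{name:8s} out-of-sample variance: {w @ true_cov @ w:.4f}")
```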

Variational principles for discrete maps

Speaker: 

Martin Tassy

Institution: 

UCLA

Time: 

Tuesday, February 28, 2017 - 11:00am to 11:50am

Location: 

RH 306

Previous works have shown that arctic circle phenomena and limiting behaviors of some integrable discrete systems can be explained by a variational principle. In this talk we will present the first results of the same type for a non-integrable discrete system: graph homomorphisms from Z^d to a regular tree. We will also explain how the technique used could be applied to other non-integrable models.

How do we parametrize a random fractal curve?

Speaker: 

Greg Lawler

Institution: 

University of Chicago

Time: 

Friday, February 24, 2017 - 2:00pm to 3:00pm

Location: 

NS2 1201

For a smooth curve, the natural parametrization is parametrization by arc length. What is the analogue for a random curve of fractal dimension d? Typically, such curves have Hausdorff d-measure 0. It turns out that a different quantity, Minkowski content, is the right thing.

I will discuss results of this type for the Schramm-Loewner evolution --- both how to prove the content is well-defined (work with M. Rezaei) and how it relates to the scaling limit of the loop-erased random walk (work with F. Viklund and C. Benes).
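As a crude numerical illustration (not part of the talk), one can grow a loop-erased random walk, whose scaling limit SLE(2) has dimension d = 5/4, and check that the area of an epsilon-neighbourhood of the path scales like eps^(2-d), which is the scaling behind Minkowski content. The box size, the box-counting proxy for the neighbourhood area, and all constants below are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(2)

# Grow a loop-erased random walk (LERW) from the origin until it exits a box,
# erasing loops chronologically as they are created.
L = 400                                  # half-width of the box (lattice units)
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
path = [(0, 0)]
index = {(0, 0): 0}                      # site -> its position along the path
while True:
    dx, dy = steps[rng.integers(4)]
    nxt = (path[-1][0] + dx, path[-1][1] + dy)
    if nxt in index:                     # a loop has closed: erase it
        keep = index[nxt] + 1
        for site in path[keep:]:
            del index[site]
        path = path[:keep]
    else:
        index[nxt] = len(path)
        path.append(nxt)
    if abs(nxt[0]) >= L or abs(nxt[1]) >= L:
        break

pts = np.array(path, dtype=float)

# Estimate the area of the epsilon-neighbourhood by counting lattice boxes of
# side eps visited by the curve, times eps^2 (a crude proxy).
for eps in (16.0, 8.0, 4.0, 2.0):
    boxes = {(int(x // eps), int(y // eps)) for x, y in pts}
    area = len(boxes) * eps ** 2
    # For a curve of dimension d the area scales like eps^(2-d), so
    # eps^(d-2) * area should roughly stabilise; here d = 5/4 for LERW/SLE(2).
    print(f"eps = {eps:5.1f}   eps^(d-2) * area = {eps ** (5/4 - 2) * area:8.1f}")
```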

Normal approximation for recovery of structured unknowns in high dimension: Steining the Steiner formula.

Speaker: 

Larry Goldstein

Institution: 

USC

Time: 

Tuesday, February 7, 2017 - 11:00am to 11:50am

Location: 

RH 306

Intrinsic volumes of convex sets are natural geometric quantities that also play important roles in applications. In particular, the discrete probability distribution $\mathcal{L}(V_C)$ given by the sequence $v_0,\ldots,v_d$ of conic intrinsic volumes of a closed convex cone $C$ in $\mathbb{R}^d$ summarizes key information about the success of convex programs used to solve for sparse vectors, and other structured unknowns such as low-rank matrices, in high-dimensional regularized inverse problems. The concentration of $V_C$ implies the existence of phase transitions for the probability of recovery of the unknown in the number of observations. Additional information about the probability of recovery success is provided by a normal approximation for $V_C$. Such central limit theorems can be shown by first considering the squared length $G_C$ of the projection of a Gaussian vector on the cone $C$. Applying a second-order Poincaré inequality, proved using Stein's method, then produces a non-asymptotic total variation bound to the normal for $\mathcal{L}(G_C)$. A conic version of the classical Steiner formula in convex geometry translates finite sample bounds and a normal limit for $G_C$ to those for $V_C$. Joint with Ivan Nourdin and Giovanni Peccati. http://arxiv.org/abs/1411.6265
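As a small illustration of the quantity $G_C$ (not part of the talk), the following numpy sketch takes $C$ to be the nonnegative orthant, where the projection of a Gaussian vector is its coordinatewise positive part, and compares the standardized distribution of $G_C$ with the standard normal. The dimension and sample size are arbitrary.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Monte Carlo illustration of the normal approximation for G_C, the squared
# length of the projection of a standard Gaussian vector onto a closed convex
# cone C.  For the nonnegative orthant in R^d the projection is the
# coordinatewise positive part, so G_C = sum_i max(g_i, 0)^2.
d, n_samples = 500, 20000
g = rng.standard_normal((n_samples, d))
G = np.sum(np.maximum(g, 0.0) ** 2, axis=1)

# Statistical dimension delta(C) = E[G_C]; for the orthant it equals d/2.
print(f"mean of G_C: {G.mean():.1f}   (d/2 = {d / 2})")

# Compare the standardized empirical distribution with the standard normal CDF.
z = (G - G.mean()) / G.std()
for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    emp = np.mean(z <= t)
    phi = 0.5 * (1 + erf(t / sqrt(2)))
    print(f"P(Z <= {t:+.0f}):  empirical {emp:.3f}   normal {phi:.3f}")
```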

Phase transitions in the 1-2 model

Speaker: 

Zhongyang Li

Institution: 

University of Connecticut

Time: 

Monday, November 21, 2016 - 12:00pm to 12:50pm

Location: 

340P

 

 

A configuration in the 1-2 model is a subgraph of the hexagonal lattice in which each vertex is incident to 1 or 2 edges. By assigning a weight to the local configuration at each vertex, we can define a family of probability measures on the space of these configurations, such that the probability of a configuration is proportional to the product of its local weights over all vertices.
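As a toy illustration of this Gibbs weighting (not from the talk), the sketch below enumerates the degree-1-or-2 edge subsets of a single 6-cycle and weights them by a product of per-vertex weights. In the genuine model every vertex of the hexagonal lattice has three incident edges and the weight depends on which of them are present, so the two weights a and b used here are a simplification.

```python
import itertools

# Enumerate all edge subsets of a 6-cycle in which every vertex has degree 1 or
# 2, and weight each by a product of per-vertex weights (degree 1 -> a,
# degree 2 -> b).  This only illustrates how the probability measure is built.
edges = [(i, (i + 1) % 6) for i in range(6)]       # the 6-cycle
a, b = 1.0, 2.0                                    # hypothetical vertex weights

configs = []
for mask in itertools.product([0, 1], repeat=len(edges)):
    present = [e for e, m in zip(edges, mask) if m]
    deg = [0] * 6
    for u, v in present:
        deg[u] += 1
        deg[v] += 1
    if all(d in (1, 2) for d in deg):              # the 1-2 constraint
        weight = 1.0
        for d in deg:
            weight *= a if d == 1 else b
        configs.append((present, weight))

Z = sum(w for _, w in configs)                     # partition function
print(f"{len(configs)} admissible configurations, Z = {Z}")
print(f"P(full cycle) = {b ** 6 / Z:.3f}")          # every vertex has degree 2
```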

 

We study the phase transition of the model by investigating the probability measures with varying weights. We explicitly identify the critical weights, in the sense that the edge-edge correlation decays to 0 exponentially in the subcritical case, and converges to a non-zero constant in the supercritical case, under the limit measure obtained from torus approximation. These results are obtained by a novel measure-preserving correspondence between configurations in the 1-2 model and perfect matchings on a decorated graph, which appears to be a more efficient way to solve the model, compared to the holographic algorithm used by computer scientists to study the model. 

 

When the weights are uniform, we prove a weak mixing property for the finite-volume measures; this implies the uniqueness of the infinite-volume measure and the fast mixing of Markov chain Monte Carlo sampling. The major difficulty here is the absence of stochastic monotonicity.

Poisson approximation of combinatorial assemblies with low rank

Speaker: 

Stephen DeSalvo

Institution: 

UCLA

Time: 

Tuesday, November 15, 2016 - 11:00am to 11:50am

Location: 

RH 306

We present a general framework for approximating combinatorial assemblies when both the size $n$ and the number of components $k$ are specified.  The approach is an extension of the usual saddle point approximation, and we demonstrate near-universal behavior when the rank $r := n-k$ is small relative to $n$ (hence the name 'low rank').

 

In particular, for $\ell = 1, 2, \ldots$, when $r \asymp n^\alpha$ for $\alpha \in \left(\frac{\ell}{\ell+1}, \frac{\ell+1}{\ell+2}\right)$, the size $L_1$ of the largest component converges in probability to $\ell+2$.  When $r \sim t\, n^{\ell/(\ell+1)}$ for any $t>0$ and any positive integer $\ell$, $\mathbb{P}(L_1 \in \{\ell+1, \ell+2\}) \to 1$.  As a corollary we also obtain bounds on the number of such combinatorial assemblies, which in the special case of set partitions fill in a countable number of gaps in the asymptotic analysis of Louchard for Stirling numbers of the second kind.

 

This is joint work with Richard Arratia.
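In the special case of set partitions, the low-rank picture can be checked exactly for small sizes (an exact enumeration, not the saddle point approach of the talk): the script below computes the distribution of the largest block $L_1$ for a uniform set partition of an $n$-set into $k$ blocks by summing over integer partitions of $n$ into $k$ parts. The values n = 60, k = 52 are arbitrary.

```python
from math import factorial
from collections import defaultdict

# For a uniform set partition of [n] into exactly k blocks, the number of
# partitions with m_j blocks of size j is  n! / prod_j ( (j!)^{m_j} * m_j! ),
# so the distribution of the largest block follows by summing over integer
# partitions of n into k parts.

def partitions(n, k, max_part=None):
    """Yield integer partitions of n into exactly k parts, in non-increasing order."""
    if max_part is None:
        max_part = n
    if k == 0:
        if n == 0:
            yield ()
        return
    for first in range(min(n - k + 1, max_part), 0, -1):
        for rest in partitions(n - first, k - 1, first):
            yield (first,) + rest

def largest_block_distribution(n, k):
    dist, total = defaultdict(int), 0
    for parts in partitions(n, k):
        mult = defaultdict(int)
        for p in parts:
            mult[p] += 1
        count = factorial(n)
        for size, m in mult.items():
            count //= factorial(size) ** m * factorial(m)
        dist[parts[0]] += count          # parts[0] is the largest block size
        total += count
    return {size: c / total for size, c in sorted(dist.items())}

n, k = 60, 52                            # hypothetical sizes, r = n - k = 8
for size, p in largest_block_distribution(n, k).items():
    print(f"P(L1 = {size}) = {p:.4f}")   # the largest block is typically tiny
```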

Random perturbations of non-normal matrices

Speaker: 

Elliot Paquette

Institution: 

Ohio State University

Time: 

Tuesday, January 24, 2017 - 11:00am

Location: 

RH 306

Suppose one wants to calculate the eigenvalues of a large, non-normal matrix.  For example, consider the matrix which is 0 everywhere except just above the diagonal, where it is 1.  The eigenvalues of this matrix are all 0.  Similarly, if one conjugates this matrix, in exact arithmetic one would get all eigenvalues equal to 0.  However, when one makes floating point errors, the eigenvalues of this matrix are dramatically different.  One can model these errors as performing a small, random perturbation to the matrix.  And, far from being random, the eigenvalues of this perturbed matrix nearly exactly equidistribute on the unit circle.  This talk will give a probabilistic explanation of why this happens and discuss the general question: how does one predict the eigenvalues of a large, non-normal, randomly perturbed matrix?
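A quick numerical check of this phenomenon (a sketch, with arbitrary size and noise level): perturb the nilpotent Jordan block by a tiny Gaussian matrix and look at the moduli and arguments of the resulting eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(7)

# The N x N nilpotent Jordan block (1s just above the diagonal, 0s elsewhere)
# has all eigenvalues 0, yet a tiny random perturbation spreads the eigenvalues
# out near a circle of radius roughly eps^(1/N).
N = 200
J = np.diag(np.ones(N - 1), k=1)           # the Jordan block
eps = 1e-10                                # size of the perturbation
G = rng.standard_normal((N, N))
eigs = np.linalg.eigvals(J + eps * G)

radii = np.abs(eigs)
print(f"eigenvalue moduli: min {radii.min():.3f}, max {radii.max():.3f}")
print(f"eps^(1/N) = {eps ** (1 / N):.3f}")  # the typical radius scale

# The arguments are close to equidistributed on the circle.
angles = np.sort(np.angle(eigs))
gaps = np.diff(angles)
print(f"largest gap between consecutive angles: {gaps.max():.3f} "
      f"(uniform spacing would be {2 * np.pi / N:.3f})")
```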
