Working Group in Information Theory is a new self-educational project in the department. Techniques based on information theory have become essential in high-dimensional probability, theoretical computer science, and statistical learning theory. On the other hand, information theory is not taught systematically. The goal of this group is to close this gap. We will meet weekly and take turns presenting material from the lecture notes of Wu and Polyanskiy:
We will continue going over Tikhomirov’s argument on the asymptotic singularity probability of random Bernoulli matrices, found here: https://arxiv.org/pdf/1812.09016.pdf
In contrast to random matrix theory, the theory of random tensors is poorly understood. This is surprising given the popularity of tensor methods in modern algorithmic applications. I will prove that tensor powers of independent random vectors are well conditioned with high probability. The argument uses several tools of high-dimensional probability as well as the Restricted Invertibility Principle of Bourgain and Tzafriri.
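The claim above can be probed numerically. The following is a toy sketch (not part of the talk, and not the proof): it forms the k-th Kronecker powers of independent random unit vectors as the columns of a matrix and checks that the condition number stays moderate. The parameters d, k, n and the Gaussian model are illustrative choices, not assumptions from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 2, 50  # ambient dimension, tensor power, number of vectors

# Columns are k-th tensor (Kronecker) powers of independent random unit vectors.
cols = []
for _ in range(n):
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    v = x
    for _ in range(k - 1):
        v = np.kron(v, x)  # v becomes x^{\otimes k}, living in R^{d^k}
    cols.append(v)

A = np.column_stack(cols)  # shape (d**k, n); here n << d**k
cond = np.linalg.cond(A)   # ratio of largest to smallest singular value
print(A.shape, cond)
```

A small condition number here is consistent with the statement that tensor powers of independent random vectors are well conditioned with high probability; of course, one run at fixed parameters illustrates the phenomenon rather than proving it.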
We prove the “net theorem” discussed in my previous talk in the Probability seminar. The “lite version” of the theorem states the existence of a net around the sphere of cardinality 100^n such that, for every random matrix A with independent columns, with probability 1 - 5^{-n} the values of |Ax| on the sphere can be compared to its values on the net. The error in this comparison is optimal, and the formulation of the theorem is scale-invariant. We emphasize that the only assumption required is the independence of the columns. The proof consists of four steps, which will be outlined.
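To illustrate the kind of comparison the theorem makes (not the theorem itself, whose net is a deliberate construction rather than a random sample), here is a toy experiment: it compares the largest value of |Ax| over a dense random sample of the sphere with its largest value over a much smaller random point set standing in for a net. All parameters and the Gaussian model for A are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))  # a matrix with independent columns

def random_unit_vectors(m):
    """m uniform random points on the unit sphere of R^n."""
    X = rng.standard_normal((m, n))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

sphere = random_unit_vectors(200_000)  # dense proxy for the whole sphere
net = random_unit_vectors(500)         # crude stand-in for a small net

# |Ax| (Euclidean norm of Ax) over each point set
sup_sphere = np.linalg.norm(A @ sphere.T, axis=0).max()
sup_net = np.linalg.norm(A @ net.T, axis=0).max()

print(sup_net, sup_sphere)  # the small set already captures most of the sup
```

Both maxima are bounded by the operator norm of A, and the gap between them is the kind of error the actual theorem controls optimally, with only the independence of the columns assumed.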
Stochastic localization is a powerful probabilistic tool recently introduced by Ronen Eldan. It has found several applications to challenging problems in probability and geometry; see, e.g., https://arxiv.org/abs/1612.01507