We consider a random walk in a time-space ergodic balanced random environment and prove a functional limit theorem under suitable moment conditions on the law of the environment.
The invertibility of random matrices with iid entries has been the object of intense study over the past decade, due in part to its role in proving the circular law, as well as its importance in numerical analysis (smoothed analysis). In this talk we review recent progress in our understanding of invertibility for some non-iid models: adjacency matrices of sparse random regular digraphs, and random matrices with inhomogeneous variance profile. We will also discuss estimates for the number of singular values in short intervals. Graph regularity properties play a key role in both problems. Based in part on joint works with Walid Hachem, Jamal Najim, David Renfrew, Anirban Basak and Ofer Zeitouni.
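The invertibility phenomenon for iid models can be illustrated with a quick numerical experiment (a rough sketch, not part of the talk; the sign-matrix ensemble, sizes, and trial counts below are all illustrative assumptions). Sampling iid random sign matrices, one observes that every sample is invertible and that the smallest singular value is of order n^{-1/2}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 20

smin = []
for _ in range(trials):
    A = rng.choice([-1.0, 1.0], size=(n, n))     # iid +/-1 (Rademacher) matrix
    s = np.linalg.svd(A, compute_uv=False)       # singular values, descending
    smin.append(s[-1])                           # smallest singular value

invertible = all(s > 0 for s in smin)            # no exactly singular samples
scale = np.median(smin) * np.sqrt(n)             # should be of constant order
```

Here `scale` staying bounded as n grows reflects the known n^{-1/2} scaling of the smallest singular value for iid ensembles; the sparse regular-digraph and inhomogeneous-profile models discussed in the talk require much more delicate arguments.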
A major difficulty in accurately simulating turbulent flows is the problem of determining the initial state of the flow. For example, weather prediction models typically require the present state of the weather as input. However, the state of the weather is only measured at certain points, such as at the locations of weather stations or weather satellites. Data assimilation eliminates the need for complete knowledge of the initial state. It incorporates incoming data into the equations, driving the simulation to the correct solution. The objective of this talk is to discuss innovative computational and mathematical methods to test, improve, and extend a promising new class of algorithms for data assimilation in turbulent flows.
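The nudging idea behind this class of data-assimilation algorithms can be sketched on a toy chaotic system. The sketch below is a rough illustration only (the Lorenz system, forward-Euler stepping, the nudging gain `mu`, and observing only the first coordinate are all assumptions, not the methods from the talk): a copy of the model started from a wrong initial state is continuously nudged toward partial observations of the true trajectory, and the error shrinks.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # classical Lorenz-63 vector field
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, dt, forcing=None):
    # forward-Euler step with an optional nudging forcing term
    d = lorenz(state)
    if forcing is not None:
        d = d + forcing
    return state + dt * d

dt, mu, T = 0.002, 50.0, 10.0
truth = np.array([1.0, 1.0, 1.0])        # "true" (unknown) initial state
assim = np.array([10.0, -5.0, 30.0])     # badly wrong initial guess

err0 = np.linalg.norm(truth - assim)
for _ in range(int(T / dt)):
    # only the x-coordinate is "observed"; nudge the copy toward it
    forcing = np.array([-mu * (assim[0] - truth[0]), 0.0, 0.0])
    truth = step(truth, dt)
    assim = step(assim, dt, forcing)
err1 = np.linalg.norm(truth - assim)
```

Despite never being told the true initial state, the assimilated copy synchronizes with the truth from partial observations alone, which is the elimination of initial-state knowledge described above.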
This seminar marks the start of the Fall 2017 TA training program. After outlining the main features of the training, we will discuss the profile of the students we are serving at UCI, and move on to a discussion of mathematical mindsets.
Binary, or one-bit, representations of data arise naturally in many applications and are appealing in both hardware implementations and algorithm design. In this talk, we provide brief background on sparsity and one-bit measurements, and then present new results on classifying data from binary measurements via a stochastic framework with low computational and resource costs. We illustrate the utility of the proposed approach through stylized and realistic numerical experiments, provide a theoretical analysis for a simple case, and discuss future directions.
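A stylized example conveys the flavor of classification from one-bit data (this is a toy sketch under assumptions of my own — the Gaussian two-class model, the number of sign measurements, and the simple correlation classifier are illustrative, not the framework from the talk). Each sample is reduced to the signs of random projections, and classification is done entirely from those bits:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 20, 300, 200      # dimension, one-bit measurements, samples per class

# two well-separated Gaussian classes (illustrative data model)
mu0, mu1 = rng.normal(size=d), rng.normal(size=d) + 3.0
X0 = mu0 + 0.5 * rng.normal(size=(n, d))
X1 = mu1 + 0.5 * rng.normal(size=(n, d))

# one-bit measurements: keep only the sign of random projections
A = rng.normal(size=(m, d))
B0, B1 = np.sign(X0 @ A.T), np.sign(X1 @ A.T)

# "train": average sign pattern per class; "test": classify by correlation
c0, c1 = B0[:150].mean(axis=0), B1[:150].mean(axis=0)
test = np.vstack([B0[150:], B1[150:]])
labels = np.r_[np.zeros(50), np.ones(50)]
pred = (test @ c1 > test @ c0).astype(float)
acc = (pred == labels).mean()
```

Even though each measurement retains a single bit, the aggregate sign patterns separate the classes well, with low computational and memory cost, in the spirit of the framework above.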
Many problems of contemporary interest in signal processing and machine learning involve highly nonconvex optimization problems. While nonconvex problems are known to be intractable in general, simple local search heuristics such as (stochastic) gradient descent are often surprisingly effective at finding global optima on real or randomly generated data. In this talk I will discuss results explaining the success of these heuristics by connecting the convergence of nonconvex optimization algorithms to the suprema of certain stochastic processes. I will focus on two problems.
The first problem concerns the recovery of a structured signal from under-sampled random quadratic measurements. I will show that projected gradient descent on a natural nonconvex formulation finds globally optimal solutions with a near-minimal number of samples, breaking through local sample complexity barriers that have emerged in the recent literature. I will also discuss how these mathematical developments pave the way for a new generation of data-driven phaseless imaging systems that can exploit prior information to significantly reduce acquisition time and enhance image reconstruction, enabling nano-scale imaging at unprecedented speeds and resolutions.

The second problem is about learning the optimal weights of the shallowest of neural networks: one consisting of a single Rectified Linear Unit (ReLU). I will discuss this problem in the high-dimensional regime, where the number of observations is smaller than the number of ReLU weights. I will show that projected gradient descent on a natural least-squares objective, when initialized at 0, converges at a linear rate to globally optimal weights using a number of samples that is optimal up to numerical constants.
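For the single-ReLU problem, the projected-gradient scheme can be sketched in a stylized form. Everything in the snippet is an illustrative assumption of mine (the dimensions, the sparsity model with a hard-thresholding projection, the step size, and the iteration count); the talk's precise structured-signal setting and guarantees are not reproduced here. The sketch runs projected gradient descent on the least-squares objective, initialized at 0, with fewer observations than weights:

```python
import numpy as np

rng = np.random.default_rng(1)
d, s, n = 100, 5, 80          # dimension, sparsity, samples (note n < d)

# planted s-sparse ReLU weights (entries bounded away from 0 for clarity)
w_star = np.zeros(d)
w_star[:s] = rng.choice([-1.0, 1.0], size=s) * (1.0 + rng.random(s))

X = rng.normal(size=(n, d))
y = np.maximum(X @ w_star, 0.0)        # noiseless single-ReLU observations

def hard_threshold(w, k):
    # projection onto k-sparse vectors: keep the k largest-magnitude entries
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

w = np.zeros(d)                        # initialize at 0
eta = 1.0 / n                          # illustrative step size
for _ in range(1000):
    act = (X @ w >= 0.0)               # ReLU (sub)gradient activation pattern
    grad = X.T @ (act * (np.maximum(X @ w, 0.0) - y))
    w = hard_threshold(w - eta * grad, s)

rel_err = np.linalg.norm(w - w_star) / np.linalg.norm(w_star)
```

In this noiseless, well-conditioned toy regime the iterates typically contract toward the planted weights despite n < d; the linear convergence rate and sharp sample complexity discussed in the talk are far stronger than anything this sketch demonstrates.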