Research Interests

I have just completed a book (coauthored with Sheehan Olver), Riemann-Hilbert Problems, Their Numerical Solution, and the Computation of Nonlinear Special Functions, published by SIAM. The front matter, back matter, and first chapter can be found here; the book itself can be found here.

My interests include the interaction between probability/random matrix theory and numerical analysis, Riemann-Hilbert problems and spectral methods, and the analysis of linear and nonlinear boundary-value problems.

Two-Component Universality

Consider the solution of $Ax = b$ with the conjugate gradient method, where $A$ and $b$ are random and $n$ is the dimension of the system. The algorithm is iterative and at iteration $k$ an approximate solution, $w_k$, is found. The residual is $r_k = b-Aw_k$. The algorithm is typically halted when the 2-norm of the residual falls below a specified tolerance: $$\|r_k\|_{\ell^2} < \epsilon.$$ The number of iterations required to reach this tolerance is a random variable called the halting time $T$, which we examine with Monte Carlo simulations. In particular, after $N$ simulations, we normalize the data to mean zero and variance one (removing the two components) and look at what remains: the fluctuations. The fluctuations appear to be universal: independent of the distribution of $A$ (within a class) for $n$ sufficiently large.

A demonstration of two-component universality in the conjugate gradient algorithm for $N=16,000$ and $n=100$ and $500$. Each image shows the fluctuations for the halting time $T$ when $A$ is chosen from two distinct distributions.
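A minimal Monte Carlo sketch of this experiment follows. The dimensions, tolerance, and the sample-covariance model for $A$ are illustrative assumptions, not the settings used in the figure above:

```python
import numpy as np

rng = np.random.default_rng(0)

def cg_halting_time(A, b, eps=1e-8, max_it=10_000):
    """Run conjugate gradient on Ax = b; return the first k with ||r_k|| < eps."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, max_it + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < eps:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_it

# Illustrative choices: sample-covariance A, unit-norm random b.
n, m, N = 100, 120, 100
T = np.empty(N)
for i in range(N):
    X = rng.standard_normal((n, m))
    A = (X @ X.T) / m            # random positive-definite matrix
    b = rng.standard_normal(n)
    b /= np.linalg.norm(b)
    T[i] = cg_halting_time(A, b)

# Remove the two components (mean and variance); the fluctuations remain.
fluct = (T - T.mean()) / T.std()
```

A histogram of `fluct` is what the universality claim concerns: its shape should stabilize as $n$ grows, independently of the distribution of $A$ within a class.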

Motivated by these computations, we (with P. Deift and G. Menon) also proved the following limit theorem:


Theorem: Let $X$ be an $n \times m$ array of complex iid standard Gaussian random variables and define $A = XX^*$. If $m = n + \alpha$ where $\alpha = \lfloor \sqrt{ 4 c n} \rfloor$ then for all $t \in \mathbb R$ $$\lim_{n \rightarrow \infty} \mathbb P\left( \frac{\kappa - 4 n c^{-1}}{4 c^{-4/3} n^{2/3}} \leq t \right) = F_2(t), ~~~ \kappa = \frac{\lambda_{\max}}{\lambda_{\min}},$$ where $F_2(t)$ is the distribution function for the Tracy-Widom ($\beta = 2$) distribution.
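The theorem's centering and scaling can be probed by simulation. In the sketch below, $n$, $c$, and the number of samples are arbitrary illustrative choices; for large $n$ a histogram of `samples` should approach the Tracy-Widom ($\beta = 2$) law:

```python
import numpy as np

rng = np.random.default_rng(1)

def scaled_condition_number(n, c=1.0):
    """Sample kappa = lambda_max/lambda_min of A = X X* for an n x m array X
    of iid standard complex Gaussians, m = n + floor(sqrt(4 c n)), and apply
    the centering and scaling from the theorem."""
    alpha = int(np.floor(np.sqrt(4 * c * n)))
    m = n + alpha
    # Standard complex Gaussian entries: variance 1, real/imag parts iid.
    X = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
    ev = np.linalg.eigvalsh(X @ X.conj().T)   # ascending eigenvalues of A
    kappa = ev[-1] / ev[0]
    return (kappa - 4 * n / c) / (4 * c ** (-4.0 / 3.0) * n ** (2.0 / 3.0))

samples = np.array([scaled_condition_number(200) for _ in range(50)])
```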

Gibbs Phenomenon in PDEs

Consider the solution of the following initial value problem (IVP): $$i q_t = \omega(-i \partial_x) q, ~~ (x,t) \in \mathbb R \times (0,T),$$ $$q(x,0) = q_o(x), ~~~ \omega(k) = \omega_n k^n + \mathcal O(k^{n-1}) ~~ \text{is a polynomial.}$$ Provided the imaginary part of $\omega(k)$, $k \in \mathbb R$, is bounded above, this IVP is well-posed for $q_o \in L^2(\mathbb R)$. In the case where $q_o$ is piecewise continuous with a (piecewise) $L^2(\mathbb R)$ derivative, I (with G. Biondini) found a short-time expansion of the solution as $t \downarrow 0$. Furthermore, the expansion is computable numerically with high accuracy.
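The solution is a Fourier multiplier acting on the data, $\hat q(k,t) = e^{-i \omega(k) t} \hat q_o(k)$, so small-$t$ snapshots can be sketched pseudo-spectrally. The periodic truncation of the line, the domain size and grid, the box datum, and the choice $\omega(k) = -k^3$ below are all illustrative assumptions:

```python
import numpy as np

# Periodic pseudo-spectral sketch of i q_t = omega(-i d/dx) q on [-L, L],
# with L large so boundary effects are negligible for small t.
L, M = 20.0, 4096
x = np.linspace(-L, L, M, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(M, d=2 * L / M)   # angular wavenumbers
q0 = np.where(np.abs(x) <= 1, 1.0, 0.0)          # box initial datum

def evolve(q0, t, omega=lambda k: -k**3):
    """Apply the Fourier multiplier e^{-i omega(k) t} to the datum."""
    return np.fft.ifft(np.exp(-1j * omega(k) * t) * np.fft.fft(q0))

q = evolve(q0, 1e-3)   # a snapshot just after t = 0
```

Since $\omega(k) = -k^3$ is real-valued, the multiplier is unimodular and the discrete $L^2$ norm is preserved exactly, which is a useful sanity check on the implementation.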

To illustrate the results let $$ q_o(x) = s(x)= \begin{cases} 1, & \mbox{if } |x| \leq 1,\\ 0, & \mbox{if } |x| > 1. \end{cases}$$ The behavior of the solution as $t \downarrow 0$ is seen in the animation below to mirror that of Gibbs phenomenon in Fourier series approximations.

Left/Top: $q(x,t)$ with $q_o(x) = s(x)$, $\omega(k) = -k^3$ as $t \downarrow 0$. Right/Bottom: A Fourier series partial sum approximation $S_n[s](x)$ of $s(x)$ as $n \rightarrow \infty$.


The following is classical:

Fact: Define the Wilbraham-Gibbs constant $$\mathfrak g = \int_{0}^\pi \frac{\sin y}{\pi y} d y - \frac{1}{2}.$$ Then for any $\delta > 0$ $$\lim_{n \rightarrow \infty} \sup_{|x\pm 1| \leq \delta} S_n[s](x) = 1 + \mathfrak g, \quad \lim_{n \rightarrow \infty} \inf_{|x\pm 1| \leq \delta} S_n[s](x) = -\mathfrak g.$$
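The overshoot is easy to reproduce numerically. A sketch, assuming the period interval $[-2, 2]$ for the Fourier series of $s$ (an illustrative normalization; the figure above may use a different one):

```python
import numpy as np

# Fourier partial sums of the square pulse s(x) on the period interval [-2, 2]:
#   S_n[s](x) = 1/2 + sum_{k=1}^n (2/(pi k)) sin(pi k/2) cos(pi k x/2)
def partial_sum(x, n):
    k = np.arange(1, n + 1)
    coeffs = (2.0 / (np.pi * k)) * np.sin(np.pi * k / 2)
    return 0.5 + np.cos(np.pi * np.outer(x, k) / 2) @ coeffs

x = np.linspace(0.8, 1.2, 8001)   # fine grid around the jump at x = 1
S = partial_sum(x, 400)
# S.max() approaches 1 + g ~ 1.0895 and S.min() approaches -g ~ -0.0895
```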
A corollary of our work is:

Theorem: Let $q_n(x,t)$ be the solution of $i q_t = (- i \partial_x)^n q$ with $q(x,0) = s(x)$. Then for any $\delta > 0$ $$\lim_{n \rightarrow \infty} \lim_{t \downarrow 0} \sup_{|x\pm 1| \leq \delta} q_n(x,t) = 1 + \mathfrak g, \quad \lim_{n \rightarrow \infty} \lim_{t \downarrow 0} \inf_{|x\pm 1| \leq \delta} q_n(x,t) = -\mathfrak g,$$ where $\inf$ and $\sup$ act on the real and imaginary parts separately.

Numerical Nonlinear Steepest Descent

The effective computation of the inverse scattering transform, orthogonal polynomials with exponential weights, the Painlevé transcendents and other nonlinear special functions is accomplished by combining the Deift and Zhou method of nonlinear steepest descent with a method for the numerical solution of Riemann-Hilbert problems. Under mild assumptions on the numerical method, this combination produces an approximation that is uniformly and spectrally convergent over large parameter ranges. Below we plot a solution of the Korteweg-de Vries (KdV) equation while simultaneously displaying the contours of the Riemann-Hilbert problem that is being solved to compute the value of the solution $q$ at each point.

Finite-Genus KdV

Riemann-Hilbert methods are also used to derive representations of the finite-genus solutions of the KdV equation. This presents an alternative to the now-classical theta-function approach. The Riemann-Hilbert problem is derived by representing scalar-valued analytic functions on a hyperelliptic Riemann surface as vector-valued functions on a cut version of the complex plane. These vector-valued functions satisfy a Riemann-Hilbert problem which can be solved numerically. An animation of a genus two solution of the KdV equation is found here:

A genus two solution of the KdV equation

Instability of Spatial Solitons

Instabilities of spatial solitons in a (2+1)-dimensional nonlinear Schrödinger equation have been well studied since the work of Zakharov and Rubenchik in 1979. Through accurate computations using Hill's method we closely examine the growth rates of these instabilities. Hill's method allows us to capture not only the maximal growth rate in the so-called snake and oscillatory snake regions but also the growth rate of the sub-dominant oscillatory neck instability. These rates were experimentally demonstrated by the group of M. Haelterman using a 2D waveguide array. See below for the computations of these instabilities: we animate the evolution of the equilibrium solution plus the first-order correction obtained using Hill's method.

Evolution of the neck instability
Evolution of the oscillatory snake instability
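The core of Hill's method is to expand a periodic-coefficient eigenvalue problem in Fourier modes (with a Floquet exponent) and truncate. The sketch below applies this idea to a Mathieu-type operator as a hypothetical stand-in; it is not the NLS stability problem discussed above:

```python
import numpy as np

def hill_spectrum(q, mu=0.0, M=32):
    """Fourier-Floquet (Hill's method) approximation of the eigenvalues of
    L u = u'' + 2 q cos(2x) u, using modes e^{i (n + mu) x} for |n| <= M."""
    n = np.arange(-M, M + 1)
    # Second derivative is diagonal in the Fourier basis: -(n + mu)^2.
    L = np.diag(-((n + mu) ** 2.0))
    # 2 q cos(2x) couples mode n with modes n +/- 2.
    L += q * (np.eye(2 * M + 1, k=2) + np.eye(2 * M + 1, k=-2))
    return np.sort(np.linalg.eigvalsh(L))

ev = hill_spectrum(0.0)   # q = 0: eigenvalues are exactly -(n + mu)^2
```

For the actual soliton stability problem the operator is a matrix differential operator linearized about the equilibrium, but the truncation and assembly follow the same pattern.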