On Geometric Quantization of Poisson Manifolds

Speaker: Jonathan Weitsman
Institution: Northeastern University
Time: Thursday, March 2, 2017 - 4:00pm to 5:00pm
Location: RH 306

Geometric Quantization is a program of assigning to
classical mechanical systems (symplectic manifolds and the associated
Poisson algebras of C-infinity functions) their quantizations ---
algebras of operators on Hilbert spaces.  Geometric Quantization has
had many applications in Mathematics and Physics.   Nevertheless the
main proposition at the heart of the theory, invariance of
polarization, though verified in many examples, is still not proved in
any generality.  This causes numerous conceptual difficulties: for
example, it makes it very difficult to understand the functoriality of
the theory.

Nevertheless, during the past 20 years, powerful topological and
geometric techniques have clarified at least some of the features of
the program.

In 1995 Kontsevich showed that formal deformation quantization can be
extended to Poisson manifolds.  This naturally raises the question as
to what one can say about Geometric Quantization in this context.  In
recent work with Victor Guillemin and Eva Miranda, we explored this
question in the context of Poisson manifolds which are "not too far"
from being symplectic - the so-called b-symplectic or b-Poisson
manifolds - in the presence of an Abelian symmetry group.

In this talk we review Geometric Quantization in various contexts, and
discuss these developments, which end with a surprise.


Three principles of data science: predictability, stability, and computability

Speaker: Bin Yu
Institution: UC Berkeley
Time: Friday, February 3, 2017 - 2:00pm to 3:00pm
Location: NS2, 1201

In this talk, I'd like to discuss the intertwined importance and connections of the three principles of data science in the title as they bear on data-driven decisions. The ultimate importance of prediction lies in the fact that the future holds the unique, and possibly the only, purpose of all human activities, in business, education, research, and government alike.

Making prediction its central task and embracing computation as its core, machine learning has enabled wide-ranging data-driven successes. Prediction is a useful way to check with reality, and good prediction implicitly assumes stability between past and future. Stability (relative to data and model perturbations) is also a minimum requirement for the interpretability and reproducibility of data-driven results, and it is closely related to uncertainty assessment. Obviously, neither the prediction principle nor the stability principle can be employed without feasible computational algorithms, hence the importance of computability. The three principles will be demonstrated through analytical connections and in the context of two ongoing neuroscience projects, for which "data wisdom" is also indispensable. Specifically, the first project interprets a predictive model used for reconstruction of movies from fMRI brain signals; the second employs deep learning networks (CNNs) to understand pattern selectivities of neurons in the difficult visual cortex V4.
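(Not part of the abstract.) As a toy illustration of the stability principle it describes, one can perturb a data set by resampling and watch how much a fitted quantity moves; the data, the `fit_slope` helper, and the numbers below are all hypothetical.

```python
import random

random.seed(0)

# Toy data: a noisy linear trend y ~ 2x.
data = [(x, 2 * x + random.gauss(0, 1)) for x in range(30)]

def fit_slope(points):
    # Least-squares slope through the origin: sum(x*y) / sum(x*x).
    return sum(x * y for x, y in points) / sum(x * x for x, _ in points)

# Stability check: refit on bootstrap resamples of the data and
# look at the spread of the estimate across these perturbations.
slopes = []
for _ in range(200):
    sample = [random.choice(data) for _ in data]
    slopes.append(fit_slope(sample))

spread = max(slopes) - min(slopes)
print(f"slope on full data: {fit_slope(data):.2f}, bootstrap spread: {spread:.2f}")
```

A small spread suggests the conclusion is stable to data perturbation; a large one is a warning about interpretability and reproducibility, in the spirit of the abstract.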

Blending Mathematical Models and Data

Speaker: Andrew Stuart
Institution: Caltech
Time: Thursday, February 9, 2017 - 4:00pm to 5:00pm
Location: RH 306

A central research challenge for the
mathematical sciences in the 21st century is
the development of principled methodologies for the
seamless integration of (often vast) data sets
with (often sophisticated) mathematical models.
Such data sets are becoming routinely available
in almost all areas of engineering, science and technology,
whilst mathematical models describing phenomena of
interest are often built on decades, or even centuries, of
human knowledge creation. Ignoring either the data or the models
is clearly unwise and so the issue of combining them
is of paramount importance. In this talk we will give
a historical perspective on the subject, highlight some of
the current research directions that it leads to, and
describe some of the underlying mathematical frameworks
being deployed and developed. The ideas will be illustrated
by problems arising in the geophysical, biomedical and
social sciences.
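(Not part of the abstract.) One classical building block for blending a model with data is the Kalman update, which weights a model forecast and a noisy observation by their inverse variances; the sketch below is a minimal scalar version with made-up numbers.

```python
# Minimal scalar Kalman update: blend a model forecast with a noisy
# observation, each weighted by how uncertain it is.
def kalman_update(forecast, var_f, obs, var_o):
    gain = var_f / (var_f + var_o)             # Kalman gain in [0, 1]
    mean = forecast + gain * (obs - forecast)  # blended estimate
    var = (1 - gain) * var_f                   # uncertainty shrinks
    return mean, var

# Model predicts 10.0 (variance 4.0); instrument reads 12.0 (variance 1.0).
mean, var = kalman_update(10.0, 4.0, 12.0, 1.0)
print(mean, var)  # -> 11.6 0.8: the estimate leans toward the more certain observation
```

The blended variance is smaller than either input variance, which is the quantitative sense in which ignoring either the data or the model is unwise.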

More tales of our fathers

Speaker: Barry Simon
Institution: Caltech
Time: Thursday, February 15, 2018 - 4:00pm to 6:00pm
Location: NS II 1201

This is not a mathematics talk but it is a talk for mathematicians. Too often, we
think of historical mathematicians as only names assigned to theorems. With
vignettes and anecdotes, I'll convince you they were also human beings and that, as
the Chinese say, "May you live in interesting times" really is a curse. These are more
tales following up on the talk I gave at Irvine in May 2014; it is not assumed that
listeners heard that earlier talk.

Self-Avoiding random motion

Speaker: Greg Lawler
Institution: University of Chicago
Time: Thursday, February 23, 2017 - 4:00pm to 5:00pm
Location: RH 306

The self-avoiding walk (SAW) is a model for polymers that assigns equal probability to all paths that do not return to places they have already been. The lattice version of this problem, while elementary to define, has proved notoriously difficult and is still open. It is, a priori, even more challenging to construct a continuum limit of the lattice model, which is a random fractal. However, in two dimensions this has been done, and the continuum model (the Schramm-Loewner evolution) can be analyzed rigorously and used to understand the nonrigorous predictions about SAWs. I will survey some results in this area and then discuss some recent work on this "continuous SAW".
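(Not part of the abstract.) The lattice model is easy to state in code: "equal probability on all non-self-intersecting paths of length n" means the uniform measure on a finite set whose size c_n can, for small n, be found by exhaustive backtracking. The `count_saws` helper below is an illustrative sketch, not anyone's research code.

```python
# Count self-avoiding walks of length n on the square lattice Z^2,
# starting at the origin, by depth-first backtracking. A length-n
# walk visits n+1 distinct sites.
def count_saws(n, pos=(0, 0), visited=None):
    if visited is None:
        visited = {(0, 0)}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:       # refuse to revisit a site
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)      # backtrack
    return total

print([count_saws(n) for n in range(6)])  # -> [1, 4, 12, 36, 100, 284]
```

The counts c_n grow exponentially, which is why enumeration stalls quickly and why even the growth constant of the lattice model on Z^2 remains unknown.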
