Speaker: Wei Zhu

Institution: University of Massachusetts Amherst

Time: Monday, October 16, 2023 - 4:00pm to 5:00pm

Host:

Location: Zoom (https://uci.zoom.us/j/97796361534)

Symmetry is ubiquitous in machine learning (ML) and scientific computing, with compelling implications for model development. Equivariant neural networks, specifically designed to preserve group symmetry, have shown marked improvements in learning tasks with inherent group structure, especially when data are limited. This talk will explore our recent and ongoing work in this field, organized into three parts:


Part One: Deformation-Robust Symmetry Preservation
I will outline a general framework for creating deformation-robust, symmetry-preserving ML models. Central to this methodology is the spectral regularization of convolutional filters, a technique that ensures symmetry is "approximately" preserved, even if the symmetry transformation becomes "contaminated" by unavoidable nuisance data deformations.
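To make the idea concrete, here is a minimal sketch (in PyTorch) of one common form of spectral regularization: penalizing the largest singular value of the flattened convolutional filter bank. It illustrates the general technique, not the speaker's implementation; the model, the dummy data, and the penalty weight `lam` are all assumptions for the example.

```python
import torch
import torch.nn as nn

def spectral_penalty(conv: nn.Conv2d) -> torch.Tensor:
    # Flatten the (out_ch, in_ch, kH, kW) filter bank into a matrix and
    # penalize its largest singular value, a proxy for the operator norm.
    w = conv.weight.reshape(conv.weight.shape[0], -1)
    return torch.linalg.matrix_norm(w, ord=2)

# Illustrative model and data, not from the talk.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
criterion = nn.CrossEntropyLoss()
lam = 1e-3  # regularization strength (assumed)

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = criterion(model(x), y) + lam * spectral_penalty(model[0])
loss.backward()  # the penalty's gradient flows into the conv weights
```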


Part Two: Structure-Preserving Generative Models
The second part will explain how structural information, including but not limited to group symmetry, can be incorporated into generative models. By developing variational representations of probability divergences with embedded structure, I will share both theoretical insights and empirical findings, emphasizing the considerable benefits of systematically employing structural priors within generative models.
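For background, one standard example of such a variational representation is the Donsker-Varadhan formula for the KL divergence; restricting the test-function class to group-invariant functions is one natural way to embed symmetry. This is an illustration of the general idea, not necessarily the speaker's exact construction:

```latex
% Donsker-Varadhan representation of the KL divergence, with the
% test class restricted to G-invariant functions (illustrative):
D_{\mathrm{KL}}(P \,\|\, Q)
  = \sup_{T \in \mathcal{F}} \; \mathbb{E}_{P}[T(X)] - \log \mathbb{E}_{Q}\!\big[e^{T(X)}\big],
\qquad
\mathcal{F} = \{\, T : T(g \cdot x) = T(x) \ \ \forall g \in G \,\}.
```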


Part Three: Implicit Bias of Equivariant Neural Networks
In the concluding segment, I will concentrate on the training dynamics and implicit bias of equivariant neural networks. By precisely identifying the solutions to which equivariant neural networks converge when trained under gradient flow, I will clarify why these models outperform their non-equivariant counterparts in group-symmetric learning tasks.
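For context, "gradient flow" here refers to the standard continuous-time limit of gradient descent on a loss L, under which such convergence analyses are typically carried out:

```latex
% Gradient flow: continuous-time limit of gradient descent.
\frac{d\theta(t)}{dt} = -\nabla_{\theta} L\big(\theta(t)\big),
\qquad \theta(0) = \theta_0 .
```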