Speaker: Matteo Sesia
Institution: University of Southern California
Time: Monday, March 6, 2023 - 4:00pm to 5:00pm
Location: RH 306

Deep neural networks and other complex machine learning models are widely used in high-stakes decision-making across domains such as autonomous driving, medical diagnostics, and business. Because prediction errors in these settings can be costly, reliable uncertainty estimates are essential. Unfortunately, deep neural networks are typically not designed to quantify and communicate uncertainty, and their predictions are prone to overconfidence. This talk presents recent advances in conformal inference that address these limitations. First, it introduces a flexible methodology for assessing the reliability of predictions made by any pre-trained classification model, accounting for heterogeneity in the uncertainty of individual predictions. Second, it discusses how integrating conformal inference ideas into the training of deep neural networks can yield even more accurate and reliable uncertainty-aware predictions.
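
For background, the following is a minimal sketch of split conformal prediction sets for a generic pre-trained classifier, illustrating the basic idea of wrapping conformal inference around an existing model; it is not the speaker's specific method, and the synthetic data, the logistic-regression model, and the miscoverage level alpha are illustrative assumptions.

    # Minimal sketch of split conformal prediction sets for a generic
    # pre-trained classifier; the data, model, and alpha are illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_classes=3,
                               n_informative=5, random_state=0)
    X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.5,
                                                        random_state=0)
    X_calib, X_test, y_calib, y_test = train_test_split(X_hold, y_hold,
                                                        test_size=0.5,
                                                        random_state=0)

    # Any pre-trained classifier with predicted class probabilities works here.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    alpha = 0.1  # target miscoverage level (aiming for 90% marginal coverage)

    # Nonconformity score on the calibration set: one minus the probability
    # assigned to the true label.
    probs_calib = model.predict_proba(X_calib)
    scores = 1.0 - probs_calib[np.arange(len(y_calib)), y_calib]

    # Conformal quantile of the scores with a finite-sample correction.
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, q_level, method="higher")

    # Prediction set for each test point: all labels whose score falls
    # below the calibrated threshold.
    probs_test = model.predict_proba(X_test)
    prediction_sets = probs_test >= 1.0 - q_hat  # boolean mask over classes

    coverage = prediction_sets[np.arange(len(y_test)), y_test].mean()
    print(f"Empirical coverage: {coverage:.3f} (target >= {1 - alpha:.2f})")

Under exchangeability of the calibration and test data, such prediction sets contain the true label with probability at least 1 - alpha, regardless of how accurate the underlying model is; the methods discussed in the talk refine this basic recipe to better reflect the heterogeneous uncertainty of individual predictions.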