Speaker:
Speaker Link:
Institution:
Time:
Location:
Multidimensional scaling (MDS) has a long history in statistics and underpins a broad class of unsupervised learning and spectral/nonlinear dimension reduction techniques. The objective of MDS is to extract meaningful information from relational data (e.g., distances between sensors, correlations between brain regions, or disagreement scores between individuals) by embedding these relationships into a Euclidean space. In practice, the observed relational information is often subject to measurement error and/or corrupted by noise. However, the resulting embeddings are typically treated as exploratory visualizations, with no accounting for this variability. This talk presents recent work developing a principled statistical framework for MDS. We show that the classical MDS algorithm achieves minimax-optimal performance across a wide range of noise models and loss functions. Building on this, we develop a framework for constructing valid confidence sets for the embedded points obtained via MDS, enabling formal uncertainty quantification for geometric structure inferred from noisy relational data. These results provide a theoretical foundation for interpreting MDS embeddings and extend naturally to a broad family of unsupervised learning techniques in modern data science.
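For concreteness, below is a minimal NumPy sketch of the classical MDS algorithm the abstract refers to (double centering of the squared distance matrix followed by a truncated eigendecomposition), applied to a noisy distance matrix. The noise model, minimax analysis, and confidence-set construction developed in the talk are not part of this sketch; the toy data and parameter choices are illustrative assumptions only.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n pairwise-distance matrix D into R^k via classical MDS.

    Returns an (n, k) array of embedded points, determined only up to
    rotation, reflection, and translation.
    """
    n = D.shape[0]
    # Double-center the squared distances: B = -1/2 * J D^2 J,
    # where J = I - (1/n) 11^T is the centering matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecompose the (symmetric) Gram-like matrix B and keep the
    # top-k eigenpairs, clipping negative eigenvalues caused by noise.
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]
    lam = np.clip(eigvals[idx], 0.0, None)
    return eigvecs[:, idx] * np.sqrt(lam)

# Illustrative example (assumed setup): points on a circle observed
# through additively perturbed pairwise distances.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
D_true = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
D_noisy = D_true + rng.normal(scale=0.05, size=D_true.shape)
D_noisy = (D_noisy + D_noisy.T) / 2  # re-symmetrize the perturbed matrix
np.fill_diagonal(D_noisy, 0.0)
Y = classical_mds(D_noisy, k=2)  # recovers the circle up to rigid motion
```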
