
Seminars

The Mathematics Department holds regular seminars on a variety of topics. Please see below for further details.

Geometry and Topology Seminar
Speaker: Mason Kamb (Stanford)

Geometry and Topology Seminar
Speaker: Ye He (Georgia Tech)

Data Seminar (MSB 110)
Lagrangian Dual Sections

This talk discusses joint work with Venkat Chandrasekaran, Jose Israel Rodriguez, and Kevin Shu, in which we initiate the study of Lagrangian dual sections. This theory gives rise to sufficient conditions for the "hidden convexity" of certain nonconvex optimization problems; notable examples include spectral inverse problems and certain unbalanced Procrustes problems. As an added benefit, when the constraint set is a compact Riemannian manifold, the Lagrangian formulation allows us to solve these problems using a numerical continuation algorithm based on Riemannian gradient descent.

Speaker: Timothy Duff
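The abstract mentions a continuation algorithm based on Riemannian gradient descent. As a standalone illustration (not the speakers' algorithm; the matrix, step size, and iteration count are arbitrary), the sketch below runs Riemannian gradient descent on the unit sphere to minimize the quadratic x.T @ A @ x: the Euclidean gradient is projected onto the tangent space of the sphere, and the retraction is simply renormalization. The minimizer is an eigenvector for the smallest eigenvalue of A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2  # symmetric test matrix (illustrative only)

def riemannian_gd(A, steps=5000, lr=0.05):
    """Minimize x^T A x over the unit sphere by Riemannian gradient descent."""
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        g = 2 * A @ x          # Euclidean gradient of x^T A x
        g = g - (g @ x) * x    # project onto the tangent space of the sphere
        x = x - lr * g         # gradient step in the tangent direction
        x /= np.linalg.norm(x) # retraction: pull back onto the sphere
    return x

x = riemannian_gd(A)
val = x @ A @ x  # converges to the smallest eigenvalue of A
```

The projection-then-retract pattern is the generic recipe for gradient methods on embedded manifolds; only the projection and retraction change with the constraint set.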

Differential Equations Seminar (MSB 111)
Title and abstract: TBA

Speaker: Kiril Datchev (Purdue)

Geometry and Topology Seminar
Speaker: Binxu Wang (Harvard)

Data Seminar (MSB 110)
Interpretable, Explainable, and Adversarial AI: Data Science Buzzwords and You (Mathematicians)

Many state-of-the-art methods in machine learning are black boxes that do not allow humans to understand how decisions are made. In applications such as medicine and atmospheric science, researchers do not trust such black boxes. Explainable AI can be thought of as attempts to open the black box of neural networks, while interpretable AI focuses on creating clear boxes. Adversarial attacks are small perturbations of data that cause a neural network to misclassify the data or act in other undesirable ways. Such attacks are potentially very dangerous when applied to technology like self-driving cars. The goal of this talk is to introduce mathematicians to problems they can attack using their favorite mathematical tools. The mathematical structure of transformers, the powerhouse behind large language models like ChatGPT, will also be explained.

Speaker: Emily J. King (Colorado State)
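The adversarial attacks described above are easy to demonstrate on a toy model. The sketch below is illustrative only (the linear "classifier" and all numbers are made up, and real attacks target neural networks): it applies an FGSM-style perturbation, moving each coordinate by eps in the sign direction that lowers the classifier's score, and flips the predicted label while changing no coordinate by more than eps.

```python
import numpy as np

# Toy linear classifier: predict +1 if w @ x > 0, else -1 (hypothetical weights)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.1, 0.0])

def predict(v):
    return 1 if w @ v > 0 else -1

# FGSM-style attack: for a linear score w @ x, the gradient w.r.t. x is w,
# so stepping each coordinate by -eps * sign(w) maximally lowers the score
# under an L-infinity budget of eps.
eps = 0.2
x_adv = x - eps * np.sign(w)

orig_pred = predict(x)      # +1
adv_pred = predict(x_adv)   # -1: the small perturbation flips the label
```

For neural networks the same idea applies with the gradient of the loss in place of w, which is the fast gradient sign method of Goodfellow et al.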

Geometry and Topology Seminar (Zoom)
Harnessing Low-Dimensionality for Generalizable and Trustworthy Generative AI

Generative AI has rapidly transformed machine learning, with diffusion and autoregressive models achieving unprecedented performance across vision, language, and scientific discovery. Despite this success, our theoretical understanding still lags far behind practice: why do these models generalize so effectively from finite data in high dimensions? In this talk, I present a mathematical framework showing that intrinsic low-dimensional structure is the key to understanding this phenomenon and provides a foundation for building more trustworthy generative AI. Through the lens of mixtures of low-rank Gaussian models, I show that learning high-dimensional distributions can be reduced to a canonical subspace clustering problem. This connection yields provable guarantees: the sample complexity scales with the intrinsic dimension of the data, rather than the ambient dimension, thereby breaking the curse of dimensionality for generalization.

I will then turn to the role of representation learning in generalization, using two-layer denoising autoencoders as a tractable model to show that the optimal representations and weight structures differ fundamentally between the memorization and generalization regimes. These results offer a unified perspective on how generative models both learn meaningful structure in latent spaces and synthesize new data in high dimensions. We translate these theoretical insights into practical guidelines for controlled generation, ensuring model safety and privacy.

Finally, we contrast the generalization performance of diffusion and autoregressive models in the context of state prediction for stochastic dynamical systems. These findings inform new data assimilation methods, provide critical insights across many scientific applications, and establish a foundation for next-generation generative modeling.

Speaker Bio: Qing Qu is an Assistant Professor in EECS at the University of Michigan. He works at the intersection of the foundations of machine learning, numerical optimization, and signal/image processing, with a current focus on the theory of deep generative models and representation learning. Prior to joining Michigan in 2021, he was a Moore–Sloan Data Science Fellow at the Center for Data Science, New York University (2018–2020). He received his Ph.D. in Electrical Engineering from Columbia University in October 2018 and his B.Eng. in Electrical and Computer Engineering from Tsinghua University in July 2011. His work has been recognized with multiple honors, including the Best Student Paper Award at SPARS 2015, a Microsoft PhD Fellowship in Machine Learning (2016), the Best Paper Award at the NeurIPS Diffusion Models Workshop (2023), NSF CAREER Award (2022), Amazon Research Award (AWS AI, 2023), UM CHS Junior Faculty Award (2025), Google Research Scholar Award (2025), and the 1938E Award in Michigan Engineering (2026). He has led and delivered multiple tutorials at ICASSP, CPAL, CVPR, ICCV, and ICML. He was a founding organizer and Program Chair of the Conference on Parsimony & Learning (CPAL), regularly serves as an Area Chair for NeurIPS, ICML, and ICLR and as a Senior Area Chair for ICASSP’26, and is an Action Editor for TMLR.

Speaker: Qing Qu (University of Michigan)
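The claim that sample complexity can scale with intrinsic rather than ambient dimension rests on data concentrating near a union of low-dimensional subspaces. A minimal numeric illustration (not the speaker's framework; the dimensions, mixture, and seed are arbitrary choices): sample from a mixture of two rank-2 zero-mean Gaussians in a 20-dimensional ambient space and observe that the singular value spectrum of the data matrix collapses after 2r = 4 directions.

```python
import numpy as np

rng = np.random.default_rng(1)
D, r, n = 20, 2, 500  # ambient dim, subspace rank, samples per component

# Two random r-dimensional subspaces of R^D (orthonormal bases via QR)
U1 = np.linalg.qr(rng.standard_normal((D, r)))[0]
U2 = np.linalg.qr(rng.standard_normal((D, r)))[0]

# Mixture of two low-rank, zero-mean Gaussians: each sample lies exactly
# in one of the two subspaces, so the data spans at most 2r directions.
X = np.vstack([rng.standard_normal((n, r)) @ U1.T,
               rng.standard_normal((n, r)) @ U2.T])

s = np.linalg.svd(X, compute_uv=False)
# s[:4] carry all the energy; s[4:] are numerically zero despite D = 20
```

The effective dimension seen by any estimator is 2r = 4, not D = 20, which is the intuition behind intrinsic-dimension sample-complexity bounds.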

Differential Equations Seminar (MSB 111)
Title and abstract: TBA

Speaker: Jeremy Marzuola (UNC)

Geometry and Topology Seminar (Zoom)
Speaker: Jakiw Pidstrigach

Data Seminar (MSB 110)
Consistency-Aware Generalized Matrix Inverses with Applications

We discuss aspects of generalized matrix inverses from a "consistency-aware" perspective. We show that many standard tools in engineering and applied mathematics (e.g., the SVD) are commonly misapplied in ways that undermine solution integrity. We then describe straightforward generalizations of these tools that remedy this situation.

Speaker: Jeffrey Uhlmann (MU)
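The standard SVD-based tool in this area is the Moore–Penrose pseudoinverse, where the usual care point is that near-zero singular values must be truncated rather than inverted. A minimal sketch (not the speaker's construction; the tolerance rtol is an assumed parameter), checked against NumPy's built-in pinv on a rank-deficient matrix:

```python
import numpy as np

def pinv_svd(A, rtol=1e-12):
    """Moore-Penrose pseudoinverse via SVD, truncating tiny singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rtol * s.max()
    # Invert only the significant singular values; inverting near-zero ones
    # would amplify noise without bound.
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return (Vt.T * s_inv) @ U.T  # V diag(s_inv) U^T

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])  # rank 1: second column is twice the first
X = pinv_svd(A)
```

For any A, the result satisfies the Penrose conditions (e.g., A @ X @ A == A), and applied to a least-squares problem it selects the minimum-norm solution.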