Mathematical Foundations of Artificial Intelligence
Our group investigates the mathematical foundations of artificial intelligence, with the goals of uncovering the often mysterious successes and failures of complex machine learning models and of developing theories, methods, and software that expand the frontiers of AI.
We focus in particular on two interrelated research directions within this broad agenda. The first concerns how to incorporate symmetries explicitly into machine learning models (a minimal illustration appears after the list below). The second concentrates on high-dimensional, large-scale optimization problems and the deployment of algorithms on modern computing systems. Our research topics include:
Equivariance of neural networks
Neural differential equations
Learning on manifolds
Compressive sensing
Implicit regularization and theory of deep learning
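As a minimal sketch of the first direction, the snippet below (written in Python with NumPy; the layer construction and all names are illustrative, not our group's code) builds a permutation-equivariant layer in the style of Deep Sets and checks numerically that permuting the input rows permutes the output rows in the same way.

    # Illustrative sketch: a permutation-equivariant layer f(X) = X W + mean(X) V,
    # together with a numerical check of the equivariance property f(P X) = P f(X).
    import numpy as np

    rng = np.random.default_rng(0)

    n, d_in, d_out = 5, 3, 4              # set size, input and output feature dimensions
    W = rng.standard_normal((d_in, d_out))
    V = rng.standard_normal((d_in, d_out))

    def equivariant_layer(X):
        # Per-element linear term plus a shared term built from the (permutation-invariant) mean.
        return X @ W + np.mean(X, axis=0, keepdims=True) @ V

    X = rng.standard_normal((n, d_in))
    P = np.eye(n)[rng.permutation(n)]     # random permutation matrix

    # Permuting then applying the layer agrees with applying the layer then permuting.
    assert np.allclose(equivariant_layer(P @ X), P @ equivariant_layer(X))
    print("layer is permutation-equivariant")

Because the mean over rows is unchanged by any reordering of the inputs, the shared term is the same before and after permutation, which is exactly what makes the layer equivariant rather than merely invariant.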