Plenary speakers
Aaditya Ramdas
Associate Professor
Carnegie Mellon University
A complete generalization of Kelly betting
Abstract
In the 1950s, John Kelly (working at Bell Labs like Claude Shannon) fundamentally connected gambling on coin tosses with Shannon's information theory, and this was soon extended by Leo Breiman (1963) to more general settings. In an excellent 1999 Yale PhD thesis, Jonathan Li defined and studied a concept called the reverse information projection (RIPr), which is a Kullback-Leibler projection of a given probability measure onto a given set of probability measures. Grunwald et al. (2024) showed that the RIPr characterizes the log-optimal bet/e-variable of a point alternative hypothesis against a composite null hypothesis, albeit under several assumptions (convexity, absolute continuity, etc.). In this talk, we will show how to fully generalize the theory underlying Kelly betting and the RIPr, showing that the RIPr is always well defined, without *any* assumptions. Further, a strong duality result identifies it as the dual to an optimal bet/e-variable called the numeraire, which is unique and also always exists without assumptions. This fully generalizes classical Kelly betting to arbitrary composite nulls; the same assumptionless strong duality also holds for Renyi/Hellinger projections (replacing the logarithmic utility by power utilities).
The talk will not assume any prior knowledge of these topics. This is joint work with Martin Larsson and Johannes Ruf, which appeared in the Annals of Statistics (2025).
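For readers who want the central objects written out, here is a brief, hedged sketch in standard notation; the paper's exact definitions and conditions may differ.

```latex
% A hedged sketch of the objects mentioned in the abstract above.
% Kelly betting on a coin with heads-probability p: wagering a fraction f of
% wealth on heads has expected log-growth
\[
  g(f) = p\log(1+f) + (1-p)\log(1-f), \qquad
  f^{\star} = 2p - 1, \qquad
  g(f^{\star}) = \log 2 - H(p),
\]
% where H(p) is the coin's Shannon entropy -- Kelly's link to information theory.
%
% An e-variable for a composite null \(\mathcal{P}\) is a nonnegative X with
\[
  \mathbb{E}_P[X] \le 1 \quad \text{for every } P \in \mathcal{P}.
\]
% For a point alternative Q, the log-optimal bet (the "numeraire") and the
% reverse information projection (RIPr) of Q onto the null are
\[
  X^{\star} \in \arg\max_{X \ \text{e-variable}} \mathbb{E}_Q[\log X],
  \qquad
  P^{\star} \in \arg\min_{P} D(Q \,\|\, P),
\]
% and strong duality identifies them via \( X^{\star} = \mathrm{d}Q / \mathrm{d}P^{\star} \)
% (with the minimum taken over a suitable enlargement of the null set).
```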
Biography
Aaditya Ramdas is a tenured Associate Professor in the Department of Statistics and Data Science and the Machine Learning Department at Carnegie Mellon University. He was a postdoc at UC Berkeley (2015–2018) mentored by Michael Jordan and Martin Wainwright, and obtained his PhD at CMU (2010–2015) under Aarti Singh and Larry Wasserman, receiving the Umesh K. Gavaskar Memorial Thesis Award. His work has been recognized by a Presidential Early Career Award (PECASE), an NSF CAREER Award, and a Sloan Fellowship in Mathematics. His research focuses on algorithms with theoretical guarantees in mathematical statistics and learning, specifically post-selection inference, game-theoretic statistics, and predictive uncertainty quantification.
Melanie Weber
Assistant Professor
Harvard University
Feature Geometry guides Model Design in Deep Learning
Abstract
The geometry of learned features can provide crucial insights on model design in deep learning. In this talk, we discuss two recent lines of work that reveal how the evolution of learned feature geometry during training both informs and is informed by architecture choices. First, we explore how deep neural networks transform the input data manifold by tracking its evolving geometry through discrete approximations via geometric graphs that encode local similarity structure. Analyzing the graphs’ geometry reveals that as networks train, the models’ nonlinearities drive geometric transformations akin to a discrete Ricci flow. This perspective yields practical insights for early stopping and network depth selection informed by data geometry. The second line of work concerns learning under symmetry, including permutation symmetry in graphs or translation symmetry in images. Group-convolutional architectures can encode such structure as inductive biases, which can enhance model efficiency. However, with increased depth, conventional group convolutions can suffer from instabilities that manifest as loss of feature diversity. A notable example is oversmoothing in graph neural networks. We discuss unitary group convolutions, which provably stabilize feature evolution across layers, enabling the construction of deeper networks that are stable during training.
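As a purely illustrative companion to the first line of work: one simple way to probe the geometry of learned features is to build a k-nearest-neighbor graph on a layer's activations and compute a per-edge curvature proxy. The sketch below uses the augmented Forman-Ricci curvature as that proxy; the graph construction, curvature notion, and flow actually used in the speaker's work may differ.

```python
# Hedged sketch: probe feature geometry via a kNN graph and a simple
# combinatorial curvature proxy. Not the speaker's method -- an illustration only.
import numpy as np

def knn_adjacency(X: np.ndarray, k: int = 10) -> np.ndarray:
    """Symmetric, unweighted k-nearest-neighbor adjacency on the rows of X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-edges
    nn = np.argsort(d, axis=1)[:, :k]
    A = np.zeros_like(d, dtype=int)
    A[np.repeat(np.arange(len(X)), k), nn.ravel()] = 1
    return np.maximum(A, A.T)                   # symmetrize

def forman_curvatures(A: np.ndarray) -> dict:
    """Augmented Forman-Ricci curvature per undirected edge:
    4 - deg(u) - deg(v) + 3 * (#triangles containing the edge)."""
    deg = A.sum(axis=1)
    tri = A @ A                                 # tri[u, v] = common neighbors of u, v
    return {(u, v): 4 - deg[u] - deg[v] + 3 * tri[u, v]
            for u, v in zip(*np.triu(A, 1).nonzero())}

# Example: stand-in "activations" of one layer; in practice one would recompute
# the graph across layers/epochs and track how the curvature distribution shifts.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 32))
curv = forman_curvatures(knn_adjacency(feats, k=10))
print(f"{len(curv)} edges, mean curvature {np.mean(list(curv.values())):.2f}")
```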
Biography
Melanie Weber is an Assistant Professor of Applied Mathematics and Computer Science at Harvard University, where she leads the Geometric Machine Learning Group. Her research utilizes geometric structure in data to design efficient machine learning and optimization methods with provable guarantees. Previously, she was a Hooke Research Fellow at the University of Oxford and a Research Fellow at the Simons Institute in Berkeley. She received her PhD from Princeton University in 2021 under the supervision of Charles Fefferman. She has also held visiting positions at MIT and the Max Planck Institute, and interned at Facebook, Google, and Microsoft. Her work is supported by the NSF, DARPA, the Sloan Foundation, and the Aramont Foundation.
Preston Culbertson
Assistant Professor
Cornell University
Robust Robot Behavior Through Online Optimization
Abstract
Much recent progress in robotics has been driven by learned policies trained offline in simulation or from large datasets, enabling impressive demonstrations in locomotion and manipulation. Despite this success, a key challenge remains: robots must operate in the “long tail” of environments, where objectives, dynamics, and constraints differ from those seen in training, often causing fixed policies to break down. In this talk, I present online optimization as a unifying approach to robust robot behavior: actions are adapted at runtime by solving optimization problems whose costs and constraints are specified at execution time, with learning used to provide models, priors, or uncertainty estimates that make this adaptation tractable. I will present real-world examples from humanoid locomotion, whole-body manipulation, and in-hand dexterous manipulation that demonstrate reliable, risk-sensitive behavior in contact-rich and difficult-to-model environments.
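A minimal sketch of the receding-horizon pattern behind "online optimization," assuming a toy double-integrator system and a generic numerical solver (scipy) rather than the speaker's platforms: at each control step, a small trajectory problem whose goal and limits are supplied at runtime is re-solved, and only the first action is applied.

```python
# Hedged, generic receding-horizon loop; the system, cost, and solver are
# illustrative stand-ins, not the speaker's robots or algorithms.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 15

def dynamics(x, u):
    """Double-integrator step: state = [position, velocity]."""
    return np.array([x[0] + DT * x[1], x[1] + DT * u])

def rollout_cost(u_seq, x0, goal, u_max):
    """Cost of an action sequence: tracking error + effort + soft input limit."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = dynamics(x, u)
        cost += (x[0] - goal) ** 2 + 0.01 * u ** 2
        cost += 100.0 * max(0.0, abs(u) - u_max) ** 2   # soft constraint penalty
    return cost

def mpc_step(x0, goal, u_max, u_warm):
    """Re-solve the horizon problem at runtime; apply first action, keep warm start."""
    res = minimize(rollout_cost, u_warm, args=(x0, goal, u_max), method="L-BFGS-B")
    u_seq = res.x
    return u_seq[0], np.append(u_seq[1:], 0.0)          # shifted warm start

# Closed loop: the objective (goal) and limits may change at execution time.
x, u_warm = np.array([0.0, 0.0]), np.zeros(HORIZON)
for t in range(50):
    goal = 1.0 if t < 25 else -0.5                      # runtime-specified goal
    u, u_warm = mpc_step(x, goal, u_max=2.0, u_warm=u_warm)
    x = dynamics(x, u)
print("final state:", x)
```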
Biography
Preston Culbertson is an Assistant Professor of Computer Science at Cornell University. Prior to joining Cornell, he was a research scientist at the Robotics and AI Institute and a postdoctoral scholar at the California Institute of Technology. He received his PhD and MS from Stanford University and his BS from the Georgia Institute of Technology, all in mechanical engineering. His research draws on optimization, control theory, and machine learning to develop robotic systems that remain reliable when models, sensing, or hardware are imperfect.
Anurag Anshu
Assistant Professor
Harvard University
The theory of learnability of local Hamiltonians from Gibbs states
Abstract
Quantum many-body systems in thermal equilibrium (also known as Gibbs states) describe a wide range of physical phenomena and appear naturally in many areas of physics. An important challenge is to learn the underlying Hamiltonian that governs such systems using information obtained from these quantum states. This problem has become a central topic in quantum learning theory, as it connects theoretical questions with experimentally relevant tasks. In this talk, I will give an accessible introduction to the problem and survey recent progress on efficient learning algorithms. I will also highlight several open questions that offer exciting directions for future research.
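To make the setup concrete, here is a hedged toy sketch on three qubits, assuming a transverse-field Ising chain and a textbook maximum-entropy (free-energy minimization) recovery of the coefficients; the efficient algorithms surveyed in the talk are more sophisticated and operate at much larger scales.

```python
# Toy illustration of Hamiltonian learning from a Gibbs state (3 qubits).
# Forward model: rho(lam) = exp(-beta * H(lam)) / Z, with H(lam) = sum_a lam_a * P_a
# for fixed local Pauli terms P_a. Given the local expectations Tr(rho * P_a),
# we recover lam by minimizing the convex free-energy objective
#   F(mu) = log Tr exp(-beta * H(mu)) + beta * mu . e_true,
# whose gradient is beta * (e_true - e_model). Illustration only, not the
# algorithms discussed in the talk.
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
kron_all = lambda ops: reduce(np.kron, ops)

# Local terms of a 3-qubit transverse-field Ising chain: Z_i Z_{i+1} and X_i.
terms = [kron_all([Z, Z, I2]), kron_all([I2, Z, Z]),
         kron_all([X, I2, I2]), kron_all([I2, X, I2]), kron_all([I2, I2, X])]

def gibbs(lam, beta):
    H = sum(l * P for l, P in zip(lam, terms))
    rho = expm(-beta * H)
    return rho / np.trace(rho)

beta, lam_true = 1.0, np.array([1.0, -0.7, 0.5, 0.3, -0.2])
e_true = np.array([np.trace(gibbs(lam_true, beta) @ P).real for P in terms])

# Gradient descent on the convex objective; should approximately recover lam_true.
lam = np.zeros(len(terms))
for _ in range(3000):
    rho = gibbs(lam, beta)
    e_model = np.array([np.trace(rho @ P).real for P in terms])
    lam -= 0.2 * beta * (e_true - e_model)
print("recovered:", np.round(lam, 3), " true:", lam_true)
```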
Biography
Anurag Anshu is an Assistant Professor of Computer Science at Harvard University. He specializes in quantum complexity theory, quantum many-body systems, and quantum learning theory. Anurag previously served as a postdoctoral researcher at the Simons Institute and UC Berkeley (2020–2021), and at the Institute for Quantum Computing and the Perimeter Institute (2018–2020). He received his PhD from the Centre for Quantum Technologies at the National University of Singapore under the supervision of Rahul Jain, earning the Dean's Graduate Research Excellence Award, and holds a Bachelor's degree in Computer Science from IIT Guwahati.
Massachusetts Institute of Technology