We are delighted to have these distinguished plenary speakers.
Yuejie Chi
Professor in the Department of Electrical and Computer Engineering
Carnegie Mellon University
Model-Free Reinforcement Learning: Non-asymptotic Statistical and Computational Guarantees
Abstract
Reinforcement learning (RL) has garnered significant interest in recent years. While classical analyses focus on asymptotic performance, understanding and enhancing the non-asymptotic sample and computational efficiency of RL algorithms is urgently needed to cope with ever-increasing problem dimensions. Answering this call, this talk will discuss our recent progress on two prevalent approaches to model-free RL: value-based RL and policy-based RL. For value-based RL, we will describe the tight sample complexity of Q-learning and its implications for minimax optimality. For policy-based RL, we will discuss the benefits of preconditioning and regularization in policy optimization, which enable fast global convergence at dimension-free rates.
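For readers less familiar with the value-based method analyzed in the talk, the following is a minimal sketch of tabular Q-learning with epsilon-greedy exploration. The gym-style `env` interface and the hyperparameter values are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

def q_learning(env, num_states, num_actions, num_steps=100_000,
               gamma=0.9, lr=0.1, eps=0.1, seed=0):
    """Tabular Q-learning; `env` is assumed to expose reset() -> state
    and step(action) -> (next_state, reward, done), gym-style."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((num_states, num_actions))
    s = env.reset()
    for _ in range(num_steps):
        # Epsilon-greedy exploration; model-free, so no transition model is used.
        if rng.random() < eps:
            a = int(rng.integers(num_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, done = env.step(a)
        # Stochastic-approximation update toward the Bellman optimality target.
        target = r + gamma * np.max(Q[s_next]) * (not done)
        Q[s, a] += lr * (target - Q[s, a])
        s = env.reset() if done else s_next
    return Q
```

The talk's non-asymptotic results concern how many such updates are needed, as a function of the discount factor and the state-action space size, for Q to approximate the optimal value function.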
Biography
Dr. Yuejie Chi is a Professor in the Department of Electrical and Computer Engineering and a faculty affiliate of the Machine Learning Department and CyLab at Carnegie Mellon University. She received her Ph.D. and M.A. from Princeton University, and her B.Eng. (Hon.) from Tsinghua University, all in Electrical Engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning, and inverse problems, with applications in sensing and societal systems, broadly defined. Among other honors, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE) and the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing, held the inaugural Robert E. Doherty Early Career Development Professorship, and was named a Goldsmith Lecturer by the IEEE Information Theory Society.
Na Li
Gordon McKay Professor of Electrical Engineering and Applied Mathematics
School of Engineering and Applied Sciences (SEAS)
Harvard University
The Interplay between Learning and Control in Zeroth-order Methods
Abstract
The recent explosion of data from our physical world has stimulated active research in learning, optimization, and control. Though learning and control have evolved along their own paths, they have long interplayed with each other. In this talk, we will discuss the synergy between learning and control, centering on zeroth-order optimization methods. First, we will discuss how zeroth-order optimization, a learning method, can be used for model-free optimal control of (multi-agent) dynamical systems. We will also discuss the optimization landscape of optimal control for partially observed systems, which is crucial for the success of such learning methods. Then, we will shift our focus to how control tools such as high- and low-pass filters can help reduce the variance of zeroth-order methods and accelerate their convergence.
Joint work with Yujie Tang, Yingying Li, Xin Chen, and Yang Zheng.
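As background, the following is a minimal sketch of the two-point zeroth-order gradient estimator at the heart of such methods: the objective f (e.g., a closed-loop control cost) is accessed only through function evaluations, never through gradients. The smoothing radius and step size below are illustrative choices.

```python
import numpy as np

def zeroth_order_step(f, x, delta=1e-2, lr=1e-2, rng=None):
    """One step of two-point zeroth-order gradient descent on f,
    using only evaluations of f (no derivative information)."""
    rng = rng or np.random.default_rng()
    d = x.size
    # Random direction drawn uniformly from the unit sphere.
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    # Two-point finite-difference estimate of the gradient along u,
    # scaled by d to approximate the gradient of a smoothed version of f.
    g = d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return x - lr * g
```

The variance of this estimator grows with the dimension d, which is precisely where the filtering ideas mentioned in the abstract can help.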
Biography
Na Li is the Gordon McKay Professor of Electrical Engineering and Applied Mathematics at Harvard University. She received her B.S. degree in Mathematics from Zhejiang University in 2007 and her Ph.D. in Control and Dynamical Systems from the California Institute of Technology in 2013. She was a postdoctoral associate at the Massachusetts Institute of Technology from 2013 to 2014. Her research lies in the control, learning, and optimization of networked systems, including theory development, algorithm design, and applications to real-world cyber-physical societal systems. She received the NSF CAREER Award (2016), the AFOSR Young Investigator Award (2017), the ONR Young Investigator Award (2019), the Donald P. Eckman Award (2019), and the McDonald Mentoring Award (2020), along with other awards.
Angelia Nedich
Professor in the School of Electrical, Computer and Energy Engineering
Arizona State University
Distributed Algorithms for Optimization in Networks
Abstract
We will give an overview of distributed optimization algorithms, starting with the basic underlying idea illustrated on a prototype problem in machine learning. In particular, we will focus on a convex minimization problem where the objective function is given as the sum of convex functions, each of which is known only to one agent in a network. The agents communicate over the network with the task of jointly determining a minimizer of the sum of their objective functions. The communication network can vary over time, which is modeled through a sequence of graphs over a static set of nodes (representing the agents in the system). In this setting, we will discuss distributed first-order methods that make use of an agreement protocol, a mechanism that replaces the role of a central coordinator. We will discuss some refinements of the basic method and conclude with more recent developments of fast methods that can match the performance of centralized methods.
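The following is a minimal sketch of the basic scheme the abstract describes: each agent alternates a consensus (agreement) step, averaging with its neighbors, with a gradient step on its private objective. For simplicity the sketch uses a static, doubly stochastic mixing matrix W and a fixed step size; the talk's setting allows time-varying graphs.

```python
import numpy as np

def distributed_gradient_descent(grads, W, x0, steps=500, lr=0.05):
    """Distributed (sub)gradient method with an agreement step.

    grads: list of per-agent gradient oracles, grads[i](x) = grad f_i(x).
    W:     (n, n) doubly stochastic mixing matrix matching the graph.
    x0:    (n, dim) array of initial local iterates, one row per agent.
    """
    X = x0.copy()
    for _ in range(steps):
        # Agreement protocol: each agent averages with its neighbors,
        # replacing the role of a central coordinator.
        X = W @ X
        # Local first-order step on each agent's private objective f_i.
        for i, g in enumerate(grads):
            X[i] -= lr * g(X[i])
    return X
```

With a diminishing step size, the local iterates reach consensus and converge to a minimizer of the sum of the f_i; the "fast methods" mentioned in the abstract refine this template, e.g., via gradient tracking.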
Biography
Angelia Nedich holds a Ph.D. from Moscow State University, Moscow, Russia, in Computational Mathematics and Mathematical Physics (1994), and a Ph.D. from the Massachusetts Institute of Technology, Cambridge, USA, in Electrical Engineering and Computer Science (2002). She worked as a senior engineer at BAE Systems North America, Advanced Information Technology Division, in Burlington, MA. She is the recipient of a 2007 NSF CAREER Award in Operations Research for her work in distributed multi-agent optimization. She is a recipient (jointly with her co-authors) of the Best Paper Award at the Winter Simulation Conference 2013 and the Best Paper Award at the International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt) 2015. She is also a coauthor of the book Convex Analysis and Optimization. Her current interests are in large-scale optimization, games, control, and information processing in networks.
Adam Wierman
Professor in the Department of Computing and Mathematical Sciences (CMS)
California Institute of Technology
Online Optimization and Control using Black-Box Predictions
Abstract
Making use of modern black-box AI tools is potentially transformational for online optimization and control. However, such machine-learned algorithms typically do not have formal guarantees on their worst-case performance, stability, or safety. So, while their performance may improve upon traditional approaches in "typical" cases, they may perform arbitrarily worse in scenarios where the training data are not representative, e.g., due to distribution shift. This represents a significant drawback when considering the use of AI tools for safety-critical applications such as energy systems and autonomous cities. A challenging open question is thus: is it possible to provide guarantees that allow black-box AI tools to be used in safety-critical applications? In this talk, I will introduce recent work that aims to develop algorithms that make use of black-box AI tools to provide good performance in the typical case, while integrating the "untrusted advice" from these tools into traditional algorithms to ensure formal worst-case guarantees. Specifically, we will discuss the use of black-box untrusted advice in the context of online convex body chasing, online non-convex optimization, and linear quadratic control, identifying both novel algorithms and fundamental limits in each case.
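To make the "untrusted advice" idea concrete, here is a toy sketch of one common template: follow the black-box advice while its cumulative cost stays competitive with a trusted robust baseline, and fall back to the baseline otherwise. This is a simplified illustration of the consistency-versus-robustness trade-off, not the specific algorithms from the talk; the interface and the threshold parameter eps are assumptions.

```python
def robustify(advice, baseline, cost, eps=0.5):
    """Play untrusted advice while it remains within a (1 + eps)
    factor of the robust baseline's cumulative cost.

    advice, baseline: sequences of actions for each round t.
    cost:             cost(t, action) -> float, the per-round cost.
    """
    adv_cost = base_cost = 0.0
    played = []
    for t, (a_adv, a_base) in enumerate(zip(advice, baseline)):
        adv_cost += cost(t, a_adv)
        base_cost += cost(t, a_base)
        # Trust the advice only while it stays competitive; otherwise
        # fall back to the action with a worst-case guarantee.
        played.append(a_adv if adv_cost <= (1 + eps) * base_cost else a_base)
    return played
```

Smaller eps hews closer to the worst-case guarantee of the baseline; larger eps extracts more benefit from accurate advice. The talk develops analogous (but more sophisticated) mechanisms for convex body chasing, non-convex optimization, and linear quadratic control.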
Biography
Adam Wierman is a Professor in the Department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology. He is the director of the Information Science and Technology (IST) initiative and served as Executive Officer (a.k.a. Department Chair) of CMS from 2015 to 2020. Additionally, he serves on the advisory boards of the Linde Institute of Economic and Management Sciences and the "Sunlight to Everything" thrust of the Resnick Institute for Sustainability. He received his Ph.D., M.Sc., and B.Sc. in Computer Science from Carnegie Mellon University in 2007, 2004, and 2001, respectively, and has been on the faculty at Caltech since 2007.