# Autonomy, Controls, and Applications

##### Songyuan Zhang

**GCBF+: A Novel Framework to Train Neural Graph Control Barrier Functions for Safe Multi-agent Control**

Distributed, scalable, and safe control of large-scale multi-agent systems (MAS) is a challenging problem. In this paper, we design a distributed framework for safe multi-agent control in large-scale environments with obstacles, where a large number of agents are required to maintain safety using only local information and reach their goal locations. We introduce a new class of certificates, termed graph control barrier function (GCBF), which are based on the well-established control barrier function (CBF) theory for safety guarantees and utilize a graph structure for scalable and generalizable distributed control of MAS. We develop a novel theoretical framework to prove the safety of an arbitrary-sized MAS with a single GCBF. We propose a new training framework, GCBF+, that uses graph neural networks (GNNs) to parameterize a candidate GCBF and a distributed control policy. The proposed framework is distributed and is capable of directly taking point clouds from LiDAR, instead of actual state information, for real-world robotic applications. We illustrate the efficacy of the proposed method through various hardware experiments on a swarm of drones with objectives ranging from exchanging positions to docking on a moving target without collision. Additionally, we perform extensive numerical experiments in which the number and density of agents, as well as the number of obstacles, increase. Empirical results show that in complex environments with nonlinear agents (e.g., Crazyflie drones), GCBF+ outperforms the best-performing handcrafted CBF-based method by up to 20% for relatively small-scale MAS with up to 256 agents, and leading reinforcement learning (RL) methods by up to 40% for MAS with 1024 agents.
Furthermore, the proposed method does not compromise on the performance, in terms of goal reaching, for achieving high safety rates, which is a common trade-off in RL-based methods.
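CBF theory certifies safety by requiring the barrier value not to decay too fast along trajectories. A minimal sketch of the discrete-time pairwise condition that a graph CBF generalizes (names such as `safe_radius` and `alpha` are illustrative, not taken from the paper):

```python
def h(xi, xj, safe_radius=1.0):
    """Pairwise barrier: positive iff agents i and j are safely separated."""
    dx = xi[0] - xj[0]
    dy = xi[1] - xj[1]
    return dx * dx + dy * dy - safe_radius ** 2

def cbf_condition_holds(xi, xj, xi_next, xj_next, alpha=0.5):
    """Discrete-time CBF condition: h(x+) - h(x) >= -alpha * h(x).

    If this holds at every step (and h starts positive), h stays positive,
    i.e., the pair never collides. GCBF enforces an analogous condition
    jointly over each agent's graph neighborhood.
    """
    return h(xi_next, xj_next) - h(xi, xj) >= -alpha * h(xi, xj)
```

A learned controller can be filtered through such a check: a nominal action is kept only if the resulting next state satisfies the condition for all neighbors.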

##### Kunal Garg

**Hierarchical control for multi-agent control**

Multi-agent robotic systems often require control design for a multi-objective problem, such as maintaining a safe distance from other agents as well as obstacles, maintaining network connectivity for building team knowledge, and completing team objectives for performance. Such problems are intractable in the centralized framework for large-scale systems. Thus, a distributed framework is necessary where each agent only requires its neighbors' information while being able to contribute towards completing the team objective. However, a decentralized control framework often leads to a sub-optimal solution, resulting in the system getting stuck in local minima or a deadlock. We address the issue of deadlock resolution via a hierarchical control framework. We propose a high-level planner for temporary goal assignment and a low-level controller that drives the agents to their assigned goals.

##### Charles Dawson

**What can we learn from a single failure? Data-driven post-mortem analysis using graduated amortized variational inference**

Post-mortem failure analysis (the process of examining data recorded during a failure to understand what went wrong) is a critical function for engineers developing complex systems like autonomous vehicles and critical cyberphysical infrastructure. However, it also poses a particular challenge for existing inference algorithms due to the relative lack of data from failure events. In this work, we study the problem of posterior density estimation and generative modeling in this sparse-data regime, proposing a novel algorithm for graduated amortized variational inference (GAVI) for learning a representation that smoothly interpolates between nominal and failure cases. We demonstrate the performance of our method on a range of challenging data-constrained posterior inference problems, including an extensive case study applying our method to a post-mortem analysis of the 2022 Southwest Airlines scheduling crisis.

##### Zeyu Jia

**When is Agnostic Reinforcement Learning Statistically Tractable?**

We study the problem of agnostic PAC reinforcement learning (RL): given a policy class Π, how many rounds of interaction with an unknown MDP (with a potentially large state and action space) are required to learn an ϵ-suboptimal policy with respect to Π? Towards that end, we introduce a new complexity measure, called the spanning capacity, that depends solely on the set Π and is independent of the MDP dynamics. With a generative model, we show that for any policy class Π, bounded spanning capacity characterizes PAC learnability. However, for online RL, the situation is more subtle. We show there exists a policy class Π with a bounded spanning capacity that requires a superpolynomial number of samples to learn. This reveals a surprising separation for agnostic learnability between generative access and online access models (as well as between deterministic/stochastic MDPs under online access). On the positive side, we identify an additional sunflower structure, which in conjunction with bounded spanning capacity enables statistically efficient online RL via a new algorithm called POPLER, which takes inspiration from classical importance sampling methods as well as techniques for reachable-state identification and policy evaluation in reward-free exploration.

# Information and Networks

##### Vallabh Ramakanth

**Monitoring Correlated Sources: AoI-based Scheduling is Nearly Optimal**

We study the design of scheduling policies to minimize monitoring error for a collection of correlated sources, where only one source can be observed at any given time. We model correlated sources as a discrete-time Wiener process, where the increments are multivariate normal random variables with a general covariance matrix that captures the correlation structure between the sources. Under a Kalman filter-based optimal estimation framework, we show that the performance of all scheduling policies that are oblivious to instantaneous error can be lower- and upper-bounded by the weighted sum of Age of Information (AoI) across the sources, for appropriately chosen weights. We use this insight to design scheduling policies that are only a constant factor away from optimality, and make the rather surprising observation that AoI-based scheduling that ignores correlation is sufficient to obtain performance guarantees. We also derive scaling results showing that the optimal error scales roughly as the square of the dimensionality of the system, even in the presence of correlation. Finally, we provide simulation results to verify our claims.
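An AoI-based policy of the kind the talk analyzes can be sketched in a few lines; the max-weight rule below is illustrative (the paper's weights come from the covariance structure, which this toy version takes as given):

```python
def weighted_aoi_step(ages, weights):
    """One time slot of a max-weight AoI scheduler.

    Observe the source whose weighted age is largest, reset its age to 1,
    and age every other source by one. Returns (new_ages, observed_index).
    Ignoring instantaneous error and correlation, as the abstract notes,
    still yields constant-factor guarantees.
    """
    k = max(range(len(ages)), key=lambda i: weights[i] * ages[i])
    return [1 if i == k else a + 1 for i, a in enumerate(ages)], k
```

Repeatedly applying `weighted_aoi_step` keeps each source's age proportional to the inverse square root of its weight on average, which is the usual shape of AoI-optimal schedules.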

##### Chenyu Wang

**Removing Biases from Molecular Representations via Information Maximization**

High-throughput drug screening, using cell imaging or gene expression measurements as readouts of drug effect, is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug. Since large-scale screens have to be divided into multiple experiments, a key difficulty is dealing with batch effects, which can introduce systematic errors and non-biological associations in the data. We propose InfoCORE, an Information maximization approach for COnfounder REmoval, to effectively deal with batch effects and obtain refined molecular representations. InfoCORE establishes a variational lower bound on the conditional mutual information of the latent representations given a batch identifier, and adaptively reweights samples to equalize their implied batch distribution. Extensive experiments on drug screening data reveal InfoCORE's superior performance on a multitude of tasks, including molecular property prediction and molecule-phenotype retrieval. Additionally, we show how InfoCORE offers a versatile framework that addresses general distribution shifts and issues of data fairness by minimizing correlation with spurious features or removing sensitive attributes.

##### Theodore Grunberg

**Error bounds for the linear noise approximation of chemical reaction networks**

Stochastic chemical reaction networks are continuous-time Markov chains that describe the evolution of the molecular counts of chemical species interacting via discrete reaction events. Approximations to this continuous-time Markov chain model that rely on the molecular counts and volume being large, such as the reaction rate equations (a deterministic approximation) and the linear noise approximation (a diffusion approximation), are often used due to their computational and analytical simplicity. However, the validity of the linear noise approximation when applied to stationary distributions has remained an open question. In this work, we use Stein's method to establish non-asymptotic error bounds for the linear noise approximation as applied to stationary distributions of stochastic chemical reaction networks. Further, we give sufficient conditions under which global exponential stability of the reaction rate equations implies the validity of the linear noise approximation. These results have applications to systems and synthetic biology, where the linear noise approximation is especially common.
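For readers unfamiliar with the two approximations, their standard textbook forms (in a generic notation with stoichiometric matrix $S$ and propensity vector $f$; not necessarily the paper's notation) are:

```latex
% Reaction rate equations (deterministic approximation):
\frac{dx}{dt} = S f(x).
% Linear noise approximation about a stable equilibrium x^* with S f(x^*) = 0:
% fluctuations are Gaussian with stationary covariance \Sigma solving the
% Lyapunov equation
J \Sigma + \Sigma J^\top + S \,\mathrm{diag}\bigl(f(x^*)\bigr)\, S^\top = 0,
\qquad J = S \,\frac{\partial f}{\partial x}(x^*).
```

The open question addressed here is how far the true stationary distribution of the Markov chain can be from this Gaussian, quantified non-asymptotically.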

##### Guoqing Wang

**Harnessing quantum control for quantum sensing and simulation**

Quantum science and technology have been promising fields in the past decades, offering advantages over classical counterparts in difficult computations, simulations, and precision measurements. Pioneering platforms like solid-state spin defects and neutral atom arrays trapped in optical tweezers have led this evolution, ushering in an era where technologies such as nanoscale quantum magnetometers move from laboratories to practical applications. Central to the operational success of these quantum applications is the pivotal role played by quantum control techniques. In this talk, I will introduce how the ongoing development of quantum control techniques is driving the progress of quantum sensing and simulation applications. I will present a few representative works such as the development of a quantum mixer capable of sensing arbitrary frequency signals, a nanoscale sensor for probing vector electromagnetic signals, as well as protection of quantum coherence in large-scale quantum platforms.

# Machine Learning and Statistics

##### Reese Pathak

**The LASSO can be suboptimal and why that might matter**

The LASSO is a popular tool in regression analysis, partly because it encourages sparse parameter estimates. When given very "nice" covariates, it is known that the LASSO is statistically optimal. However, what about more general covariates? Surprisingly, the LASSO can actually be highly suboptimal: it can require polynomially more samples than necessary to achieve a given error level. Fortunately, we show that this issue can be easily remedied by using a different, simple, computationally efficient procedure based on soft-thresholding. In this talk, we present theoretical results and numerical simulations, and discuss why our observation could be relevant to emerging ML applications involving distribution shift. Based on joint work with Cong Ma (U Chicago).
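The soft-thresholding operator at the heart of the remedy is a one-liner; the threshold `tau` below is an illustrative tuning parameter, not the talk's specific choice:

```python
def soft_threshold(y, tau):
    """Elementwise soft-thresholding: S_tau(v) = sign(v) * max(|v| - tau, 0).

    Shrinks every coordinate toward zero by tau and zeroes out anything
    smaller than tau in magnitude, producing a sparse estimate.
    """
    return [(1.0 if v >= 0 else -1.0) * max(abs(v) - tau, 0.0) for v in y]
```

Applied to (suitably transformed) observations, this gives a simple estimator whose sample complexity does not suffer the polynomial blow-up the talk identifies for the LASSO.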

##### Abhin Shah

**Doubly Robust Inference in Causal Latent Factor Models**

We introduce a new framework for estimating treatment effects under unobserved confounding in modern data-rich environments featuring a large number of units and outcomes. Our approach uses matrix completion as a key subroutine, and combines outcome imputation and inverse probability weighting resulting in a doubly robust estimator. We derive finite-sample and asymptotic guarantees, and show that the error of our estimator converges to a mean-zero Gaussian distribution at a parametric rate. Our method utilizes a novel cross-fit procedure, which enables us to only require making black-box conditions on the underlying matrix completion algorithm.
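The doubly robust combination of outcome imputation and inverse probability weighting has a classical (AIPW-style) form, sketched below in one dimension; the paper's estimator obtains the imputed outcomes and propensities from matrix completion, which this toy version simply takes as inputs:

```python
def aipw_ate(y, t, propensity, mu1, mu0):
    """Augmented IPW (doubly robust) estimate of the average treatment effect.

    y          : observed outcomes
    t          : binary treatment indicators
    propensity : estimated treatment probabilities
    mu1, mu0   : imputed outcomes under treatment / control

    Unbiased if EITHER the outcome models (mu1, mu0) OR the propensities
    are correct -- hence "doubly robust".
    """
    n = len(y)
    total = 0.0
    for i in range(n):
        term1 = mu1[i] + t[i] * (y[i] - mu1[i]) / propensity[i]
        term0 = mu0[i] + (1 - t[i]) * (y[i] - mu0[i]) / (1.0 - propensity[i])
        total += term1 - term0
    return total / n
```

When the outcome models are exact, the correction terms vanish and the estimate reduces to the mean of `mu1 - mu0`.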

##### Hannah Schlueter

**Detecting lineage-related gene expression patterns**

Recent barcoding technologies allow reconstructing lineage trees of embryos or tumors while capturing paired single-cell RNA-sequencing (sc-RNA-seq) data. Such datasets provide opportunities to identify lineage-informative genes or subtrees where lineage and expression are tightly coupled, and compare gene expression memory maintenance through divisions across different biological systems. We develop a novel computational approach to these problems combining representation learning, a permutation strategy, and tree-likeness scores. We validate our method using synthetic data and apply it to recent paired lineage and sc-RNA-seq data of lung cancer and embryogenesis obtained from a mouse model. Our method pinpoints subtrees giving rise to metastases or new cell states, and genes identified as most informative about lineage overlap with known pathways involved in lung cancer progression. Further, our method highlights differences in how gene expression memory is maintained through divisions in cancer and development, thereby providing a tool for studying cell state memory across biological systems.

##### Kaan Sel

**Physics-informed Neural Networks for Modeling Physiological Time Series**

Scientific machine learning is an emerging interdisciplinary field that combines scientific principles, physical laws, methods, and models with machine learning and artificial intelligence (AI) techniques to solve complex scientific problems. One particular technique is physics-informed neural networks (PINNs), which guide AI training by incorporating the physical laws and constraints defined for the system into the neural network loss function. PINNs can overcome certain challenges in precision health applications, which arise mainly from the low availability of ground-truth data and leave most state-of-the-art AI techniques lacking robustness, rendering them ineffective. A particular example is cuffless blood pressure estimation using advanced wearables, where access to ground-truth data for model optimization collected from abnormal states (i.e., hypertension) at personalized levels is challenging, burdensome, and in some cases infeasible. In this talk, we will introduce PINNs specifically designed for time-series modeling to extract complex cardiovascular information with reduced dependency on ground-truth data. We achieve this by building a Taylor approximation for gradually changing known cardiovascular relationships between input and output (e.g., wearable sensor measurements to blood pressure) and incorporating this approximation into the neural network training, leveraging the inherent auto-differentiation process. We demonstrate the effectiveness of the framework through a case study on continuous cuffless blood pressure estimation from time-series wearable bioimpedance data, and show that by using PINNs over the state-of-the-art time series models tested on the same datasets, we retain high accuracy while significantly reducing the necessary amount of ground-truth training data.
PINNs for physiological time series could be helpful in developing future AI algorithms to help interpret pervasive physiologic data using minimal amounts of training data and may also enable verification, validation and uncertainty quantification of neural network models to build confidence and trust in their predictions.
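The general shape of a physics-informed loss is a data term on the few labeled points plus a residual term enforcing a known input-output relationship. A minimal sketch, assuming a hypothetical constant-gain relationship `gain` and using a finite difference in place of the auto-differentiation the talk relies on:

```python
def pinn_loss(model, labeled, unlabeled, gain=2.0, lam=0.5, eps=1e-4):
    """Toy physics-informed loss.

    labeled   : (input, ground-truth output) pairs -- typically scarce
    unlabeled : inputs with no ground truth -- typically plentiful
    gain      : hypothetical known sensitivity of output to input,
                standing in for a first-order Taylor relationship

    The physics term penalizes deviation of the model's local slope from
    the known relationship, so unlabeled data also constrains training.
    """
    data = sum((model(z) - y) ** 2 for z, y in labeled) / max(len(labeled), 1)
    phys = sum(((model(z + eps) - model(z)) / eps - gain) ** 2
               for z in unlabeled) / max(len(unlabeled), 1)
    return data + lam * phys
```

A model consistent with both the labels and the assumed relationship drives this loss to zero, which is why far fewer labeled points are needed than with a purely data-driven loss.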

##### Mehrdad Ghadiri

**Finite Population Regression Adjustment and Non-asymptotic Guarantees for Treatment Effect Estimation**

The design and analysis of randomized experiments is fundamental to many areas, from the physical and social sciences to industrial settings. Regression adjustment is a popular technique to reduce the variance of estimates obtained from experiments, by utilizing information contained in auxiliary covariates. While there is a large literature within the statistics community studying various approaches to regression adjustment and their asymptotic properties, little focus has been given to approaches in the finite population setting with non-asymptotic accuracy bounds. Further, prior work typically assumes that an entire population is exposed to an experiment, whereas practitioners often seek to minimize the number of subjects exposed to an experiment, for ethical and pragmatic reasons. In this work, we study the problems of estimating the sample mean, individual treatment effects, and average treatment effect with regression adjustment. We propose approaches that use techniques from randomized numerical linear algebra to sample a subset of the population on which to perform an experiment. We give non-asymptotic accuracy bounds for our methods and demonstrate that they compare favorably with prior approaches.
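Classical regression adjustment, which this work builds on, can be sketched in its simplest one-covariate (Lin-style, separate slopes per arm) form; this is background illustration only, not the paper's sketching-based sampling procedure:

```python
def fit_line(xs, ys):
    """Least-squares line through (xs, ys); returns (intercept, slope)."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((xs[i] - xm) * (ys[i] - ym) for i in range(n))
             / sum((v - xm) ** 2 for v in xs))
    return ym - slope * xm, slope

def adjusted_ate(y, t, x):
    """Regression-adjusted average treatment effect.

    Fit a separate regression line within each arm, then compare the two
    predictions at the overall covariate mean. This removes the part of
    the naive difference-in-means driven by covariate imbalance.
    """
    xbar = sum(x) / len(x)
    preds = {}
    for arm in (0, 1):
        xs = [x[i] for i in range(len(x)) if t[i] == arm]
        ys = [y[i] for i in range(len(y)) if t[i] == arm]
        a, b = fit_line(xs, ys)
        preds[arm] = a + b * xbar
    return preds[1] - preds[0]
```

On data where the treated units happen to have smaller covariate values, the naive difference-in-means is biased while the adjusted estimate recovers the true effect.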

# Algorithms, Optimization, and Game Theory

##### Rohan Alur

**Distinguishing the Indistinguishable: Human Expertise in Algorithmic Prediction**

We introduce a novel framework for incorporating human expertise into supervised prediction. Our approach focuses on the use of human judgment to refine predictions by distinguishing inputs which "look the same" to any feasible predictive algorithm. We argue that this framing clarifies the problem of human/AI collaboration in prediction tasks, as experts often have access to information, particularly subjective information, which is not present in the training data. We use this insight to incorporate human feedback in a way that provably improves the performance of any feasible learning algorithm. We find empirically that although algorithms often outperform their human counterparts on average, human judgment can significantly improve algorithmic predictions on specific instances (which can be identified ex ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity, thus enabling effective human-AI collaboration.

##### Sean Sinclair

**Online Fair Allocation of Perishable Resources**

We consider a practically motivated variant of the canonical online fair allocation problem: a decision-maker has a budget of resources to allocate over a fixed number of rounds. Each round sees a random number of arrivals, and the decision-maker must commit to an allocation for these individuals before moving on to the next round. In contrast to prior work, we consider a setting in which resources are perishable. The goal is to construct a sequence of allocations that is envy-free and efficient. We design an algorithm that takes as input (i) a prediction of the perishing order, and (ii) a desired bound on envy. Given the remaining budget in each period, the algorithm uses forecasts of future demand and perishing to adaptively choose one of two carefully constructed guardrail quantities. We characterize conditions under which our algorithm achieves the optimal envy-efficiency Pareto frontier. Our results emphasize the importance of high-fidelity predictions of future perishing, all the while highlighting classes of perishable goods for which a decision-maker cannot hope to achieve classical notions of fairness.
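The flavor of a two-guardrail rule can be conveyed with a hypothetical sketch; the rule, names, and thresholds below are illustrative stand-ins, not the paper's construction (which also accounts for perishing predictions):

```python
def guardrail_choice(remaining_budget, n_arrivals, future_demand_ub,
                     x_low, eps):
    """Hypothetical guardrail rule for one round.

    x_low  : conservative per-person allocation (the lower guardrail)
    eps    : desired envy bound; the generous upper guardrail is
             x_high = x_low * (1 + eps), so no one ever envies another
             person's allocation by more than a factor (1 + eps)
    Give x_high only when the remaining budget would still cover a
    pessimistic forecast of future demand at the baseline level.
    """
    x_high = x_low * (1 + eps)
    if remaining_budget - n_arrivals * x_high >= future_demand_ub * x_low:
        return x_high
    return x_low
```

Because every allocation is one of just two levels a factor (1 + eps) apart, envy is bounded by construction, while switching to the generous level whenever the budget safely allows it recovers efficiency.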

##### Vishwak Srinivasan

**Sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin algorithm**

In this talk, I will propose a new method called the Metropolis-adjusted Mirror Langevin algorithm for approximate sampling from distributions whose support is a compact and convex set. This algorithm adds an accept-reject filter to the Markov chain induced by a single step of the mirror Langevin algorithm, which is a basic discretisation of the mirror Langevin dynamics. Due to the inclusion of this filter, the proposed method is unbiased relative to the target, while known discretisations of the mirror Langevin dynamics including the mirror Langevin algorithm have an asymptotic bias. We give upper bounds for the mixing time of this new algorithm, and corroborate our theoretical findings with some numerical experiments.
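A sketch of the scheme in the simplest setting: a uniform target on (0, 1) with the entropic mirror map φ(x) = x log x + (1 − x) log(1 − x), so φ''(x) = 1/(x(1 − x)). The proposal is one mirror Langevin step in the dual (logit) space; the Metropolis filter then removes the discretisation bias. Step size `h` and all defaults are illustrative, not the talk's settings:

```python
import math
import random

def sigmoid(y):
    """Numerically stable logistic map: the inverse of the entropic mirror map."""
    if y >= 0:
        return 1.0 / (1.0 + math.exp(-y))
    e = math.exp(y)
    return e / (1.0 + e)

def mamla_uniform01(n_steps=2000, h=0.1, x0=0.5, seed=1):
    """Metropolis-adjusted mirror Langevin sketch for Uniform(0, 1)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        y = math.log(x / (1.0 - x))           # dual variable y = phi'(x)
        var = 2.0 * h / (x * (1.0 - x))       # proposal variance 2h * phi''(x)
        y_new = y + math.sqrt(var) * rng.gauss(0.0, 1.0)  # grad V = 0 here
        x_new = sigmoid(y_new)
        if not (1e-12 < x_new < 1.0 - 1e-12):
            samples.append(x)                 # reject degenerate proposals
            continue
        var_rev = 2.0 * h / (x_new * (1.0 - x_new))
        # Target density in y-space is x(1-x); proposals are Gaussian in y.
        log_fwd = -(y_new - y) ** 2 / (2 * var) - 0.5 * math.log(var)
        log_rev = -(y - y_new) ** 2 / (2 * var_rev) - 0.5 * math.log(var_rev)
        log_acc = (math.log(x_new * (1.0 - x_new)) + log_rev
                   - math.log(x * (1.0 - x)) - log_fwd)
        if math.log(rng.random()) < log_acc:
            x = x_new
        samples.append(x)
    return samples
```

Every sample stays strictly inside the constraint set by construction, and the accept-reject step makes the chain exactly stationary for the target, which is the point of the Metropolis adjustment.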

##### Jiayu (Kamessi) Zhao

**Two-Sided Flexibility in Matching Platforms**

We study a parsimonious model of a two-sided matching platform, wherein nodes form random edges in a bipartite graph before a centralized market designer finds a maximum matching. In order to increase the size of the matching, the designer can (at a cost) incentivize parts of the market to be more flexible, thus increasing the chance that they form edges. Surprisingly, in many instances of interest, the optimal solution incentivizes nodes on only one side of the market to be flexible, even when the instance is entirely symmetric. However, when both sides are managed by distinct managers with the same objective, they may face a sub-optimal Nash equilibrium in which they incentivize their respective sides equally. Our work initiates the study of multi-sided platform flexibility, and identifies matching dynamics that govern optimal flexibility allocations.
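The designer's subroutine, finding a maximum matching once edges have formed, is standard; a compact implementation (Kuhn's augmenting-path algorithm, shown for illustration) also makes the role of flexibility concrete: a more flexible node simply has more edges in its adjacency list.

```python
def max_matching(adj, n_right):
    """Maximum matching in a bipartite graph via augmenting paths.

    adj[u] lists the right-side nodes that left-side node u has an edge to;
    n_right is the number of right-side nodes. Returns the matching size.
    """
    match_r = [-1] * n_right  # match_r[v] = left node matched to v, or -1

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-routed
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))
```

In the tiny example in the test, making a single left-side node more flexible (one extra edge) raises the matching size from 1 to 2, the kind of marginal effect the flexibility incentives in the model trade off against cost.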

##### Zikai Xiong

**Improving the Geometry of Linear Optimization Problems for the Restarted PDHG Method**

The restarted primal-dual hybrid gradient (PDHG) method (with heuristic enhancements and GPU implementations) has shown significant success in solving huge-scale LP problems. Our recent research has shown that the performance of restarted PDHG depends on certain condition measures of the LP related to the “limiting error ratios” and the LP sharpness. These condition measures can take on extreme values and lead to poor performance (both in theory and in practice), which raises the question of whether these geometric condition measures can be improved by applying suitable transformations of the given LP instance. In this study, we present a new geometric measure based on the primal-dual sublevel set and show in particular how re-scaling can theoretically improve these condition measures and improve practical performance as well. Our theoretical development leads to guidelines for the practical implementation of preconditioning. Our experiments on LP relaxations of the MIPLIB 2017 dataset demonstrate the impact of re-scaling on the actual performance of PDHG. Finally, these results can also be extended to more general conic optimization problems.
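To make "re-scaling an LP instance" concrete, here is one standard diagonal equilibration heuristic (Ruiz scaling), shown purely as illustration and not necessarily the preconditioner the talk proposes: alternately scale the rows and columns of the constraint matrix toward unit max-norm.

```python
import math

def ruiz_rescale(A, iters=10):
    """Ruiz equilibration of a dense matrix (list of lists).

    Repeatedly divides each row and each column by the square root of its
    largest absolute entry, driving all row and column max-norms toward 1.
    Applied to an LP's constraint matrix, this is a diagonal change of
    variables that leaves the problem equivalent but better conditioned.
    """
    m, n = len(A), len(A[0])
    A = [row[:] for row in A]  # work on a copy
    for _ in range(iters):
        for i in range(m):
            s = max(abs(v) for v in A[i])
            if s > 0:
                r = 1.0 / math.sqrt(s)
                A[i] = [r * v for v in A[i]]
        for j in range(n):
            s = max(abs(A[i][j]) for i in range(m))
            if s > 0:
                c = 1.0 / math.sqrt(s)
                for i in range(m):
                    A[i][j] *= c
    return A
```

A matrix whose entries span four orders of magnitude comes out with every row and column max-norm close to 1, which is exactly the kind of geometric improvement at issue.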