We are delighted to have with us the following distinguished plenary speakers:

Ali Jadbabaie,
Alfred Fitler Moore Professor of Network Science,
University of Pennsylvania.

Learning and Coordination in Social Networks

Abstract. We present a model of social learning and information aggregation in which agents in a social network seek to learn the true state of the world from a stream of private signals and from exchanges of opinions (represented by beliefs) with their neighbors. Motivated by the practical difficulties of Bayesian belief updating in a network setting, we study a variant of the seminal model of DeGroot, in which agents linearly combine their personal experiences with the views of their neighbors. We examine how the structure of the social network and the quality of the information available to different agents determine the speed of social learning, and we show that a bound on the rate of learning has a simple analytical characterization in terms of the relative entropy of agents' signal structures and their eigenvector centralities, a measure of the importance of nodes in the network. This characterization establishes that the way information is dispersed throughout the social network has non-trivial implications for the rate of learning.

Next, we study a repeated game in which a group of players attempts to coordinate on a desired, but only partially known, outcome. Agents have an incentive to estimate the state correctly while trying to coordinate with and learn from others, much as in a Keynesian beauty contest. We will show that myopic but Bayesian agents who repeatedly play this game and observe the actions of their neighbors over a network satisfying a weak connectivity condition eventually succeed in coordinating on a single action. We also show that if the agents' private observations are not functions of the history of the game, those observations are optimally aggregated in the limit. (Joint work with Pooya Molavi and Alireza Tahbaz-Salehi)
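
To make the DeGroot-style update in the first part of the abstract concrete, here is a minimal sketch of such a learning loop: each agent Bayes-updates its own belief on a private signal and then linearly averages the result with its neighbors' beliefs. The network, weight matrix, and signal likelihoods below are illustrative assumptions, not parameters from the speakers' work.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_states = 4, 2
true_state = 0

# Row-stochastic weight matrix over a line network: A[i, j] is the weight
# agent i places on agent j's opinion (A[i, i] on its own).
A = np.array([[0.4, 0.6, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.6, 0.4]])

# Signal likelihoods L[i, s, k]: probability that agent i observes signal k
# when the true state is s. Only agent 0 gets informative signals.
L = np.full((n_agents, n_states, 2), 0.5)
L[0, 0] = [0.8, 0.2]
L[0, 1] = [0.2, 0.8]

beliefs = np.full((n_agents, n_states), 1.0 / n_states)  # uniform priors

for t in range(200):
    signals = [rng.choice(2, p=L[i, true_state]) for i in range(n_agents)]
    # Each agent performs a Bayesian update of its own belief on its signal...
    bayes = beliefs * np.stack([L[i, :, signals[i]] for i in range(n_agents)])
    bayes /= bayes.sum(axis=1, keepdims=True)
    # ...then linearly combines it with its neighbors' previous beliefs.
    D = np.diag(A.diagonal())
    beliefs = D @ bayes + (A - D) @ beliefs

print(beliefs.round(3))  # every row should now concentrate on state 0
```

With only agent 0 receiving informative signals, all agents still come to concentrate their beliefs on the true state; per the abstract, the rate of that concentration is governed by the relative entropy of the agents' signal structures weighted by their eigenvector centralities.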

Biography. Ali Jadbabaie received his Bachelor's degree with High Honors in Electrical Engineering (with a focus on Control Systems) from Sharif University of Technology, Tehran, Iran, in 1995. After working as a control engineer for a year, he moved to the US, where he received a Master's degree in Electrical and Computer Engineering from the University of New Mexico, Albuquerque, in 1997 and a Ph.D. in Control and Dynamical Systems from the California Institute of Technology (Caltech) in 2001. From July 2001 to July 2002 he was a postdoctoral scholar in the Department of Electrical Engineering at Yale University. Since July 2002 he has been at the University of Pennsylvania, Philadelphia, PA, where he is the Alfred Fitler Moore Professor of Network Science in the Department of Electrical and Systems Engineering, with secondary appointments in the Departments of Computer and Information Science and of Operations and Information Management (in the Wharton School). He is the director of the Raj and Neera Singh Program in Networked and Social Systems (NETS) at Penn Engineering, a new interdisciplinary undergraduate degree program that blends Network Science, Operations Research, Economics, and Computer Science with Information and Decision Systems. He is an IEEE Fellow and the recipient of an NSF CAREER Award, an ONR Young Investigator Award, the O. Hugo Schuck Best Paper Award of the American Automatic Control Council, and the George S. Axelby Outstanding Paper Award of the IEEE Control Systems Society. His current research interests are in multi-agent coordination and control, network science, and optimization, with applications to social and economic networks and collective robotics.

Radhika Nagpal,
Fred Kavli Professor of Computer Science,
Harvard University.

Taming the Swarm

Abstract. Biological systems, from cells to social insects, get tremendous mileage from the cooperation of vast numbers of cheap, unreliable, and limited individuals. What would it take to create our own artificial collectives of the scale and complexity that nature achieves? In this talk, I will discuss one of our recent and ongoing endeavors - the Kilobot project - a 1024 ("kilo") robot swarm testbed for studying collective intelligence. I will describe some of the challenges for building and programming robot swarms at this scale, and I will discuss how we have used the Kilobot swarm to study collective algorithms inspired by both engineering (e.g. coordinate system formation) and nature (collective transport, flocking, self-assembly). A central theme for our work is understanding the global-to-local relationship: how complex and robust collective behavior can be systematically achieved from large numbers of simple agents.
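
To give a flavor of the global-to-local theme, here is a minimal sketch of a hop-count gradient, one of the simplest local rules used in swarm tasks such as coordinate-system formation. The grid topology and the sequential update loop are illustrative assumptions, not the Kilobot implementation.

```python
import itertools

INF = float("inf")
side = 8  # an 8x8 grid of robots, each talking only to its 4 neighbors

gradient = {(x, y): INF for x, y in itertools.product(range(side), repeat=2)}
gradient[(0, 0)] = 0  # the seed robot holds gradient value 0

def neighbors(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (x + dx, y + dy) in gradient:
            yield (x + dx, y + dy)

changed = True
while changed:  # repeat the purely local rule until the field stabilizes
    changed = False
    for pos in gradient:
        if pos == (0, 0):
            continue
        best = 1 + min(gradient[n] for n in neighbors(*pos))
        if best < gradient[pos]:
            gradient[pos], changed = best, True

# A global distance field has emerged from local interactions only.
print(gradient[(side - 1, side - 1)])  # hop distance from the seed: 14
```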

Biography. Radhika Nagpal is the Fred Kavli Professor of Computer Science at Harvard University and a core faculty member of the Wyss Institute for Biologically Inspired Engineering. She received her PhD in Computer Science from MIT and spent a year as a research fellow at Harvard Medical School. At Harvard she leads the Self-Organizing Systems Research Group, and her research interests span computer science, robotics, and biology.

Anant Sahai,
Associate Professor of Electrical Engineering and Computer Science,
UC Berkeley.

Glimpses into an Information Theory for Control

Abstract. Control is an intellectual sibling to communication. Both are about removing uncertainty with limited resources --- communication by sharing something about the world, and control by shaping the world itself. While information theory has for decades provided insights into problems of communication, traditional approaches to control have made little use of information-theoretic techniques or ideas. Recently, we have found some surprising connections between wireless information theory and some central (and long-open) problems in decentralized control.

Just as most problems in multiterminal information theory are open, so are most problems in decentralized control. These problems resist all standard control-theoretic attacks based on linear systems theory and optimization: they seem to be hopelessly nonconvex in all but corner cases, and nonlinear control strategies are needed to achieve optimal performance. It turns out that the machinery of linear deterministic models, which has been so helpful in understanding problems of relaying and interference in communication, can be brought to bear on the fundamental limits of performance in decentralized control.
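
To give a flavor of that machinery, here is a minimal sketch of the bit-level abstraction behind linear deterministic models (in the style of the Avestimehr-Diggavi-Tse models from relaying and interference): a real-valued channel with gain 2^g and unit-level noise is approximated as a pipe that delivers only the top g bits of the input's binary expansion; everything below the noise floor is erased. The parameters below are illustrative assumptions.

```python
def to_bits(x: float, width: int) -> list:
    """Binary expansion of x in [0, 1), most significant bit first."""
    bits = []
    for _ in range(width):
        x *= 2
        bits.append(int(x))
        x -= int(x)
    return bits

def deterministic_channel(x: float, gain_bits: int, width: int = 8) -> list:
    """Deterministic abstraction of y = 2**gain_bits * x + noise:
    the receiver learns only the top gain_bits bits of x; the rest
    are lost below the (unit-level) noise floor."""
    return to_bits(x, width)[:gain_bits]

x = 0.8125  # binary 0.1101000...
print(to_bits(x, 8))                # [1, 1, 0, 1, 0, 0, 0, 0]
print(deterministic_channel(x, 3))  # receiver sees only [1, 1, 0]
```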

Approximately optimal strategies can be found, and control-theoretic counterparts to ideas like generalized degrees of freedom and cut-set bounds can be discovered. There are control/estimation counterparts to ideas like non-coherent communication channels. And even some classical results in decentralized LTI control theory can be reinterpreted as really being about network coding over the complex field. All this suggests that there is an entire parallel realm of information theory that connects to control problems --- just waiting to be explored. This talk will offer some glimpses into it.

Biography. Anant Sahai received his B.S. in 1994 from the University of California, Berkeley, and his S.M. and Ph.D. from MIT in 1996 and 2001, respectively. He is an associate professor in the EECS Department at Berkeley, which he joined as an assistant professor in 2002. Prior to that, he spent a year as a research scientist at the wireless startup Enuvis (a company founded by John Tsitsiklis and Ben Van Roy, another LIDS alum) in South San Francisco, developing software-radio signal-processing algorithms to enable very sensitive GPS receivers for indoor operation. From 2007 to 2009, he was the treasurer of the IEEE Information Theory Society. His current research interests are at the intersection of information theory and decentralized control, as well as in wireless communication, particularly dynamic spectrum sharing and its regulatory dimensions. He enjoys working very closely with his small group of graduate students on fun and deep problems. He usually teaches small, intimate courses, but this semester he is teaching a giant introductory course with hundreds of students.

Gerald Jay Sussman,
Panasonic Professor of Electrical Engineering,
Massachusetts Institute of Technology.

We Really Don't Know How to Compute!

Abstract. We have been building and programming computing machines for about 60 years (over 50 years for me!), and we have seen lots of progress. We can now compute absurdly fast on enormous data sets. We have learned a great deal about composition and abstraction, so we can conquer great complexity. But we have just begun to scratch the surface.

A quick look at the natural world shows that we have a lot to learn. A mammalian neuron takes about ten milliseconds to respond to a stimulus. Yet a human driver can respond to a visual stimulus in a few hundred milliseconds and decide on an action, such as making a sharp turn or hitting the brake. So the computational depth of this behavior is only a few tens of steps. Imagine the power if we could make such a machine out of electronics, which is vastly faster than neurons! But we don't know how to make such a machine, and we wouldn't know how to program it if we did.
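
As a back-of-the-envelope check of that depth estimate, using the numbers in the abstract:

```python
# Serial depth implied by the abstract's numbers: ~10 ms per neuron "step"
# and a stimulus-to-action loop of a few hundred milliseconds.
neuron_latency_ms = 10
reaction_time_ms = 300  # "a few hundred milliseconds"
print(reaction_time_ms // neuron_latency_ms)  # -> 30 sequential steps, at most
```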

As a further embarrassment, note that the human genome is about 1GB of information. So the information required to build a human from a single, undifferentiated eukaryotic cell is about 1GB -- a bit smaller than the information required to build a modern operating system. We see that the instructions to build a mammal are written in very dense code. But more importantly, the program is very flexible. Only small patches to the human genome are required to build a cow or a dog rather than a human. Bigger patches result in a frog or a snake. At this point we don't have the foggiest notion of how to make a description of a machine as complex as a mammal that is both dense and flexible, with lots of code reuse from earlier versions.

Our idea of system security is similarly archaic. We are continually attacked by mutating parasites, but we can survive for about four score and ten years. How can this possibly work, with only 1GB in the code base?

Our languages, and the principles that they are organized around, are woefully inadequate for capturing processes like these. We will need new design principles and new linguistic support for expressing our designs. I will address this issue and show some ideas that can perhaps get us to the next phase of engineering design.

(Shortened) Biography. Gerald Jay Sussman is the Panasonic (formerly Matsushita) Professor of Electrical Engineering at MIT. He received the S.B. and Ph.D. degrees in mathematics from MIT in 1968 and 1973, respectively. He has been involved in artificial intelligence research at MIT since 1964. His research has centered on understanding the problem-solving strategies used by scientists and engineers, with the goals of automating parts of the process and formalizing it to provide more effective methods of science and engineering education. Sussman has also worked in computer languages, computer architecture, and VLSI design.

Among his contributions, Sussman is a coauthor (with Hal Abelson and Julie Sussman) of "Structure and Interpretation of Computer Programs," the introductory computer science textbook used at MIT. As a result of this and other contributions to computer-science education, Sussman received the ACM's Karl Karlstrom Outstanding Educator Award in 1990 and the Amar G. Bose Award for Teaching in 1991. Over the past decade Sussman and Jack Wisdom have developed a subject that uses computational techniques to communicate a deeper understanding of advanced classical mechanics; with Meinhard Mayer, they produced a textbook, "Structure and Interpretation of Classical Mechanics," to capture these ideas. Sussman and Wisdom have also presented a functional programmer's view of differential geometry in their monograph "Functional Differential Geometry."

Sussman is a fellow of the Institute of Electrical and Electronics Engineers (IEEE). He is a member of the National Academy of Engineering (NAE), a fellow of the American Association for the Advancement of Science (AAAS), a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a fellow of the Association for Computing Machinery (ACM), a fellow of the American Academy of Arts and Sciences, and a fellow of the New York Academy of Sciences (NYAS). Sussman is a founding director of the Free Software Foundation. He has been a bonded locksmith. He is a life member of the American Watchmakers-Clockmakers Institute (AWI), a member of the Massachusetts Watchmakers-Clockmakers Association, a member of the Amateur Telescope Makers of Boston (ATMOB), and a member of the American Radio Relay League (ARRL).