Student Session on Communications



Rajat Talak Broadcast in Large Scale Mobile Wireless Networks

Broadcasting is a data network operation in which a node transmits packets to all other nodes in the network. Such operations arise in several applications, such as broadcasting state/control updates, alert messages, and live video streaming. We consider the problem of broadcasting in a mobile wireless network in which every node has to broadcast messages to all other nodes in the network. We obtain scaling results on throughput and delay as the number of nodes grows to infinity.

We consider a cell-partitioned network. Node mobility is modeled according to the independent identically distributed (IID) mobility model, i.e., in each time slot the location of a node is uniformly distributed across cells, independent of its past locations and of other nodes' locations.

We characterize the network capacity and the minimum delay. We show that a first-come-first-serve (FCFS) packet flooding scheme nearly achieves both the capacity and the minimum delay. This refutes the earlier speculation that there is a capacity-delay tradeoff for broadcast in mobile wireless networks, i.e., that no single scheme can achieve both the capacity and the minimum delay. Our analysis uses recent results on flooding time bounds for Markov evolving graphs (MEGs). We also provide some new flooding time bounds for MEGs.
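
For intuition, the flooding dynamics under this mobility model are easy to simulate. The following minimal Python sketch estimates the time for a single message to flood the network; the idealized intra-cell broadcast and all parameters are illustrative assumptions, not the scheme analyzed in the talk:

import random

def flood_time(n_nodes, n_cells, trials=100):
    """Average time for one message to flood all nodes under IID mobility.

    Each slot, every node lands in a uniformly random cell; any cell
    containing an informed node informs all nodes in it (an idealized
    intra-cell broadcast)."""
    total = 0
    for _ in range(trials):
        informed = {0}          # node 0 starts with the message
        t = 0
        while len(informed) < n_nodes:
            cells = [random.randrange(n_cells) for _ in range(n_nodes)]
            hot = {cells[i] for i in informed}   # cells holding an informed node
            informed |= {i for i in range(n_nodes) if cells[i] in hot}
            t += 1
        total += t
    return total / trials

print(flood_time(n_nodes=100, n_cells=50))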


Rahul Singh Decentralized Throughput Optimal Scheduling for Multihop Wireless Networks

We consider the problem of designing throughput-maximizing policies for multi-hop wireless networks in which data packets have to reach their destination nodes within a pre-specified deadline. The policies we obtain are "highly decentralized" in the sense that the routing-scheduling decisions are "packet-based": each node in the network can make decisions based on the state (age) of the packet. Our policies provide strict guarantees on delays, in contrast with existing policies such as CSMA.
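
The abstract does not spell out the policies; purely as an illustrative baseline in the same packet-based spirit (not the authors' policy), a node might schedule by packet age with an earliest-deadline-first rule, as in this toy sketch:

import heapq

class Node:
    """Toy per-node scheduler: drop expired packets, send most urgent first."""
    def __init__(self):
        self.queue = []  # min-heap keyed by absolute deadline

    def enqueue(self, packet_id, deadline):
        heapq.heappush(self.queue, (deadline, packet_id))

    def next_packet(self, now):
        # Discard packets that can no longer meet their deadline.
        while self.queue and self.queue[0][0] <= now:
            heapq.heappop(self.queue)
        # Transmit the packet with the earliest remaining deadline.
        return heapq.heappop(self.queue)[1] if self.queue else None

node = Node()
node.enqueue("p1", deadline=5)
node.enqueue("p2", deadline=3)
print(node.next_packet(now=0))  # -> "p2"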


Hajir Roozbehani Some results on non-binary $(\alpha,\beta)$-maps

A mapping of strings of $k$ symbols to strings of $n$ symbols is said to be $(\alpha,\beta)$ if it sends any two strings of relative distance more than $\alpha$ to two strings of relative distance more than $\beta$. In a communication mindset, one may view $\alpha$ as a measure of redundancy in the source of the data and $\beta$ as a measure of robustness to adversarial noise. This talk offers some new results on the $(\alpha,\beta)$ tradeoffs. The main technique used is the diametric theorem of Ahlswede and Khachatrian, which, when combined with standard converse bounds, gives sharp estimates for the extremal $(\alpha,\beta)$ pairs over large alphabets. In this regime, our results can be leveraged to construct good $(\alpha,\beta)$ maps out of good error-correcting codes.
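
For small alphabets and block lengths, the defining property can be verified by brute force. A minimal Python sketch, assuming relative Hamming distance and the strict inequalities stated above:

from itertools import product

def rel_dist(x, y):
    """Relative Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def is_alpha_beta(f, k, q, alpha, beta):
    """Check that f sends every pair of length-k strings over a q-ary
    alphabet at relative distance > alpha to images at relative
    distance > beta (brute force, exponential in k)."""
    strings = list(product(range(q), repeat=k))
    return all(rel_dist(f(x), f(y)) > beta
               for x, y in product(strings, strings)
               if rel_dist(x, y) > alpha)

# Example: a repetition map, which preserves relative distance exactly.
rep = lambda x: x * 3
print(is_alpha_beta(rep, k=4, q=2, alpha=0.5, beta=0.5))  # True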


Austin Collins TBD

TBD




Student Session on Learning and Optimization



Chong Yang Goh Contextual decision rule learning in convex problems: online and stochastic approaches

We consider convex optimization under uncertainty where, prior to decision-making, one observes a context vector ("features") that may be predictive of the objective function, which is convex but whose parametrization is unknown at the time of decision. Given past observations, the goal is to learn a decision rule or policy that maps a context vector to a decision. This setup encompasses a range of applications in machine learning and operations research, such as regression, online advertising, and revenue management. In this talk, we frame this problem under two settings: (i) online learning and (ii) statistical learning, and propose learning algorithms that are consistent according to learnability criteria appropriate to each setting. In the latter setting, we also characterize error bounds for the algorithms under several families of convex objective functions. Crucially, these algorithms and bounds are applicable to a general class of probability distributions, with no parametric assumptions imposed a priori on the set of optimal (Bayes) decision rules, i.e., the methods are based on nonparametric modeling.
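
As a hedged illustration of the online setting (not the algorithms proposed in the talk), the sketch below learns a linear decision rule by online gradient descent on revealed convex losses; the quadratic loss and all names are assumptions:

import numpy as np

rng = np.random.default_rng(0)
d, m, T, eta = 5, 3, 1000, 0.05
W = np.zeros((m, d))                 # linear decision rule: z = W @ x
W_star = rng.normal(size=(m, d))     # unknown "ideal" rule (for the demo)

total_loss = 0.0
for t in range(T):
    x = rng.normal(size=d)           # observe context ("features")
    z = W @ x                        # commit to a decision
    # Convex loss revealed after the decision (illustrative choice).
    grad = 2 * np.outer(z - W_star @ x, x)
    total_loss += np.sum((z - W_star @ x) ** 2)
    W -= eta * grad                  # online gradient step

print(f"average per-round loss: {total_loss / T:.4f}")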


Nirav Bhan Combating Inventory Inaccuracy using Transactions via Hidden Markov Models

Inventory inaccuracy is a major problem in retail. Stores often have erroneous inventory records, due to which shelves are not restocked on time. The resulting stockouts cause financial losses for the store as well as customer dissatisfaction. We provide a way to combat this inefficiency using hidden Markov models (HMMs). Our intuition is that the frequency of sales carries information about the inventory level. We formalize this by providing an HMM description of the retail process, in which sales are the observed variables and inventory is the hidden variable. We show that it is possible to learn all parameters of this HMM from transaction data alone, using estimators that are both statistically and computationally efficient. This yields the inventory level, as well as market parameters such as demand and restocking times. Our work also motivates the study of a new class of HMMs for which the hidden state is completely learnable.
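
To make the model concrete, here is a minimal generative sketch of such an HMM in Python; the Poisson demand, fixed shelf capacity, and Bernoulli restocking are illustrative assumptions, not the parametrization from the talk:

import numpy as np

rng = np.random.default_rng(1)
capacity, demand_rate, restock_prob, T = 20, 2.0, 0.1, 50

inventory = capacity
for t in range(T):
    demand = rng.poisson(demand_rate)   # latent customer demand
    sales = min(demand, inventory)      # observed: sales are censored by stock
    inventory -= sales                  # hidden state transition
    if rng.random() < restock_prob:
        inventory = capacity            # occasional restocking event
    print(t, sales)                     # only the sales stream is recorded

The censoring is the key feature: runs of zero sales become more likely when the hidden inventory is low, which is exactly the signal the estimators exploit.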


Aviv Adler The Traveling Salesman Problem for Dynamically-Constrained Vehicles

In the classic Traveling Salesman Problem (TSP), we are given a set of target points and wish to find the shortest path which visits all of them. This problem is embedded and essential in many planning problems that arise in robotics, particularly in the domains of exploration, monitoring, surveillance, and reconnaissance. We consider the important case of the stochastic TSP for kinematic vehicles, in which a collection of random points must be visited by a vehicle obeying dynamic constraints. This problem has been open for decades and had previously been solved only for a handful of specific vehicle models, such as the Dubins car. We solve this problem in general by providing tight (up to a constant factor) bounds on the length of the shortest tour, which hold with high probability as the number of points increases.


Dogyoon Song Blind Regression: Revisiting Collaborative Filtering

Classical supervised learning (or regression analysis) provides a framework for learning/estimating the relationships among variables. However, in many modern applications, especially with social data, we observe only the outputs (responses), while no meaningful information about the inputs other than their identities is captured. Yet we would still like to perform regression to learn the function governing behavior.

In this work, we revisit the classic Netflix challenge and point out the link between (memory-based) collaborative filtering and kernel regression, which shows that many collaborative filtering methods can be viewed as performing kernel regression with a specially designed distance proxy over the inputs. We suggest a novel approach for evaluating the performance of recommendation systems, based on concentration inequalities. Building on these results, we also propose how to design collaborative filtering methods, and show that we can obtain an algorithm that is asymptotically reliable under some conditions. In addition, we briefly summarize some experimental findings from the MovieLens and Netflix datasets.
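
The kernel-regression view admits a compact illustration: user-based collaborative filtering predicts a missing rating as a similarity-weighted (Nadaraya-Watson) average over other users, with similarity acting as the kernel. A minimal sketch, where cosine similarity over co-rated items is an assumed choice of distance proxy:

import numpy as np

def predict(R, mask, u, i):
    """Predict rating R[u, i] as a kernel-weighted average over users
    who rated item i. R: ratings matrix, mask: 1 where observed."""
    raters = np.where(mask[:, i] == 1)[0]
    weights = []
    for v in raters:
        common = (mask[u] == 1) & (mask[v] == 1)       # co-rated items
        if common.sum() == 0:
            weights.append(0.0)
            continue
        a, b = R[u, common], R[v, common]
        weights.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    w = np.array(weights)
    return (w @ R[raters, i]) / (np.abs(w).sum() + 1e-9)

R = np.array([[5, 4, 0], [4, 5, 3], [1, 2, 5.0]])
mask = (R > 0).astype(int)
print(predict(R, mask, u=0, i=2))   # estimate the missing entry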




Student Session on Networks



Martin Zubeldia Delay, memory, and messaging tradeoffs in distributed service systems

We consider the following distributed service model: jobs with unit-mean, exponentially distributed, and independent processing times arrive as a Poisson process of rate $\lambda N$, with $0<\lambda<1$, and are immediately dispatched to a queue associated with one of N identical servers. We assume that the dispatching decisions are made by a central dispatcher endowed with a finite memory, and with the capacity to exchange messages with the servers.

We study the fundamental limits on the amount of resources (memory bits and message exchange rate) needed in order to drive the expected queueing delay of a typical job to zero as N increases. We construct a policy which drives the delay to zero when either (i) the message rate grows superlinearly with N, or (ii) the memory grows superlogarithmically with N. The analysis of this policy uses the fluid model technique. Moreover, we show that any policy that has a certain symmetry property, and for which neither condition (i) nor (ii) holds, results in an expected queueing delay that is bounded away from zero.

Finally, using the fluid limit approach once more, we show that when our policy only utilizes a linear message rate ($\alpha N$) and a logarithmic memory, the limiting expected queueing delay (although positive) is uniformly upper bounded for all $\lambda<1$ and for any linear message rate (i.e., for any $\alpha>0$), no matter how small. This is a significant improvement over the popular 'power-of-d-choices' policy, whose limiting expected delay grows as $\log[1/(1-\lambda)]$ as $\lambda$ tends to 1.
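
For reference, the 'power-of-d-choices' policy mentioned above is easy to simulate. A minimal event-driven sketch (the uniformized dynamics and all parameters are illustrative):

import random

def power_of_d(n_servers=100, d=2, lam=0.9, n_jobs=50_000):
    """Join the shortest of d uniformly sampled queues; M/M/1-type
    dynamics simulated by randomly interleaving arrivals and departures."""
    queues = [0] * n_servers
    sampled_lengths = []
    arrivals = 0
    while arrivals < n_jobs:
        # P(next event is an arrival) = lam*N / (lam*N + #busy servers).
        busy = sum(q > 0 for q in queues) / n_servers
        if random.random() < lam / (lam + busy + 1e-12):
            choices = random.sample(range(n_servers), d)
            target = min(choices, key=lambda s: queues[s])
            sampled_lengths.append(queues[target])
            queues[target] += 1
            arrivals += 1
        else:
            busy_servers = [s for s in range(n_servers) if queues[s] > 0]
            if busy_servers:
                queues[random.choice(busy_servers)] -= 1
    return sum(sampled_lengths) / len(sampled_lengths)

print(power_of_d())   # mean queue length seen by an arriving job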


Qingkai Liang Coflow Scheduling in Switched Networks: Optimal Delay Scaling and Algorithms

A coflow is a collection of parallel flows belonging to the same job. It has the all-or-nothing property: a coflow is not complete until the completion of all its flows. In this work, we focus on optimizing coflow-level delay, i.e., the time to complete all the flows in a coflow, in the context of an N-by-N input-queued switch. In particular, we are interested in a scheduling policy that achieves the best scaling of coflow-level delay as N goes to infinity. We first develop lower bounds on the coflow-level delay that can be achieved by any scheduling policy. It is observed that these lower bounds critically depend on the variability of coflow sizes. Then we demonstrate that some coflow-agnostic scheduling policies (e.g., randomized scheduling) may lead to undesirable coflow-level performance, which highlights the importance of coflow-awareness. Finally, it is shown that the optimal coflow-level delay scaling is achievable with the Coflow-Aware Batching (CAB) policy under some mild assumptions.
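
For intuition about why coflow sizes matter, recall a standard per-coflow lower bound: in an N-by-N switch with unit-rate ports, a coflow with demand matrix D cannot complete before its most loaded input or output port finishes. A small sketch of this well-known bottleneck calculation (background only, not the CAB policy):

import numpy as np

def coflow_time_lower_bound(D):
    """D[i, j] = data the coflow must move from input i to output j.
    With unit-rate ports, completion time >= the maximum port load."""
    return max(D.sum(axis=1).max(),   # most loaded input port
               D.sum(axis=0).max())   # most loaded output port

D = np.array([[3, 1, 0],
              [0, 2, 2],
              [1, 0, 4]])
print(coflow_time_lower_bound(D))    # -> 6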


Mathieu Dahan Network Flow Routing under Strategic Link Disruptions

We consider a 2-player strategic game for network routing under link disruptions. Player 1 (the defender) routes flow through a network to maximize her value of effective flow while facing transportation costs. Player 2 (the attacker) simultaneously disrupts one or more links to maximize her value of lost flow, but also faces a cost of disrupting links. This game is strategically equivalent to a zero-sum game. Linear programming duality and the max-flow min-cut theorem are applied to obtain properties that are satisfied in any mixed Nash equilibrium. In any equilibrium, both players achieve identical payoffs. While the defender's expected transportation cost decreases in the attacker's marginal value of lost flow, the attacker's expected cost of attack increases in the defender's marginal value of effective flow. Interestingly, the expected amount of effective flow decreases in both of these parameters. These results can be viewed as a generalization of the classical max-flow minimum-transportation-cost problem to adversarial environments.


Zhenyu Liu Node Deployment for Network Localization

TBD



Student Session on Control




David Miculescu Polling-systems-based Control of High-performance Provably-safe Autonomous Intersections

The rapid development of autonomous vehicles has spurred a careful investigation of the potential benefits of all-autonomous transportation networks. Most studies conclude that autonomous systems can enable drastic improvements in performance. A widely studied concept is the all-autonomous, collision-free intersection, where vehicles arriving at a traffic intersection with no traffic light adjust their speeds to cross through the intersection as quickly as possible. In this talk, we propose a coordination control algorithm, assuming stochastic models for the arrival times of the vehicles. The proposed algorithm provides provable guarantees on safety and performance. More precisely, it is shown that no collisions can ever occur, and a rigorous upper bound is provided for the expected wait time. The algorithm is also demonstrated in simulations. The proposed algorithms are inspired by polling systems. In fact, the problem studied here leads to a new class of polling systems in which customers are subject to differential constraints, which may be interesting in its own right.


Valerio Varricchio TBD

TBD

Shreya Saxena Making Fast Movements - Considerations in Feedback Models

TBD


Beipeng Mu Inference on graphical models with dynamic nodes -- application in robotics

Factor graphs have gained great success and popularity in the simultaneous localization and mapping (SLAM) problem. They capture the sparsity of the geometric constraints between robot poses and landmarks, and have therefore greatly improved inference efficiency over traditionally used occupancy grid maps. However, landmarks are not always well labelled in reality, and inference on factor graphs with imperfect labels often leads to divergence or large errors. This work uses nonparametric models to jointly infer measurement labels and the maximum-likelihood solution of the factor graph.




Student Session on Probability and Learning Theory



Trevor Campbell Truncated Completely Random Measures

Bayesian nonparametrics (BNP) provides flexible models whose complexity can grow as more data are observed. For instance, we expect to see a greater diversity of topics as we read more documents from the New York Times, or a greater number of friend groups as we look at more individuals in a social network. This flexibility is due to the generation of a countably infinite collection of traits in the prior, which leads to an infinite-dimensional latent parameter that is difficult to simulate and infer. Finite truncation provides a simple option for simulation or posterior inference with, e.g., Markov chain Monte Carlo or variational methods, but induces some amount of model error. Past work on quantifying this error has been limited to model-specific results. This talk will discuss a new generalized truncation error bound and simulation algorithm for completely random measures, and provide comparison with existing results.
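
As a toy illustration of finite truncation, here is a sketch using stick-breaking for a Dirichlet process (a normalized relative of completely random measures, chosen for brevity rather than the CRM families treated in the talk); truncating at level K keeps only the first K atoms, and the unaccounted prior mass is the kind of error the talk bounds:

import numpy as np

rng = np.random.default_rng(2)

def truncated_stick_breaking(alpha, K):
    """First K stick-breaking weights of a DP(alpha) prior."""
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
    return v * remaining

w = truncated_stick_breaking(alpha=2.0, K=50)
print("truncated mass:", w.sum())              # < 1
print("mass lost to truncation:", 1 - w.sum())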


Qingqing Huang Recovering Structured Probability Matrices

We consider the problem of recovering a matrix B of size M × M, which represents a probability distribution over M^2 outcomes, given access to independent sample "counts" generated according to the distribution B. How can structural properties of the underlying matrix B be leveraged to yield computationally efficient and information theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime?

This basic problem lies at the core of a number of questions that are currently being considered by different communities, including community detection in sparse random graphs, learning structured probabilistic models such as topic models and hidden Markov models, as well as the efforts from the natural language processing community to compute "word embeddings". Many aspects of this problem -- both in terms of learning and property testing, and on both the algorithmic and information theoretic sides -- remain open.

Our results apply to the setting where the M × M probability matrix B is of rank 2. We propose an efficient algorithm that accurately recovers the underlying matrix using $\Theta(M)$ samples. The linear sample complexity is optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix, such as whether it has rank 1 or 2, requires $\Omega(M)$ samples.


Jennifer Tang Improving Speed and Reliability of SAR ADCs

TBD


Quan Li Finding a Large Sub-matrix of a Gaussian Random Matrix

The Iterative Search Procedure is a computationally efficient approach for finding large average sub-matrices of a real-valued matrix in the exploratory analysis of high-dimensional data. It alternately updates rows and columns until no further improvement can be obtained in the average of the $k\times k$ sub-matrix. However, there has been no theoretical understanding of whether this procedure converges to a global maximum. In this paper, we present the first theoretical analysis of the performance of the Iterative Search Procedure for finding a large average $k \times k$ sub-matrix of a Gaussian random $n \times n$ matrix. We show that the Iterative Search Procedure runs for $O(\log n)$ steps with high probability (w.h.p.) and converges to a local maximum sub-matrix w.h.p. More specifically, while the average of the global maximum sub-matrix is known to be $\sqrt{4 \log n / k}(1+o(1))$, the average of the $k\times k$ sub-matrix to which the Iterative Search Procedure converges is $\sqrt{2\log n / k}(1+o(1))$ w.h.p., leading to a constant factor gap.
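
A minimal numpy sketch of the alternating row/column updates described above (the random initialization and variable names are illustrative):

import numpy as np

def iterative_search(A, k, seed=0):
    """Alternately pick the k rows, then the k columns, with the largest
    sums over the current selection, until the average stops improving."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    cols = rng.choice(n, size=k, replace=False)
    best = -np.inf
    while True:
        rows = np.argsort(A[:, cols].sum(axis=1))[-k:]   # best k rows given cols
        cols = np.argsort(A[rows, :].sum(axis=0))[-k:]   # best k cols given rows
        avg = A[np.ix_(rows, cols)].mean()
        if avg <= best:                                  # no further improvement
            return rows, cols, avg
        best = avg

n, k = 200, 5
A = np.random.default_rng(3).normal(size=(n, n))
_, _, avg = iterative_search(A, k)
print(avg, np.sqrt(2 * np.log(n) / k))   # compare with the predicted value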



Poster Session

Amir Ajorlou Dynamic Pricing in Social Networks: The Word of Mouth Effect

We study the problem of optimal dynamic pricing for a monopolist selling a product to consumers in a social network. The only means of spreading information about the product is word-of-mouth communication. We show that, in line with real-world evidence from smartphone applications, the optimal dynamic pricing policy for durable products drops the price to zero infinitely often, giving away the immediate profit in full in order to expand the informed network and exploit it in the future.


Diego Cifuentes Sampling varieties for sum of squares programs

We study sum of squares (SOS) relaxations to decide the nonnegativity of a polynomial p on the set $V \cap \mathbb{R}^n$, where $V$ is a complex algebraic variety. We propose a new methodology that, rather than relying on some algebraic description, represents $V$ with a generic set of complex samples. This approach depends only on the geometry of $V$, avoiding representation issues such as multiplicity and choice of generators. It also takes advantage of the dependencies in the coordinate ring of $V$ to reduce the size of the corresponding semidefinite program (SDP). In addition, the input can be given as a straight-line program. Our methods are particularly appealing for varieties which are easy to sample from but for which the defining equations are complicated, such as $SO(n)$, Grassmannians, or rank-$k$ tensors. Nonetheless, for arbitrary varieties we can obtain the samples by using the tools of numerical algebraic geometry. In this way we connect the areas of SOS optimization and numerical algebraic geometry.


Qingqing Huang Super-Resolution Off the Grid

Super-resolution is the problem of recovering a superposition of point sources from bandlimited measurements, which may be corrupted with noise. Of particular interest are estimation procedures that are robust to noise and use a (quantifiably) small number of coarse Fourier measurements (bounded by some \emph{cutoff frequency}).

Suppose we have $k$ point sources in $d$ dimensions, where the points are separated by at least $\Delta$ from each other (in Euclidean distance). This work provides an algorithm with the following favorable guarantees: (1) the algorithm uses Fourier measurements whose frequencies are bounded by $O(1/\Delta)$ (up to log factors), whereas previous algorithms require a \emph{cutoff frequency} which may be as large as $\Omega(\sqrt{d}/\Delta)$; (2) both the number of measurements taken by the algorithm and its computational complexity are bounded by a polynomial in the number of points $k$ and the dimension $d$, with \emph{no} dependence on the separation $\Delta$. In contrast, previous algorithms depended inverse-polynomially on the minimal separation and exponentially on the dimension for both of these quantities.


Ian Schneider Endogenous Error Pricing for Energy Imbalance Settlements

This research models the behavior of energy market participants under an endogenous imbalance pricing mechanism. The results suggest that, in equilibrium, players choose strategies that minimize the expected value of their post-curtailment errors, which leads to better performance on metrics of system efficiency and risk than the optimal strategies that result from exogenous random penalties. We formulate a two-stage game in which players place ex-ante bids for energy production and then have the opportunity to curtail their realized output, and we find the equilibrium curtailment behavior and the equilibrium producer bids when information in the curtailment process is suitably restricted.


Sze Zheng Yong Robust & Secure Estimation of Smart Infrastructures

Smart infrastructures sustain the operation of many critical processes that modern society relies on, such as electric power grids and water distribution networks. Operational errors and external attacks on these critical infrastructures can have disastrous consequences. Hence, the goal of this project is to keep smart infrastructures secure and resilient to external attacks, as well as robust to modeling errors, to ensure uninterrupted service.

First, we design a resilient state estimator that addresses two previously unaddressed issues in secure estimation of infrastructure systems: (i) switching attacks (attacks on switching mechanisms), and (ii) the presence of stochastic process and measurement noises, in addition to the commonly studied (iii) actuator and sensor signal attacks. The proposed resilient estimator provides guarantees for obtaining asymptotically unbiased estimates. Then, we address the issue of robustness to modeling errors in state estimation under external attacks by showing the equivalence of robustification and regularized optimization, which enables the use of off-the-shelf convex optimization tools. The effectiveness of the estimators we developed is demonstrated in simulations of the smart grid using a three-area power station as well as an IEEE 14-bus power network.
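
As a toy illustration of the robustification-as-regularization idea (an assumed static formulation, not the estimator from the talk), one can estimate a state from measurements corrupted by a sparse sensor attack via an l1-regularized least-squares program, solvable with off-the-shelf convex tools such as cvxpy:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n_state, n_meas = 5, 30

H = rng.normal(size=(n_meas, n_state))
x_true = rng.normal(size=n_state)
attack = np.zeros(n_meas)                # sparse attack on a few sensors
attack[rng.choice(n_meas, size=3, replace=False)] = rng.normal(5, 1, 3)
y = H @ x_true + attack + 0.01 * rng.normal(size=n_meas)

x = cp.Variable(n_state)
a = cp.Variable(n_meas)                  # estimated attack vector
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - H @ x - a)
                              + 2.0 * cp.norm1(a)))
prob.solve()
print(np.linalg.norm(x.value - x_true))  # small despite corrupted sensors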