We live in a world characterized by massive information transfer and real-time communication. The demand for efficient yet low-complexity algorithms is widespread across different fields, including machine learning, signal processing, and communication networks. Most of the problems that we encounter across these disciplines involve a large number of modules interacting with each other. It is therefore natural to represent these interactions and the flow of information between the modules as a graph. The research in the I.I.D Systems group centers on the scaling laws and applications of “graph-based information processing”. We study the behaviour of large-scale networks (ranging from biomedical to social networks) as a function of their underlying parameters, and we develop efficient algorithms to solve fundamental tasks related to such systems.

Interactive Recommender Systems

Searching a database of objects by using comparisons has recently received considerable attention. Contrary to traditional databases, users do not submit queries that are subsequently matched to objects. Instead, at each step, the database presents two objects to the user, who then selects the one closer to the target she has in mind. This process continues until, based on the user’s answers, the database can identify the target. This kind of interactive navigation has numerous real-life applications, such as browsing a database of human pictures.
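
As a toy illustration of this navigation process (a hypothetical, simplified setting with objects embedded on the real line and a simulated user, not the scheme of the papers below), the database can discard, after each answer, every object that lies farther from the preferred object than from the rejected one:

```python
import random

def comparison_search(objects, oracle, seed=0):
    """Navigate to the user's target by repeatedly presenting two objects.
    `oracle(a, b)` simulates the user and returns whichever of a, b is
    closer to the target she has in mind. Objects are real numbers here."""
    rng = random.Random(seed)
    candidates = list(objects)
    current = rng.choice(candidates)
    while len(candidates) > 1:
        probe = rng.choice([o for o in candidates if o != current])
        winner = oracle(current, probe)
        loser = probe if winner == current else current
        # Keep only objects at least as close to the winner as to the loser;
        # the target always satisfies this, so it is never eliminated.
        candidates = [o for o in candidates
                      if abs(o - winner) <= abs(o - loser)]
        current = winner
    return candidates[0]
```

Since the user’s answer never eliminates the target, the candidate set shrinks after every question until only the target remains.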

From Small-World Networks to Comparison-Based Search

Amin Karbasi, Stratis Ioannidis, Laurent Massoulie

Comparison-Based Learning with Rank Nets

Amin Karbasi, Stratis Ioannidis, Laurent Massoulie

Content Search Through Comparisons

Amin Karbasi, Stratis Ioannidis, Laurent Massoulie

Hot or Not: Interactive Content Search Using Comparisons

Amin Karbasi, Stratis Ioannidis, Laurent Massoulie

Sparsity Pattern Recovery with Graph Constraints

Inverse problems, whose goal is to recover a signal from partial and noisy observations, come in many different formulations and arise in many applications. One important property of an inverse problem is that it should be well-posed, i.e., there should exist a unique and stable solution. In this regard, prior information about the solution, such as sparsity, can be used as a ‘‘regularizer’’ to transform an ill-posed problem into a well-posed one. In a series of works, we looked at a well-studied inverse problem in statistics, called group testing, which involves detecting a small set of defective items in a large population by grouping subsets of items into different pools. The result of each pool is a binary outcome indicating whether or not the pool contains a defective item. The main challenge is to minimize the number of pools required to identify all defective items. Inspired by applications in network monitoring and infection propagation, we considered the problem of group testing with graph constraints. As opposed to conventional group testing, where any subset of items can be grouped, here a test is admissible only if it induces a connected subgraph.
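
As a concrete toy model (hypothetical helper names; a simplified sketch, not the constructions of the papers below), admissible pools can be generated by random walks, so that every pool induces a connected subgraph, and decoding can use the simple rule that any item appearing in a negative pool is declared non-defective:

```python
import random

def random_walk_pool(adj, length, rng):
    """Form one admissible pool as the set of vertices visited by a random
    walk on the graph `adj`, so the pool induces a connected subgraph."""
    v = rng.choice(list(adj))
    pool = {v}
    for _ in range(length):
        v = rng.choice(adj[v])
        pool.add(v)
    return pool

def comp_decode(pools, outcomes, items):
    """Simple decoder: every item contained in at least one negative pool
    is certainly non-defective; whatever remains is the estimate."""
    definitely_clean = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            definitely_clean |= pool
    return set(items) - definitely_clean
```

This decoder never misses a defective item; the interesting question, studied in the papers below, is how many connected pools are needed before the remaining set contains nothing else.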

Graph-Constrained Group Testing

Mahdi Cheraghchi, Amin Karbasi, Soheil Mohajer, Venkatesh Saligrama

Group Testing with Probabilistic Tests: Theory, Design and Application

Mahdi Cheraghchi, Ali Hormati, Amin Karbasi, Martin Vetterli

Sequential Group Testing with Graph Constraints

Amin Karbasi, Morteza Zadimoghaddam

Compressed Sensing with Probabilistic Measurements: A Group Testing Solution

Mahdi Cheraghchi, Ali Hormati, Amin Karbasi, Martin Vetterli

Support Recovery in Compressed Sensing: An Estimation Theoretic Approach

Amin Karbasi, Ali Hormati, Soheil Mohajer, Martin Vetterli

Localization from Incomplete Information

Wireless sensor networks are currently employed in a variety of applications ranging from medical to military, and from home to industry. In many of these applications, estimating the position of each individual sensor is a basic requirement. This problem is particularly challenging when the positions must be inferred from local and limited information, such as proximity and local distance measurements.
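
A minimal sketch of the connectivity-only setting (a simplified classical-MDS illustration in the spirit of the MDS-MAP algorithm studied below, with hypothetical function names) first approximates pairwise distances by hop counts and then embeds the sensors with classical multidimensional scaling:

```python
import numpy as np
from collections import deque

def hop_distances(adj, n):
    """All-pairs hop counts via BFS; a crude proxy for Euclidean distance
    when only connectivity (which sensors hear each other) is known."""
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if D[s, v] == np.inf:
                    D[s, v] = D[s, u] + 1
                    q.append(v)
    return D

def mds_embed(D, dim=2):
    """Classical MDS: double-center the squared distances and keep the
    eigenvectors of the top `dim` eigenvalues as coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

The recovered coordinates are determined only up to a rigid motion; in practice, a few anchor nodes with known positions fix this ambiguity.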

Robust Localization from Incomplete Local Information

Amin Karbasi, Sewoong Oh

Distributed Sensor Network Localization from Local Connectivity: Performance Analysis for the HOP-TERRAIN Algorithm

Amin Karbasi, Sewoong Oh

Sensor Network Localization from Local Connectivity: Performance Analysis for the MDS-MAP Algorithm

Sewoong Oh, Amin Karbasi and Andrea Montanari

Learning in Neural Networks

The task of a neural associative memory is to retrieve a set of previously memorized patterns from their noisy versions by using a network of neurons. This problem is closely related in spirit to reliable information transmission, where the goal is to decode a set of transmitted patterns over a noisy channel. Despite the similarity and the common methods deployed in both fields, we witness a huge gap in terms of their efficiency. More specifically, the number of patterns that can be reliably transmitted over a noisy channel can be made exponential in n, the length of the patterns, by intelligently imposing redundancy among them. In contrast, the maximum number of patterns that can be reliably memorized by current neural networks scales only linearly in n. We observed that this drawback is due to the common assumption that a neural network should be able to memorize any subset of patterns drawn randomly from the set of all possible vectors of length n. To increase the storage capacity of neural networks beyond this linear scaling, we developed a new variant of the problem where only a suitable set of patterns is considered for storage, i.e., patterns of length n drawn randomly from a training set of dimension k < n. By devising a novel iterative algorithm that learns the redundancy among the patterns, we showed that the resulting neural network can memorize an exponential number of patterns. Moreover, by exploiting the local structure of the resulting neural network, we showed that the asymptotic error-correction performance increases from constant to linear in n.
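
The analogy with decoding can be made concrete with a schematic sketch (a simple bit-flipping decoder over binary parity checks; an illustration of recall through learned linear constraints, not the learning algorithm of the papers below). Memorized patterns satisfy a set of constraints, and recall flips the bit that violates the most of them:

```python
import numpy as np

def bit_flip_recall(H, noisy, max_iters=50):
    """Iteratively flip the bit touched by the most violated constraints.
    The rows of the binary matrix H are constraints that every memorized
    pattern satisfies (H @ pattern = 0 mod 2); recall exploits this
    redundancy to remove noise, much like a channel decoder."""
    x = noisy.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x              # all constraints satisfied: recalled
        # For each bit, count how many unsatisfied constraints involve it.
        votes = H[syndrome == 1].sum(axis=0)
        x[np.argmax(votes)] ^= 1
    return x
```

The local structure of the constraints (each involving few neurons) is what allows such iterative, neurally plausible correction to scale.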

Noise-Enhanced Associative Memories

Amin Karbasi, Amir Hessam Salavati, Amin Shokrollahi, Lav R. Varshney

Coupled Neural Associative Memories

Amin Karbasi, Amir Hessam Salavati, Amin Shokrollahi

Iterative Learning and Denoising in Convolutional Associative Memory Networks

Amin Karbasi, Amir Hessam Salavati, Amin Shokrollahi

Multi-Level Error-Resilient Neural Networks

Amir Hessam Salavati, Amin Karbasi

Discrete and Combinatorial Structures in Machine Learning and Data Mining

Submodularity is an intuitive diminishing-returns property, stating that adding an element to a smaller set helps more than adding it to a larger set. Similar to convexity, submodularity allows one to efficiently find provably near-optimal solutions, even for very large data sets. We study submodularity and its implications for challenging inference and learning problems.
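
For instance, maximizing a monotone submodular function under a cardinality constraint admits the classical greedy algorithm, which achieves a (1 - 1/e) approximation guarantee; a minimal sketch (not the distributed algorithm of the paper below):

```python
def greedy_submodular(ground, f, k):
    """Pick k elements, each time choosing the one with the largest
    marginal gain. For monotone submodular f this is a (1 - 1/e)
    approximation to the best size-k subset."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S
```

For example, with f measuring how many elements a family of sets covers, the greedy rule simply picks, at each step, the set covering the most new elements.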

Distributed Submodular Maximization: Identifying Representative Elements in Massive Data

Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause

Calibration Through Matrix Completion

Circular ultrasound tomography devices lie at the heart of a variety of inverse problems in signal processing, such as breast cancer screening, and require a calibration process when the sensor positions deviate from the circumference of a perfect circle. Given all the pairwise Time of Flight (ToF) measurements, one can easily estimate the sensor positions. In practice, however, we face two major sources of loss: one is due to the transient behaviour of the sensors, which makes the ToFs for close-by sensors unavailable, and the other is due to the random malfunctioning of sensors, which leads to randomly missing ToFs. To solve the problem under these uncertainties, we developed a novel calibration method based on structured matrix completion that first estimates the missing ToFs and then recovers the positions.
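
A simple surrogate for this idea (an alternating-projection sketch with hypothetical function names, not the structured algorithm of the papers below) exploits the fact that, with ToFs proportional to distances, the matrix of squared distances between sensors in the plane has rank at most four. Completion alternates between a low-rank projection and re-imposing the observed entries:

```python
import numpy as np

def complete_low_rank(M, mask, rank, iters=500):
    """Fill in the entries of M where mask is False by alternating between
    a rank-`rank` projection (via SVD truncation) and re-imposing the
    observed entries; a simple stand-in for structured matrix completion
    used to recover missing time-of-flight measurements."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = M[mask]          # keep the measured entries fixed
    return X
```

For sensors on a circle of known radius, the squared-distance matrix in fact has rank three, which is why the toy setting below recovers the band of entries missing for close-by sensors.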

Calibration Using Matrix Completion with Application to Ultrasound Tomography

Reza Parhizkar, Amin Karbasi, Sewoong Oh, Martin Vetterli

Low-rank Matrix Approximation Using Point-wise Operators

Arash Amini, Amin Karbasi and Farokh Marvasti

Calibration in Circular Ultrasound Tomography Devices

Reza Parhizkar, Amin Karbasi, Martin Vetterli

Ultrasound Tomography Calibration using Structured Matrix Completion

Amin Karbasi, Sewoong Oh, Reza Parhizkar, Martin Vetterli