Title: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings

Authors: Aryan Mokhtari, Hamed Hassani, Amin Karbasi
In this paper, we showcase the interplay between discrete and continuous optimization in network-structured settings. We propose the first fully decentralized optimization method for a wide class of non-convex objective functions that possess a diminishing-returns property. More specifically, given an arbitrary connected network and a global continuous submodular function, formed by a sum of local functions, we develop Decentralized Continuous Greedy (DCG), a message-passing algorithm that converges to a solution within the tight (1−1/e) approximation factor of the global optimum using only local computation and communication. We also provide strong convergence bounds as a function of network size and the spectral characteristics of the underlying topology. Interestingly, DCG readily provides a simple recipe for decentralized discrete submodular maximization via continuous relaxation. Formally, we demonstrate that by lifting the local discrete functions to continuous domains and using DCG as an interface, we can develop a consensus algorithm that also achieves the tight (1−1/e) approximation guarantee of the global discrete solution once a proper rounding scheme is applied.
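The abstract describes an algorithm that combines two ingredients: consensus averaging over the network (so nodes agree on an iterate and on an estimate of the global gradient) and continuous-greedy / Frank-Wolfe style updates (a linear maximization over the constraint set, followed by a step of size 1/T). Below is a minimal, illustrative sketch of that combination, not the authors' exact DCG: the ring topology, the toy monotone DR-submodular objectives f_i(x) = Σ_k c_ik(1 − e^{−x_k}), the cardinality-polytope constraint, and all function names are assumptions made for this example.

```python
import numpy as np

def ring_weights(n):
    """Doubly stochastic mixing matrix for a ring network:
    weight 1/2 on self, 1/4 on each neighbor (an illustrative choice)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] += 0.25
        W[i, (i + 1) % n] += 0.25
    return W

def grad_f(c, x):
    """Gradient of the toy local objective f_i(x) = sum_k c_k (1 - exp(-x_k)),
    a monotone continuous DR-submodular function on the box [0, 1]^d."""
    return c * np.exp(-x)

def lmo_topk(g, k):
    """Linear maximization over {v in [0,1]^d : sum(v) <= k}:
    the maximizer of <g, v> is the indicator of the top-k coordinates."""
    v = np.zeros_like(g)
    v[np.argsort(g)[-k:]] = 1.0
    return v

def dcg_sketch(C, budget=2, T=100):
    """Hypothetical decentralized continuous-greedy loop.
    C: (n, d) nonnegative coefficients, one row per node's local function.
    Each node mixes its iterate with neighbors, tracks the average gradient
    via gradient tracking, and takes a 1/T Frank-Wolfe step."""
    n, d = C.shape
    W = ring_weights(n)
    X = np.zeros((n, d))  # each node's iterate, started at the origin
    # Gradient-tracking variable: node i's running estimate of the
    # average of the local gradients across the network.
    G = np.array([grad_f(C[i], X[i]) for i in range(n)])
    for _ in range(T):
        X_mix = W @ X                                   # consensus on iterates
        V = np.array([lmo_topk(G[i], budget) for i in range(n)])
        X_new = X_mix + V / T                           # continuous-greedy step
        # Consensus on gradients plus the local gradient innovation.
        G = W @ G + np.array([grad_f(C[i], X_new[i]) - grad_f(C[i], X[i])
                              for i in range(n)])
        X = X_new
    return X
```

Because the mixing matrix is row-stochastic and each step adds at most 1/T per coordinate, every node's iterate stays in the box and satisfies the budget constraint, while the tracking variable G lets each node act on an estimate of the global (summed) gradient using only neighbor communication.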

Full Text: [PDF]

Inference, Information, and Decision Group at Yale