Title: Submodular maximization with cardinality constraints

Authors: N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz

Abstract

We consider the problem of maximizing a (non-monotone) submodular function subject to a cardinality constraint. In addition to capturing well-known combinatorial optimization problems, e.g., Max-k-Coverage and Max-Bisection, this problem has applications in other more practical settings such as natural language processing, information retrieval, and machine learning. In this work we present improved approximations for two variants of the cardinality constraint for non-monotone functions. When at most k elements can be chosen, we improve the current best 1/e − o(1) approximation to a factor that is in the range [1/e + 0.004, 1/2], achieving a tight approximation of 1/2 − o(1) for k = n/2 and breaking the 1/e barrier for all values of k. When exactly k elements must be chosen, our algorithms improve the current best 1/4 − o(1) approximation to a factor that is in the range [0.356, 1/2], again achieving a tight approximation of 1/2 − o(1) for k = n/2. Additionally, some of the algorithms we provide are very fast, with time complexities of O(nk), as opposed to previously known algorithms, which are continuous in nature and thus too slow for applications in the practical settings mentioned above. Our algorithms are based on two new techniques. First, we present a simple randomized greedy approach where in each step a random element is chosen from a set of “reasonably good” elements. This approach might be considered a natural substitute for the greedy algorithm of Nemhauser, Wolsey and Fisher [46], as it retains the same tight guarantee of 1 − 1/e for monotone objectives and the same time complexity of O(nk), while giving an approximation of 1/e for general non-monotone objectives (whereas the greedy algorithm of Nemhauser et al. fails to provide any constant guarantee). Second, we extend the double greedy technique, which achieves a tight 1/2 approximation for unconstrained submodular maximization, to the continuous setting. This allows us to manipulate the natural rates by which elements change, thus bounding the total number of elements chosen.
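The randomized greedy idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes access to a value oracle `f` for the submodular function, takes the k remaining elements with the largest marginal gains at each step (dropping non-positive gains, which play the role of the paper's zero-gain dummy elements), and adds one of them uniformly at random. The function names and the coverage-function example are hypothetical, chosen only for illustration.

```python
import random

def random_greedy(ground_set, f, k):
    """Sketch of a randomized greedy rule for max f(S) s.t. |S| <= k.

    f is a set-function value oracle (assumed submodular); each of the k
    steps costs O(n) oracle calls, for O(nk) total, as in the abstract.
    """
    S = set()
    remaining = set(ground_set)
    for _ in range(k):
        # Marginal gain of each remaining element with respect to S.
        gains = {e: f(S | {e}) - f(S) for e in remaining}
        # The (up to) k best candidates with positive gain; slots left
        # empty stand in for zero-gain "dummy" elements, in which case
        # the step may add nothing.
        top = sorted(remaining, key=lambda e: gains[e], reverse=True)[:k]
        top = [e for e in top if gains[e] > 0]
        choice = random.choice(top + [None] * (k - len(top)))
        if choice is not None:
            S.add(choice)
            remaining.remove(choice)
    return S

# Usage on a toy (monotone) coverage function: f(S) = size of the union
# of the subsets indexed by S.
subsets = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}, 4: {1}}
f = lambda S: len(set().union(*(subsets[e] for e in S))) if S else 0
solution = random_greedy(subsets.keys(), f, 2)
```

Because this coverage function is monotone, every step has a positive-gain candidate, so the sketch always returns a set of size exactly 2 here; for non-monotone objectives the dummy slots matter and fewer than k elements may be chosen.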

