# Abstract

In this paper, we consider an online optimization process where the objective functions are not convex (nor concave) but instead belong to a broad class of continuous submodular functions. We first propose a variant of the Frank-Wolfe algorithm that has access to the full gradient of the objective functions. We show that it achieves a regret bound of $O(\sqrt{T})$ (where $T$ is the horizon of the online optimization problem) against a $(1 - 1/e)$-approximation to the best feasible solution in hindsight. However, in many scenarios only an unbiased estimate of the gradients is available. For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves a regret bound of $O(\sqrt{T})$, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight. We also generalize our results to $\gamma$-weakly submodular functions and prove the same sublinear regret bounds. Finally, we demonstrate the efficiency of our algorithms on a few problem instances, including non-convex/non-concave quadratic programs, multilinear extensions of submodular set functions, and D-optimal design.
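The second algorithm mentioned above — online stochastic gradient ascent with projection onto the feasible set — can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function names (`online_gradient_ascent`, `grads`, `project`), the step-size constant, and the toy quadratic instance are all assumptions chosen for the example; the box constraint and the non-positive off-diagonal Hessian (which makes the quadratic continuous DR-submodular) stand in for the paper's general setting.

```python
import numpy as np

def online_gradient_ascent(grads, project, x0, T, eta0=0.1):
    """Sketch of online (stochastic) gradient ascent:
    at round t, play x_t, receive a (possibly noisy) gradient
    of f_t at x_t, and take a projected ascent step with a
    diminishing step size proportional to 1/sqrt(t)."""
    x = x0.copy()
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        g = grads(t, x)                          # unbiased gradient estimate of f_t at x_t
        x = project(x + eta0 / np.sqrt(t) * g)   # projected ascent step
    return plays

# Toy instance (hypothetical): non-concave quadratics
# f_t(x) = 0.5 x^T H x + h_t^T x with non-positive off-diagonal H,
# a standard continuous DR-submodular example; feasible set is [0, 1]^n.
rng = np.random.default_rng(0)
n = 5
H = -np.abs(rng.normal(size=(n, n)))
H = (H + H.T) / 2
np.fill_diagonal(H, 0.0)

def grads(t, x):
    h_t = rng.uniform(1.0, 2.0, size=n)  # per-round linear term stands in for the adversary
    return H @ x + h_t

project = lambda x: np.clip(x, 0.0, 1.0)  # Euclidean projection onto the box
plays = online_gradient_ascent(grads, project, np.zeros(n), T=100)
```

The Frank-Wolfe variant differs in that each step moves toward a linear-maximizer of the gradient over the feasible set rather than projecting, which is what yields the stronger $(1-1/e)$ guarantee under full gradient access.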

