Dynamic Regret Bounds for Online Nonconvex Optimization

Publication Date: September 30, 2022

Julie Mulvaney-Kemp, SangWoo Park, Ming Jin, and Javad Lavaei, "Dynamic Regret Bounds for Online Nonconvex Optimization," to appear in IEEE Transactions on Control of Network Systems, 2022.


Abstract

Online optimization problems are well understood in the convex case, where algorithmic performance is typically measured relative to the best fixed decision. In this paper, we shed light on online nonconvex optimization problems in which algorithms are evaluated against the optimal decision at each time, using the more useful notion of dynamic regret. The focus is on loss functions that are arbitrarily nonconvex but have global solutions that are slowly time-varying. We address this problem by first analyzing the region around the global solution at each time in order to define time-varying target sets, which contain the global solution and exhibit desirable properties under the projected gradient descent algorithm. All points in a target set satisfy the proximal Polyak-Łojasiewicz inequality, among other conditions. We then introduce two algorithms and prove that the dynamic regret of each is bounded by a function of the temporal variation in the optimal decision. The first algorithm assumes that the decision maker has some prior knowledge about the initial objective function and may query the gradient repeatedly at each time; it ensures that decisions remain within the target set at every time. The second algorithm makes no assumption about prior knowledge and instead relies on random sampling and memory to find and then track the target sets over time. In this case, the landscape of the loss functions determines the likelihood that the dynamic regret will be small. Numerical experiments validate these theoretical results and highlight the impact of a single low-complexity problem early in the sequence.
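For reference, dynamic regret is the standard cumulative gap between the algorithm's losses and the per-time optimal losses, and the Polyak-Łojasiewicz (PL) inequality is the standard gradient-domination condition whose proximal variant, cited in the abstract, generalizes it to the constrained and composite settings handled by projected gradient descent. The notation below (f_t, x_t, mu) is generic and not necessarily the paper's:

```latex
% Dynamic regret against the per-time global optimum x_t^*:
\mathrm{Reg}_d(T) \;=\; \sum_{t=1}^{T} \big( f_t(x_t) - f_t(x_t^*) \big),
\qquad x_t^* \in \arg\min_{x \in \mathcal{X}} f_t(x).

% Standard (unconstrained) PL inequality with parameter \mu > 0; the
% proximal PL inequality extends this condition to constrained problems:
\tfrac{1}{2}\,\| \nabla f_t(x) \|^2 \;\ge\; \mu \big( f_t(x) - f_t(x_t^*) \big).
```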
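The following is a minimal runnable sketch, in the spirit of the first algorithm, of tracking a slowly drifting nonconvex loss with projected gradient descent and accumulating dynamic regret. Everything here is an illustrative assumption rather than the paper's setup: the synthetic one-dimensional loss, the box feasible set, the known warm start (standing in for the prior-knowledge assumption), and the hand-picked step size and per-time query budget.

```python
import numpy as np

def project(x, lo=-5.0, hi=5.0):
    """Euclidean projection onto the feasible set X = [lo, hi]."""
    return np.clip(x, lo, hi)

def loss(x, c):
    """Synthetic nonconvex loss f_t; its global minimizer drifts with c."""
    return (x - c) ** 2 + 0.5 * np.sin(3.0 * (x - c))

def grad(x, c):
    return 2.0 * (x - c) + 1.5 * np.cos(3.0 * (x - c))

T, K, eta = 50, 5, 0.1           # horizon, gradient queries per time, step size
centers = 0.02 * np.arange(T)    # slow drift, i.e., small temporal variation
grid = np.linspace(-5.0, 5.0, 2001)

x = project(-0.36)               # warm start near the initial global optimum
dynamic_regret = 0.0
for t in range(T):
    c = centers[t]
    for _ in range(K):           # repeated gradient queries at time t
        x = project(x - eta * grad(x, c))
    # Per-time global optimum, found by grid search; it is computable here
    # only because the example is synthetic and one-dimensional.
    x_star = grid[np.argmin(loss(grid, c))]
    dynamic_regret += loss(x, c) - loss(x_star, c)

print(f"Dynamic regret over T={T} rounds: {dynamic_regret:.4f}")
```

Because the drift between consecutive optima is small relative to the progress made by the K projected gradient steps, the iterate stays near the moving optimum and the accumulated regret scales with the total temporal variation rather than with T, which is the qualitative behavior the paper's bounds formalize.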