Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness

Open Access
Authors
Publication date 2023
Host editors
  • S. Koyejo
  • S. Mohamed
  • A. Agarwal
  • D. Belgrave
  • K. Cho
  • A. Oh
Book title 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Book subtitle New Orleans, Louisiana, USA, 28 November-9 December 2022
ISBN
  • 9781713871088
ISBN (electronic)
  • 9781713873129
Series Advances in Neural Information Processing Systems
Event 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Volume | Issue number 2
Pages (from-to) 691-702
Number of pages 12
Publisher San Diego, CA: Neural Information Processing Systems Foundation
Organisations
  • Faculty of Science (FNWI) - Korteweg-de Vries Institute for Mathematics (KdVI)
Abstract

Stochastic and adversarial data are two widely studied settings in online learning. However, many optimization tasks are neither i.i.d. nor fully adversarial, which makes it of fundamental interest to gain a better theoretical understanding of the world between these two extremes. In this work, we establish novel regret bounds for online convex optimization in a setting that interpolates between stochastic i.i.d. and fully adversarial losses. By exploiting the smoothness of the expected losses, these bounds replace a dependence on the maximum gradient length by the variance of the gradients, which was previously known only for linear losses. In addition, they weaken the i.i.d. assumption by allowing, for example, adversarially poisoned rounds, which were previously considered in the expert and bandit settings. Our results extend this to the online convex optimization framework. In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret. We further provide lower bounds showing that our regret upper bounds are tight for all intermediate regimes in terms of the stochastic variance and the adversarial variation of the loss gradients.
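To make the setting concrete, here is a minimal toy sketch (not the paper's algorithm, and all parameters are invented for illustration): plain online gradient descent on a one-dimensional quadratic loss stream that is mostly i.i.d. but contains a small fraction of adversarially poisoned rounds, with regret measured against the best fixed point in hindsight.

```python
# Toy sketch of the interpolation regime described in the abstract:
# an i.i.d. loss stream with a few adversarially poisoned rounds.
# This is NOT the paper's method; the losses, poison rate, and step
# sizes are illustrative assumptions only.
import random

random.seed(0)
T = 500
D = 1.0                      # decision set is the interval [-D, D]

# Loss at round t is f_t(x) = (x - z_t)^2.
# Most targets z_t are i.i.d. around 0.3; ~5% of rounds are poisoned.
zs = []
for t in range(T):
    if random.random() < 0.05:          # adversarially poisoned round
        zs.append(-1.0)
    else:                               # stochastic i.i.d. round
        zs.append(0.3 + random.uniform(-0.2, 0.2))

# Online gradient descent with step size 1/(2t), the standard choice
# for 2-strongly-convex losses; the gradient of f_t is 2 * (x - z_t).
x = 0.0
algo_loss = 0.0
iterates = []
for t, z in enumerate(zs, start=1):
    iterates.append(x)
    algo_loss += (x - z) ** 2
    x -= (1.0 / (2 * t)) * 2 * (x - z)  # gradient step
    x = max(-D, min(D, x))              # project back onto [-D, D]

# Regret against the best fixed comparator in hindsight, which for
# these quadratics is the (clipped) empirical mean of the targets.
x_star = max(-D, min(D, sum(zs) / T))
best_fixed_loss = sum((x_star - z) ** 2 for z in zs)
regret = algo_loss - best_fixed_loss
print(f"regret after {T} rounds: {regret:.3f}")
```

On mostly i.i.d. rounds the iterates concentrate quickly and the cumulative regret stays far below linear growth in T; increasing the poison rate toward 1 moves the stream toward the fully adversarial regime the paper's bounds degrade gracefully to.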

Document type Conference contribution
Note With supplemental file
Language English
Published at https://papers.nips.cc/paper_files/paper/2022/hash/047aa59e51e3ac7a2422a55468feefd5-Abstract-Conference.html
Other links https://www.scopus.com/pages/publications/85148757829