Robust Online Convex Optimization in the Presence of Outliers

Open Access
Authors
Publication date: 2021
Journal: Proceedings of Machine Learning Research
Event: 34th Conference on Learning Theory
Volume: 134
Pages: 4174-4194
Organisations:
  • Faculty of Science (FNWI) - Korteweg-de Vries Institute for Mathematics (KdVI)
Abstract
We consider online convex optimization when a number k of data points are outliers that may be corrupted. We model this by introducing the notion of robust regret, which measures the regret only on rounds that are not outliers. The aim for the learner is to achieve small robust regret, without knowing where the outliers are. If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds. We further ask which additional assumptions would allow for a linear number of outliers. It turns out that the usual benign cases of independent, identically distributed (i.i.d.) observations or strongly convex losses are not sufficient. However, combining i.i.d. observations with the assumption that outliers are those observations that fall in an extreme quantile of the distribution does lead to sublinear robust regret, even though the expected number of outliers is linear.
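The abstract's two central ingredients, robust regret and filtering on extreme gradients, can be made concrete with a small sketch. The Python snippet below is not the paper's algorithm and does not reproduce its bounds; it is a minimal illustration under assumed choices (online gradient descent on the Euclidean unit ball, a hypothetical gradient-norm threshold grad_threshold, linear losses, and arbitrarily placed corrupted rounds) of a learner that skips updates on rounds with extreme gradients, together with regret measured only on the non-outlier rounds.

```python
# Illustrative sketch only. The paper describes "a simple filtering strategy on
# extreme gradients" at a high level; the concrete threshold rule, step size,
# projection, losses, and outlier model below are hypothetical choices for this
# example and are not taken from the paper.
import numpy as np

def filtered_ogd(gradients, domain_radius=1.0, grad_threshold=1.0, lr=0.1):
    """Online gradient descent on the Euclidean ball that skips the update on
    rounds whose gradient norm exceeds grad_threshold (suspected outliers)."""
    w = np.zeros(gradients.shape[1])
    iterates = []
    for g in gradients:
        iterates.append(w.copy())          # play w before the loss is revealed
        if np.linalg.norm(g) <= grad_threshold:
            w = w - lr * g
            norm = np.linalg.norm(w)
            if norm > domain_radius:       # project back onto the ball
                w = w * (domain_radius / norm)
    return np.array(iterates)

def robust_regret(losses, iterates, comparator, outlier_rounds):
    """Regret summed only over rounds that are not outliers."""
    return sum(loss(iterates[t]) - loss(comparator)
               for t, loss in enumerate(losses) if t not in outlier_rounds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 100, 2
    grads = 0.3 * rng.normal(size=(T, d))  # linear losses f_t(w) = <g_t, w>
    outliers = {5, 17, 42}                 # hypothetical corrupted rounds
    for t in outliers:
        grads[t] *= 50.0                   # inject extreme gradients
    iterates = filtered_ogd(grads, grad_threshold=2.0)
    losses = [lambda w, g=g: float(g @ w) for g in grads]
    G = sum(g for t, g in enumerate(grads) if t not in outliers)
    comparator = -G / np.linalg.norm(G)    # best fixed point in the unit ball for inlier rounds
    print("robust regret:", robust_regret(losses, iterates, comparator, outliers))
```

In this toy run the injected extreme-gradient rounds are skipped by the threshold rule and excluded from the regret sum, which is the kind of behaviour the robust regret notion is meant to capture; the paper's actual algorithm, thresholds, and guarantees may differ.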
Document type: Article
Note: Proceedings of the Thirty-Fourth Conference on Learning Theory, 15-19 August 2021, Boulder, Colorado, USA
Language: English
Published at: https://proceedings.mlr.press/v134/vanerven21a.html