Causal Bandits without prior knowledge using separating sets

Open Access
Authors
Publication date 2022
Journal Proceedings of Machine Learning Research
Event 1st Conference on Causal Learning and Reasoning
Volume 177
Pages (from-to) 407-427
Number of pages 21
Organisations
  • Faculty of Science (FNWI) - Korteweg-de Vries Institute for Mathematics (KdVI)
Abstract
The Causal Bandit is a variant of the classic Bandit problem in which an agent must identify the best action in a sequential decision-making process, where the reward distributions of the actions display a non-trivial dependence structure governed by a causal model. All methods proposed for this problem in the literature thus far rely on exact prior knowledge of the full causal graph. We formulate new causal bandit algorithms that no longer necessarily rely on prior causal knowledge. Instead, they utilize an estimator based on separating sets, which can be found using simple conditional independence tests or causal discovery methods. We show that, for discrete i.i.d. data, this estimator is unbiased and has a variance that is upper bounded by that of the sample mean. We develop algorithms based on Thompson Sampling and UCB for discrete and Gaussian models, respectively, and show improved performance on simulated data as well as on a bandit drawing from real-world protein-signaling data.
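The abstract does not spell out the estimator's formula, but a minimal sketch can illustrate the general idea of estimating an interventional mean through a separating set. The sketch below assumes discrete data and a variable Z that renders the reward Y independent of the action A given Z, so that E[Y | do(a)] can be written as a sum over z of P(z | a) · E[Y | z]; the key point is that E[Y | z] can be estimated by pooling samples from all arms, not only from pulls of arm a. All function names here are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def separating_set_estimate(actions, z, y, arm):
    """Hypothetical separating-set estimator of E[Y | do(A=arm)].

    Assumes Z is a valid separating set, i.e. Y is independent of A
    given Z, so E[Y | do(a)] = sum_z P(z | a) * E[Y | z].
    """
    actions = np.asarray(actions)
    z = np.asarray(z)
    y = np.asarray(y)
    mask_a = actions == arm
    est = 0.0
    for zv in np.unique(z):
        # P(z | a): estimated from pulls of this arm only.
        p_z_given_a = np.mean(z[mask_a] == zv)
        if p_z_given_a == 0.0:
            continue
        # E[Y | z]: estimated by pooling samples from ALL arms.
        e_y_given_z = np.mean(y[z == zv])
        est += p_z_given_a * e_y_given_z
    return float(est)

def sample_mean_estimate(actions, y, arm):
    """Baseline: plain sample mean of rewards observed for `arm`."""
    actions = np.asarray(actions)
    y = np.asarray(y)
    return float(np.mean(y[actions == arm]))
```

Because the conditional means E[Y | z] borrow strength from every arm's samples, such an estimator can have lower variance than the per-arm sample mean, which is consistent with the variance bound stated in the abstract.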
Document type Article
Note Proceedings of the First Conference on Causal Learning and Reasoning, 11-13 April 2022, Sequoia Conference Center, Eureka, CA, USA
Language English
Published at https://proceedings.mlr.press/v177/kroon22a.html