CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement Learning

Open Access
Authors
Publication date 2024
Book title AAMAS '24
Book subtitle Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems: May 6-10, 2024, Auckland, New Zealand
ISBN (electronic)
  • 9798400704864
Event 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024
Pages (from-to) 1664-1672
Publisher Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Causal discovery is the challenging task of inferring causal structure from data. Motivated by Pearl's Causal Hierarchy (PCH), which tells us that passive observations alone are not enough to distinguish correlation from causation, there has been a recent push to incorporate interventions into machine learning research. Reinforcement learning provides a convenient framework for such an active approach to learning. This paper presents CORE, a deep reinforcement learning-based approach for causal discovery and intervention planning. CORE learns to sequentially reconstruct causal graphs from data while learning to perform informative interventions. Our results demonstrate that CORE generalizes to unseen graphs and efficiently uncovers causal structures. Furthermore, CORE scales to larger graphs with up to 10 variables and outperforms existing approaches in structure estimation accuracy and sample efficiency. All relevant code and supplementary material can be found at https://github.com/sa-and/CORE.
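To make the abstract's idea of "performing informative interventions to reconstruct causal structure" concrete, here is a minimal, self-contained sketch. It is not the paper's method (CORE uses deep reinforcement learning; see the linked repository): this toy instead probes a hand-written deterministic structural causal model with do-interventions and records which variables respond, recovering ancestral (descendant) relations. All function names and the example SCM are hypothetical, introduced only for illustration.

```python
def sample(do=None):
    # Toy deterministic SCM over three variables: X0 -> X1 -> X2.
    # `do` maps a variable index to a forced (intervened) value.
    do = do or {}
    x = {}
    x[0] = do.get(0, 1.0)            # exogenous root
    x[1] = do.get(1, 2.0 * x[0])     # X1 depends on X0
    x[2] = do.get(2, x[1] + 1.0)     # X2 depends on X1
    return x

def infer_descendants(n=3):
    # For each variable i, compare two interventions do(Xi=0) and do(Xi=1);
    # any variable whose value changes must be a causal descendant of Xi.
    desc = {i: set() for i in range(n)}
    for i in range(n):
        a = sample(do={i: 0.0})
        b = sample(do={i: 1.0})
        for j in range(n):
            if j != i and a[j] != b[j]:
                desc[i].add(j)
    return desc

if __name__ == "__main__":
    print(infer_descendants())
```

Note that this probe only identifies ancestor/descendant relations, not direct edges; distinguishing direct edges from indirect paths (and doing so sample-efficiently under noise) is exactly the harder planning problem the paper's RL agent is trained to solve.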
Document type Conference contribution
Language English
Published at
  • https://www.ifaamas.org/Proceedings/aamas2024/pdfs/p1664.pdf
  • https://dl.acm.org/doi/10.5555/3635637.3663027
Other links
  • https://github.com/sa-and/CORE