- Title: Learning sparse causal models is not NP-hard
- Conference: Conference on Uncertainty in Artificial Intelligence; 29 (Bellevue, Wash.)
- Book/source title: Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Ninth Conference (2013), July 12-14, 2013, Bellevue, Washington, United States
- Publisher: Corvallis, Oregon: AUAI Press
- Document type: Conference contribution
- Affiliations: Faculty of Science (FNWI), Informatics Institute (IVI)
This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs with node degree bounded by k, the sound and complete causal model can be obtained in worst case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for versions that work on sample/real-world data. The result does not contradict any known hardness results, and does not solve an NP-hard problem: it simply proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks.
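To make the oracle-based setting concrete, here is a minimal sketch of the adjacency (skeleton) search phase shared by PC/FCI-style algorithms, restricted to conditioning sets of size at most k. This is an illustrative sketch only, not the paper's actual FCI modification; the function and variable names (`skeleton_search`, `ci_oracle`, `max_degree`) are assumptions for the example.

```python
from itertools import combinations

def skeleton_search(nodes, ci_oracle, max_degree):
    """Illustrative PC/FCI-style skeleton search against an independence oracle.

    Starts from the complete undirected graph and removes an edge x-y as soon
    as some conditioning set S (drawn from x's current neighbours, with
    |S| <= max_degree) renders x and y independent according to the oracle.
    The number of oracle calls in this phase is what degree-bounded sparsity
    keeps polynomial.
    """
    adj = {x: set(nodes) - {x} for x in nodes}
    sepset = {}
    for size in range(max_degree + 1):
        for x in sorted(nodes):
            # sorted() snapshots the neighbour set, so removal during
            # iteration is safe
            for y in sorted(adj[x]):
                for s in combinations(sorted(adj[x] - {y}), size):
                    if ci_oracle(x, y, set(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(s)
                        break
    return adj, sepset

# Hypothetical oracle encoding the d-separations of the chain A -> B -> C:
# A and C are independent given B, and nothing else is independent.
def chain_oracle(x, y, s):
    return {x, y} == {"A", "C"} and "B" in s

adj, sepset = skeleton_search(["A", "B", "C"], chain_oracle, max_degree=1)
# Recovers the skeleton A - B - C with A and C non-adjacent.
```

With a perfect oracle this recovers the correct skeleton; on finite samples the oracle is replaced by a statistical conditional-independence test, which is where the paper's suggested practical improvements come in.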