- Supplement - Learning Sparse Causal Models is not NP-hard
- Ithaca, NY: arXiv.org
- Faculty of Science (FNWI)
- Informatics Institute (IVI)
This article contains detailed proofs and additional examples related to the UAI-2013 submission 'Learning Sparse Causal Models is not NP-hard'. It describes the FCI+ algorithm: a method for sound and complete causal model discovery in the presence of latent confounders and/or selection bias, with worst-case polynomial complexity of order N^(2(k+1)) in the number of independence tests, for sparse graphs over N nodes with node degree bounded by k. The algorithm is an adaptation of the well-known FCI algorithm of Spirtes et al. (2000), which is also sound and complete but has worst-case complexity exponential in N.
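The abstract's complexity claim can be made concrete with a small numeric comparison. The sketch below is purely illustrative, not part of FCI+: it evaluates the stated polynomial bound N^(2(k+1)) against a generic 2^N-type exponential worst case (the exact exponential bound for FCI differs), showing that the polynomial bound wins out as N grows.

```python
# Illustrative comparison of the worst-case independence-test bounds
# mentioned in the abstract. The function names here are hypothetical
# helpers, not part of the FCI+ algorithm itself.

def fci_plus_test_bound(n: int, k: int) -> int:
    """Polynomial worst-case bound N^(2(k+1)) for FCI+ on sparse graphs
    with N nodes and node degree bounded by k."""
    return n ** (2 * (k + 1))

def exponential_bound(n: int) -> int:
    """A generic 2^N worst case, standing in for exponential growth in
    the number of nodes (the exact bound for FCI is different)."""
    return 2 ** n

if __name__ == "__main__":
    k = 3  # assumed sparsity: node degree bounded by 3
    for n in (10, 50, 100):
        poly = fci_plus_test_bound(n, k)
        expo = exponential_bound(n)
        print(f"N={n}: polynomial bound {poly:.3e}, exponential {expo:.3e}")
```

For small N the polynomial bound can exceed the exponential one (e.g. 10^8 vs 2^10), but for N = 100 with k = 3 the polynomial bound is 10^16 against roughly 1.3 x 10^30, which is the sense in which sparse causal discovery avoids the exponential blow-up.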
- 6 Nov 2014. Proof supplement to: Claassen, T., Mooij, J.M., & Heskes, T. (2013). Learning sparse causal models is not NP-hard. In A. Nicholson & P. Smyth (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Ninth Conference (2013), July 12-14, 2013, Bellevue, Washington, United States (pp. 172-181). Corvallis, Oregon: AUAI Press.