Supplement - Learning Sparse Causal Models is not NP-hard
| Authors | Claassen, T., Mooij, J.M. & Heskes, T. |
|---|---|
| Publication date | 2014 |
| Number of pages | 11 |
| Publisher | Ithaca, NY: ArXiv |
| Organisations | |
| Abstract | This article contains detailed proofs and additional examples related to the UAI-2013 submission 'Learning Sparse Causal Models is not NP-hard'. It describes the FCI+ algorithm: a method for sound and complete causal model discovery in the presence of latent confounders and/or selection bias that has worst-case polynomial complexity of order N^(2(k+1)) in the number of independence tests, for sparse graphs over N nodes with node degree bounded by k. The algorithm is an adaptation of the well-known FCI algorithm of Spirtes et al. (2000), which is also sound and complete but has worst-case complexity exponential in N. |
| Document type | Report |
| Note | 6 Nov 2014. Proof supplement to: Claassen, T., Mooij, J.M. & Heskes, T. (2013). Learning sparse causal models is not NP-hard. In A. Nicholson & P. Smyth (Eds.), *Uncertainty in artificial intelligence: proceedings of the twenty-ninth conference (2013): July 12-14, 2013, Bellevue, Washington, United States* (pp. 172-181). Corvallis, Oregon: AUAI Press. |
| Language | English |
| Published at | http://arxiv.org/abs/1411.1557 |
| Downloads | 1411.1557v1.pdf (final published version) |
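The abstract's worst-case bound on the number of independence tests, N^(2(k+1)), can be illustrated with a minimal sketch. The function name below is hypothetical (not from the paper); it simply evaluates the stated polynomial bound for a given node count and degree bound.

```python
def fci_plus_worst_case_tests(n_nodes: int, max_degree: int) -> int:
    """Evaluate the worst-case polynomial bound N^(2(k+1)) on the number
    of independence tests quoted in the abstract, for a sparse graph over
    n_nodes (N) nodes with node degree bounded by max_degree (k).

    Hypothetical helper for illustration only; the bound itself is the
    one stated in the abstract, not an analysis performed here.
    """
    return n_nodes ** (2 * (max_degree + 1))

# Example: for N = 10 nodes with degree bound k = 2, the bound is
# 10^(2*(2+1)) = 10^6 tests in the worst case.
print(fci_plus_worst_case_tests(10, 2))
```

Because the exponent depends only on the degree bound k, the bound stays polynomial in N for any fixed k, which is the contrast the abstract draws with FCI's exponential worst case.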
