Learning sparse causal models is not NP-hard
| | |
|---|---|
| Authors | |
| Publication date | 2013 |
| Host editors | |
| Book title | Uncertainty in artificial intelligence: proceedings of the twenty-ninth conference (2013): July 12-14, 2013, Bellevue, Washington, United States |
| Event | Conference on Uncertainty in Artificial Intelligence; 29 (Bellevue, Wash.) |
| Pages (from-to) | 172-181 |
| Publisher | Corvallis, Oregon: AUAI Press |
| Organisations | |
| Abstract | This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for sample/real-world data versions. It does not contradict any known hardness results, and does not solve an NP-hard problem: it just proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks. |
| Document type | Conference contribution |
| Language | English |
| Published at | http://auai.org/uai2013/prints/papers/121.pdf |
| Downloads | Learning sparse causal models is not NP-hard (Final published version) |
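The abstract's complexity claim, that the sound and complete causal model for a graph with node degree bounded by k can be found in worst case order N^(2(k+2)) independence tests, can be illustrated with a minimal sketch. The function name and sample values below are illustrative assumptions, not from the paper:

```python
def worst_case_tests(n, k):
    """Worst-case number of independence tests, per the abstract's
    bound N^(2(k+2)) for N nodes with node degree bounded by k
    (illustrative helper, not an implementation from the paper)."""
    return n ** (2 * (k + 2))

# Polynomial in N for any fixed degree bound k, e.g. N = 10, k = 2:
print(worst_case_tests(10, 2))  # prints 100000000 (10^8)
```

For fixed k this grows polynomially in the number of nodes N, which is the sense in which the paper argues sparse causal discovery is "not NP-hard".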
