A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference
| Authors | |
|---|---|
| Publication date | 2023 |
| Host editors | |
| Book title | 37th Conference on Neural Information Processing Systems (NeurIPS 2023) |
| Book subtitle | 10-16 December 2023, New Orleans, Louisiana, USA |
| ISBN (electronic) | |
| Series | Advances in Neural Information Processing Systems |
| Event | 37th Conference on Neural Information Processing Systems (NeurIPS 2023) |
| Number of pages | 24 |
| Publisher | Neural Information Processing Systems Foundation |
| Organisations | |
| Abstract | We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time, which is vital in safety-critical applications. Our experiments show that A-NeSI is the first end-to-end method to solve three neurosymbolic tasks with exponential combinatorial scaling. Finally, our experiments show that A-NeSI achieves explainability and safety without a penalty in performance. |
| Document type | Conference contribution |
| Note | With supplementary ZIP-file |
| Language | English |
| Published at | https://papers.nips.cc/paper_files/paper/2023/hash/4d9944ab3330fe6af8efb9260aa9f307-Abstract-Conference.html https://openreview.net/forum?id=chlTA9Cegc |
| Other links | https://doi.org/10.52202/075280 |
| Downloads | A-NeSI (Accepted author manuscript) |
| Supplementary materials | |
| Permalink to this page | |
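
As an illustration of the approximate-inference idea summarized in the abstract, below is a minimal, hypothetical sketch, not the authors' released code. It assumes a two-digit MNIST-addition-style task; the `inference_net` architecture, the sizes, and the use of random Dirichlet beliefs in place of a perception network's outputs are all illustrative assumptions.

```python
# Minimal sketch of the core A-NeSI idea (illustrative, not the paper's code):
# train a neural inference network to approximate probabilistic inference over
# a symbolic program c, using training data generated by the background
# knowledge itself, so test-time inference is a single polynomial-time forward
# pass instead of an exponential-time sum over worlds.

import torch
import torch.nn as nn

def c(w):
    # Background knowledge (assumed task): the output is the sum of two digits.
    return w[..., 0] + w[..., 1]

# Hypothetical inference network: maps a belief P over two digits (2 x 10
# probabilities) to a distribution over the 19 possible sums 0..18.
inference_net = nn.Sequential(
    nn.Linear(2 * 10, 128),
    nn.ReLU(),
    nn.Linear(128, 19),
)
opt = torch.optim.Adam(inference_net.parameters(), lr=1e-3)

for step in range(1000):
    # 1) Sample beliefs over the symbols. Here random Dirichlet draws stand in
    #    for the perception network's output probabilities.
    P = torch.distributions.Dirichlet(torch.ones(64, 2, 10)).sample()
    # 2) Sample worlds w ~ P and run them through the symbolic program. This is
    #    the "data generated by the background knowledge" used for training.
    w = torch.distributions.Categorical(probs=P).sample()  # shape (64, 2)
    y = c(w)                                               # shape (64,)
    # 3) Fit the inference network to predict y from the beliefs alone, so it
    #    learns to amortize the exact sum over worlds.
    logits = inference_net(P.flatten(1))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```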