Neurosymbolic knowledge engineering with natural language
| Award date | 30-04-2026 |
|---|---|
| Number of pages | 144 |
| Document type | PhD thesis |
| Language | English |

**Abstract**

Since the nineteen-seventies, knowledge engineering as a discipline has struggled with the implementation problem: the difficulty of translating expert knowledge expressed in natural language into a formal knowledge representation to be adopted by organizations and communities for use in automated decision making. This knowledge acquisition bottleneck remains a fundamental barrier. We argue that large language models (LLMs) provide a means to address the implementation problem, by allowing knowledge expressed in natural language to be used directly in knowledge engineering tasks, rather than having to be first formalized into a knowledge representation language.

We show how classifiers-as-intensions, based on the prompting of LLMs using natural language intensional definitions of concepts and relations, can provide support for the knowledge engineering task of classification. We show that by having classifiers-as-intensions provide rationales for their classifications, we can distinguish factual errors from disagreements about the meaning of concepts and relations, yielding actionable guidance for knowledge graph refinement.

One objection to this approach is that LLMs exhibit hallucination in their output, bringing into question their factuality. To address this objection, we show that LLMs are capable of accurately detecting hallucination in language model output, and that bilateral factuality evaluation provides insight into the degree and scope of inconsistency and incompleteness in an LLM's parametric knowledge. We show how bilateral factuality evaluation can then be used in the formal semantics of a paraconsistent logic to allow sound and complete neurosymbolic reasoning using such knowledge.

We conclude by arguing that the implementation problem in knowledge engineering is rooted in its adherence to representationalism, and that our findings suggest that inferentialism and social externalism provide a way to reconceptualize the practice of knowledge engineering and dissolve the implementation problem, not by making LLMs reason logically, but by using logics that allow reasoning with LLMs.
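
To make the classifiers-as-intensions idea concrete, the following is a minimal sketch of how an LLM might be prompted with a natural-language intensional definition and asked for a verdict plus a rationale. The prompt wording and the `llm` callable are illustrative assumptions, not the protocol used in the thesis.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Classification:
    label: bool       # does the instance fall under the concept?
    rationale: str    # the model's stated grounds for its verdict

# Illustrative prompt template; the thesis's actual wording is not
# given in the abstract.
INTENSION_TEMPLATE = """\
Definition: {definition}

According to this definition, is "{instance}" an instance of the
concept "{concept}"? Answer YES or NO on the first line, then give a
short rationale on the lines that follow.
"""

def classify(llm: Callable[[str], str],
             concept: str, definition: str, instance: str) -> Classification:
    """Use a natural-language intensional definition as a classifier."""
    prompt = INTENSION_TEMPLATE.format(
        definition=definition, instance=instance, concept=concept)
    reply = llm(prompt)
    verdict, _, rationale = reply.partition("\n")
    return Classification(
        label=verdict.strip().upper().startswith("YES"),
        rationale=rationale.strip(),
    )
```

Because the rationale is returned alongside the verdict, a curator can check whether a disagreement with an existing knowledge graph stems from a factual error or from a different reading of the definition, which is the refinement signal the abstract describes.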
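The bilateral factuality evaluation mentioned in the abstract can be read as assessing a statement's verifiability and refutability as two independent questions, so that the combined verdicts expose both gluts (inconsistency) and gaps (incompleteness) in the model's parametric knowledge. A sketch under that reading, with a hypothetical `ask_yes_no` helper standing in for an LLM judgment call:

```python
from enum import Enum
from typing import Callable

class Verdict(Enum):
    TRUE = "verified only"          # verifiable and not refutable
    FALSE = "refuted only"          # refutable and not verifiable
    BOTH = "glut (inconsistent)"    # verifiable and refutable
    NEITHER = "gap (incomplete)"    # neither verifiable nor refutable

def bilateral_evaluate(ask_yes_no: Callable[[str], bool],
                       statement: str) -> Verdict:
    """Judge verifiability and refutability as independent questions."""
    verified = ask_yes_no(f"Can this statement be verified as true? {statement}")
    refuted = ask_yes_no(f"Can this statement be refuted as false? {statement}")
    if verified and refuted:
        return Verdict.BOTH
    if verified:
        return Verdict.TRUE
    if refuted:
        return Verdict.FALSE
    return Verdict.NEITHER
```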
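One established four-valued paraconsistent logic whose semantics fits these four verdicts is Belnap-Dunn first-degree entailment (FDE), in which a proposition carries independent truth-support and falsity-support bits. Whether the thesis adopts FDE itself or another paraconsistent logic is not stated in the abstract, so the sketch below only illustrates how such semantics let inconsistent LLM knowledge be reasoned with rather than discarded.

```python
from typing import NamedTuple

class FDE(NamedTuple):
    supports_true: bool
    supports_false: bool

T = FDE(True, False)    # verified only
F = FDE(False, True)    # refuted only
B = FDE(True, True)     # both: a glut (inconsistency)
N = FDE(False, False)   # neither: a gap (incompleteness)

def neg(a: FDE) -> FDE:
    # Negation swaps the evidence for truth and falsity.
    return FDE(a.supports_false, a.supports_true)

def conj(a: FDE, b: FDE) -> FDE:
    # A conjunction is supported as true only if both conjuncts are,
    # and supported as false if either conjunct is.
    return FDE(a.supports_true and b.supports_true,
               a.supports_false or b.supports_false)

def disj(a: FDE, b: FDE) -> FDE:
    return FDE(a.supports_true or b.supports_true,
               a.supports_false and b.supports_false)

# In FDE a contradiction does not explode: conjoining a glut with other
# knowledge never forces an arbitrary conclusion.
assert conj(B, T) == B and disj(B, F) == B
```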
