Using Explainable Techniques to Enhance Chain of Thoughts in LLMs

Open Access
Authors
Publication date 2025
Book title 2025 IEEE International Conference on eScience : eScience 2025
Book subtitle 15-18 September 2025, Chicago, United States : proceedings
ISBN
  • 9798331591465
ISBN (electronic)
  • 9798331591458
Event 21st IEEE International Conference on e-Science, eScience 2025
Pages (from-to) 323-324
Number of pages 2
Publisher Los Alamitos, California: IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract

With recent advancements in reasoning capabilities and semantic understanding, Large Language Models (LLMs) increasingly mimic human reasoning processes and align more closely with human values. Yet, much as humans sometimes make decisions intuitively without examining the underlying logic, LLMs struggle with transparency in their internal reasoning. Just as humans often pause to analyze their reasoning step by step to better understand their own decisions, we propose enabling LLMs to perform analogous introspective reasoning using a Chain-of-Thought (CoT) framework powered by SHAP-based explainability. The CoT method guides LLMs to explicitly render intermediate reasoning steps, effectively simulating a human's reflective thought process. Meanwhile, SHAP explainability supplies the supporting logic by quantifying how each semantic token or feature contributes to the reasoning, analogous to a human examining which factors most influenced their choices. Our analysis explores how various training methods influence LLMs' semantic comprehension and, consequently, their ability to communicate their reasoning transparently. We apply this approach to a specialized environmental and Earth science ranking dataset, characterized by user feedback and sparse annotations, and assess whether LLM-generated rankings can not only reflect accurate semantic understanding but also transparently emulate human-like introspection.
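The paper's actual SHAP pipeline is not included in this record. As a rough illustration of the attribution idea the abstract describes, the sketch below computes exact Shapley values over a tiny token set: each token's score is its average marginal contribution to a value function across all coalitions. The tokens and the value function here are invented stand-ins for a model's relevance score, not the authors' setup.

```python
from itertools import combinations
from math import factorial

def shapley_values(tokens, value_fn):
    """Exact Shapley attribution: each token's weighted average
    marginal contribution to value_fn over all coalitions."""
    n = len(tokens)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Classic Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = value_fn([tokens[j] for j in sorted(subset + (i,))])
                without_i = value_fn([tokens[j] for j in subset])
                phi[i] += weight * (with_i - without_i)
    return phi

# Toy value function standing in for a model's relevance score:
# it simply counts the "informative" tokens a coalition retains.
INFORMATIVE = {"ocean", "temperature"}
def score(coalition):
    return sum(tok in INFORMATIVE for tok in coalition)

tokens = ["the", "ocean", "temperature", "rose"]
phi = shapley_values(tokens, score)
# Efficiency property: attributions sum to score(all) - score(empty).
```

Exact enumeration is exponential in the number of tokens; in practice the SHAP library approximates these values by sampling, which is what makes token-level attribution feasible for LLM inputs.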

Document type Conference contribution
Language English
Published at https://doi.org/10.1109/eScience65000.2025.00052
Other links
  • https://www.proceedings.com/82446.html
  • https://www.scopus.com/pages/publications/105019536125