
arXiv:2502.08109 (cs)
[Submitted on 12 Feb 2025]

Title: HuDEx: Integrating Hallucination Detection and Explainability for Enhancing the Reliability of LLM responses

Authors: Sujeong Lee, Hayoung Lee, Seongsoo Heo, Wonik Choi
Abstract: Recent advances in large language models (LLMs) have shown promising improvements, often surpassing existing methods across a wide range of downstream tasks in natural language processing. However, these models still face challenges that may hinder their practical applicability. For example, the phenomenon of hallucination is known to compromise the reliability of LLMs, especially in fields that demand high factual precision. Current benchmarks primarily focus on hallucination detection and factuality evaluation but do not extend beyond identification. This paper proposes an explanation-enhanced hallucination-detection model, coined HuDEx, aimed at enhancing the reliability of LLM-generated responses by both detecting hallucinations and providing detailed explanations. The proposed model offers a novel approach that integrates detection with explanation, enabling both users and the LLM itself to understand and reduce errors. Our experimental results demonstrate that the proposed model surpasses larger LLMs, such as Llama3 70B and GPT-4, in hallucination detection accuracy while maintaining reliable explanations. Furthermore, the proposed model performs well in both zero-shot and other test environments, showcasing its adaptability across diverse benchmark datasets. By integrating interpretability with hallucination detection, the proposed approach advances hallucination detection research and improves the performance and reliability of evaluating hallucinations in language models.
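
The abstract gives no code, but the detect-then-explain loop it describes can be sketched in a few lines. The following Python sketch is an illustrative assumption, not the paper's implementation: `query_llm` is a hypothetical stand-in for a call to a detector model such as HuDEx, and the prompt wording and JSON verdict schema are invented here for clarity.

```python
# A minimal, hypothetical sketch of a detect-then-explain pipeline.
# The prompt, JSON schema, and `query_llm` stub are assumptions for
# illustration, not the paper's actual prompts, model, or API.

import json


def query_llm(prompt: str) -> str:
    """Placeholder for a real call to a detector model (e.g. a
    fine-tuned LLM). Returns a canned verdict so the sketch runs."""
    return json.dumps({
        "hallucinated": True,
        "explanation": "The response states a year not present in the source.",
    })


def detect_and_explain(source_text: str, response: str) -> dict:
    """Ask the detector whether `response` is supported by `source_text`
    and, if not, for an explanation of the unsupported claim."""
    prompt = (
        "You are a hallucination detector.\n"
        f"Source:\n{source_text}\n\n"
        f"Response:\n{response}\n\n"
        "Does the response contain claims unsupported by the source? "
        'Reply as JSON: {"hallucinated": <bool>, "explanation": <str>}'
    )
    return json.loads(query_llm(prompt))


if __name__ == "__main__":
    verdict = detect_and_explain(
        source_text="The treaty was signed in 1648.",
        response="The treaty was signed in 1650.",
    )
    print(verdict["hallucinated"], "-", verdict["explanation"])
```

In the abstract's framing, the returned explanation is what lets a user, or the generating LLM itself, locate and correct the unsupported claim, rather than receiving only a binary verdict.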
Comments: 11 pages
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2502.08109 [cs.CL]
  (or arXiv:2502.08109v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2502.08109

Submission history

From: Sujeong Lee
[v1] Wed, 12 Feb 2025 04:17:02 UTC (880 KB)