

arXiv:2406.09155 (cs)
[Submitted on 13 Jun 2024]

Title: DefAn: Definitive Answer Dataset for LLMs Hallucination Evaluation

Authors:A B M Ashikur Rahman, Saeed Anwar, Muhammad Usman, Ajmal Mian
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities, revolutionizing the integration of AI into daily-life applications. However, they are prone to hallucination: generating claims that contradict established facts, deviating from the prompt, and producing inconsistent responses when the same prompt is presented multiple times. Addressing these issues is challenging due to the lack of comprehensive, easily assessable benchmark datasets; most existing datasets are small and rely on multiple-choice questions, which are inadequate for evaluating the generative prowess of LLMs. To measure hallucination in LLMs, this paper introduces a comprehensive benchmark dataset comprising over 75,000 prompts across eight domains, designed to elicit definitive, concise, and informative answers. The dataset is divided into two segments: a publicly available segment for testing and assessing LLM performance, and a hidden segment for benchmarking various LLMs. In our experiments, we tested six LLMs (GPT-3.5, Llama 2, Llama 3, Gemini, Mixtral, and Zephyr), revealing that overall factual hallucination ranges from 59% to 82% on the public dataset and from 57% to 76% on the hidden benchmark. Prompt-misalignment hallucination ranges from 6% to 95% on the public dataset and from 17% to 94% on the hidden counterpart. Average consistency ranges from 21% to 61% and from 22% to 63%, respectively. Domain-wise analysis shows that LLM performance deteriorates significantly when asked for specific numeric information, while performing moderately on person, location, and date queries. Our dataset demonstrates its efficacy and serves as a comprehensive benchmark for LLM performance evaluation. Our dataset and the LLM responses are available at this https URL.
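The abstract reports three metrics: factual hallucination, prompt misalignment, and response consistency. A minimal sketch of how the first and last could be scored is shown below, assuming model responses have already been normalized to short definitive answers; the paper's exact extraction and matching rules are not reproduced here, and all function and variable names are illustrative, not from the paper.

```python
from collections import Counter

def factual_hallucination_rate(answers, references):
    """Fraction of normalized answers that do not match their reference.

    `answers` and `references` are parallel lists of answer strings
    (hypothetical format; the paper's matching rules may be more lenient).
    """
    mismatches = sum(1 for a, ref in zip(answers, references) if a != ref)
    return mismatches / len(answers)

def response_consistency(repeated_answers):
    """Average majority agreement across repeated runs of each prompt.

    `repeated_answers` maps a prompt to the list of answers produced by
    re-running that same prompt several times; per prompt, the score is
    the share of runs that agree with the most frequent answer.
    """
    scores = []
    for runs in repeated_answers.values():
        majority = Counter(runs).most_common(1)[0][1]
        scores.append(majority / len(runs))
    return sum(scores) / len(scores)
```

Under this reading, a model that answers identically on every repetition scores 100% consistency regardless of whether the answer is factually correct, which matches the abstract's treatment of consistency as a separate axis from factual hallucination.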
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Cite as: arXiv:2406.09155 [cs.CL]
  (or arXiv:2406.09155v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2406.09155
arXiv-issued DOI via DataCite

Submission history

From: Saeed Anwar
[v1] Thu, 13 Jun 2024 14:18:13 UTC (2,046 KB)
