Computer Science > Computation and Language
[Submitted on 19 May 2023 (v1), last revised 23 Oct 2023 (this version, v3)]
Title: HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Abstract: Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content, and to what extent, LLMs tend to hallucinate, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval), a large collection of generated and human-annotated hallucinated samples for evaluating how well LLMs recognize hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. In addition, we hire human labelers to annotate hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content on specific topics by fabricating unverifiable information (in about 19.5% of responses). Moreover, existing LLMs face great challenges in recognizing hallucinations in text. However, our experiments also show that providing external knowledge or adding reasoning steps can help LLMs recognize hallucinations. Our benchmark can be accessed at this https URL.
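The abstract does not spell out the sampling-then-filtering framework; the sketch below is one plausible reading of it, not the authors' code. All function names, prompts, and the `llm` stand-in are hypothetical: step 1 samples several hallucinated answer candidates from a ChatGPT-style model, and step 2 asks the model to keep the most plausible one, so the benchmark retains hard-to-detect samples.

```python
from typing import Callable

# Hypothetical sketch of a two-step "sampling-then-filtering" pipeline.
# `llm` stands in for any ChatGPT-style completion function; the prompts
# and helper names are illustrative assumptions, not the HaluEval code.

def sample_candidates(llm: Callable[[str], str], question: str,
                      right_answer: str, k: int = 3) -> list[str]:
    """Step 1 (sampling): elicit k plausible but hallucinated answers."""
    prompt = (f"Question: {question}\n"
              f"Correct answer: {right_answer}\n"
              "Write one plausible-sounding but factually wrong answer.")
    return [llm(prompt) for _ in range(k)]

def filter_most_plausible(llm: Callable[[str], str], question: str,
                          candidates: list[str]) -> str:
    """Step 2 (filtering): keep the candidate the model rates most
    plausible, i.e. the hardest hallucination to recognize."""
    listed = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
    choice = llm(f"Question: {question}\nCandidates:\n{listed}\n"
                 "Reply with only the index of the most plausible candidate.")
    return candidates[int(choice.strip())]

if __name__ == "__main__":
    # Deterministic stand-in for a real model, just so the sketch runs.
    fake_llm = lambda p: "0" if "index" in p else "A made-up answer."
    cands = sample_candidates(fake_llm, "Who wrote Hamlet?", "Shakespeare")
    print(filter_most_plausible(fake_llm, "Who wrote Hamlet?", cands))
```

In practice, `llm` would wrap a real chat-completion client; the filtering step is what distinguishes this from naive sampling, since it selects for hallucinations that are difficult to tell apart from the ground truth.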
Submission history
From: Junyi Li
[v1] Fri, 19 May 2023 15:36:27 UTC (686 KB)
[v2] Mon, 22 May 2023 13:36:09 UTC (687 KB)
[v3] Mon, 23 Oct 2023 01:49:32 UTC (689 KB)