arXiv:2305.11747 (cs)
[Submitted on 19 May 2023 (v1), last revised 23 Oct 2023 (this version, v3)]
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content LLMs tend to hallucinate, and to what extent, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval), a large collection of generated and human-annotated hallucinated samples for evaluating how well LLMs recognize hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. In addition, we hire human labelers to annotate the hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content on specific topics by fabricating unverifiable information (about 19.5% of responses). Moreover, existing LLMs face great challenges in recognizing hallucinations in text. However, our experiments also show that providing external knowledge or adding reasoning steps can help LLMs recognize hallucinations. Our benchmark can be accessed at this https URL.
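
The sampling-then-filtering framework mentioned in the abstract can be illustrated with a minimal sketch: for each task instance, a chat model first proposes several candidate hallucinated answers, and a second pass keeps the most plausible one. This is an illustrative reconstruction, not the authors' released code; the chat callable, the prompt wording, and the output field names are assumptions.

# Minimal sketch, assuming a generic chat(prompt) -> str callable; prompts and
# field names are illustrative, not HaluEval's actual templates or schema.

from typing import Callable, List

def sample_hallucinations(chat: Callable[[str], str], question: str,
                          right_answer: str, n: int = 4) -> List[str]:
    # Step 1 (sampling): ask the model for answers that sound plausible but
    # contradict or go beyond the supported answer.
    prompt = (
        "You are given a question and its correct answer.\n"
        f"Question: {question}\nCorrect answer: {right_answer}\n"
        "Write one plausible-sounding but factually wrong (hallucinated) answer."
    )
    return [chat(prompt) for _ in range(n)]

def filter_most_plausible(chat: Callable[[str], str], question: str,
                          right_answer: str, candidates: List[str]) -> str:
    # Step 2 (filtering): ask the model which candidate is hardest to tell
    # apart from a truthful answer, and keep that one.
    listing = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Question: {question}\nCorrect answer: {right_answer}\n"
        f"Candidate hallucinated answers:\n{listing}\n"
        "Reply with only the number of the most plausible candidate."
    )
    reply = chat(prompt).strip()
    index = int(reply) - 1 if reply.isdigit() else 0
    return candidates[min(max(index, 0), len(candidates) - 1)]

def build_sample(chat: Callable[[str], str], question: str, right_answer: str) -> dict:
    # One benchmark record: the question, the grounded answer, and a filtered
    # hallucinated answer to be used in the recognition task.
    candidates = sample_hallucinations(chat, question, right_answer)
    return {
        "question": question,
        "right_answer": right_answer,
        "hallucinated_answer": filter_most_plausible(chat, question, right_answer, candidates),
    }

For the recognition side, an analogous prompt would show an LLM the question together with either the grounded or the hallucinated answer and ask for a hallucinated/not-hallucinated judgment; per the abstract, supplying external knowledge or intermediate reasoning steps in that prompt improves recognition.
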
Comments: Accepted to EMNLP 2023 Main Conference (Long Paper)
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2305.11747 [cs.CL] (or arXiv:2305.11747v3 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2305.11747
Submission history
From: Junyi Li
[v1] Fri, 19 May 2023 15:36:27 UTC (686 KB)
[v2] Mon, 22 May 2023 13:36:09 UTC (687 KB)
[v3] Mon, 23 Oct 2023 01:49:32 UTC (689 KB)