arXiv:2408.08333
Computer Science > Software Engineering

[Submitted on 14 Aug 2024 (v1), last revised 8 Jul 2025 (this version, v2)]

Title: CodeMirage: Hallucinations in Code Generated by Large Language Models

Authors: Vibhor Agarwal, Yulong Pei, Salwa Alamir, Xiaomo Liu
Abstract: Large Language Models (LLMs) have shown promising potential in program generation and no-code automation. However, LLMs are prone to hallucination, i.e., they generate text that sounds plausible but is incorrect. Although there has been a recent surge of research on LLM hallucinations in text generation, a similar phenomenon can occur in code generation: the generated code can contain syntactic or logical errors as well as more advanced issues such as security vulnerabilities and memory leaks. Given the wide adoption of LLMs to improve efficiency in code generation and software development in general, it becomes imperative to investigate hallucinations in code generation. To the best of our knowledge, this is the first attempt to study hallucinations in code generated by LLMs. We begin by introducing a definition of code hallucination and a comprehensive taxonomy of code hallucination types. We propose CodeMirage, the first benchmark dataset for code hallucinations. The benchmark contains 1,137 GPT-3.5-generated hallucinated code snippets for Python programming problems drawn from two base datasets, HumanEval and MBPP. We then propose a methodology for code hallucination detection and experiment with open-source LLMs such as CodeLLaMA as well as OpenAI's GPT-3.5 and GPT-4 models using one-shot prompting. We find that GPT-4 performs best on the HumanEval dataset and gives results comparable to the fine-tuned CodeBERT baseline on the MBPP dataset. Finally, we discuss various mitigation strategies for code hallucinations and conclude.
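To illustrate the one-shot detection setup the abstract describes, here is a minimal Python sketch. It assumes the OpenAI chat completions API; the prompt wording, the demonstration example, and the yes/no label format are illustrative assumptions, not the paper's exact prompt or protocol.

# One-shot code hallucination detection, sketched against the OpenAI
# chat completions API. The demonstration pair and label scheme are
# assumptions for illustration, not the CodeMirage prompt itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One-shot demonstration: a problem, a buggy candidate solution, and a label.
EXAMPLE_PROBLEM = "Return the sum of a list of integers."
EXAMPLE_CODE = "def sum_list(xs):\n    return max(xs)  # wrong operation"
EXAMPLE_LABEL = "yes"  # hallucinated

def detect_hallucination(problem: str, code: str, model: str = "gpt-4") -> bool:
    """Ask the model whether `code` is a hallucinated solution to `problem`."""
    prompt = (
        "Decide whether the code is a hallucinated (incorrect) solution "
        "to the problem. Answer 'yes' or 'no'.\n\n"
        f"Problem: {EXAMPLE_PROBLEM}\nCode:\n{EXAMPLE_CODE}\n"
        f"Answer: {EXAMPLE_LABEL}\n\n"
        f"Problem: {problem}\nCode:\n{code}\nAnswer:"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labeling
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

The same prompt template can be pointed at an open-source model such as CodeLLaMA by swapping the client for a local inference endpoint; only the model call changes, not the one-shot format.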
Comments: Accepted at AutoMates @ IJCAI 2024
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2408.08333 [cs.SE]
  (or arXiv:2408.08333v2 [cs.SE] for this version)
  https://doi.org/10.48550/arXiv.2408.08333
arXiv-issued DOI via DataCite

Submission history

From: Vibhor Agarwal
[v1] Wed, 14 Aug 2024 22:53:07 UTC (85 KB)
[v2] Tue, 8 Jul 2025 23:14:43 UTC (28 KB)