Computer Science > Machine Learning (cs.LG)
[Submitted on 7 Jul 2021 (v1), last revised 14 Jul 2021 (this version, v2)]

Title: Evaluating Large Language Models Trained on Code

Authors: Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba
Abstract: We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
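The solve rates quoted above (28.8% with a single sample, 70.2% with 100 samples per problem) are functional-correctness pass rates of the kind usually reported as pass@k: a problem counts as solved if at least one of k sampled completions passes its unit tests. Below is a minimal sketch of an unbiased pass@k estimator consistent with that definition; the function name and example numbers are illustrative, and this should be read as a sketch of the metric rather than the paper's verbatim evaluation code.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one problem.

    n: total samples generated for the problem
    c: number of those samples that pass the unit tests
    k: number of samples the metric is allowed to draw

    Estimates 1 - C(n - c, k) / C(n, k), i.e. the probability that a
    random size-k subset of the n samples contains at least one correct
    solution, computed in a numerically stable product form.
    """
    if n - c < k:
        # Fewer than k incorrect samples: every size-k subset must
        # contain at least one correct sample.
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative usage: 100 samples per problem, 30 of which pass the tests.
print(pass_at_k(100, 30, 1))    # ~0.30  (pass@1)
print(pass_at_k(100, 30, 100))  # 1.0    (pass@100: all samples are drawn)

The product form avoids the overflow that a direct evaluation of the binomial coefficients would cause for large n, which matters when estimating pass@k from hundreds of samples per problem.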
Comments: corrected typos, added references, added authors, added acknowledgements
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2107.03374 [cs.LG]
  (or arXiv:2107.03374v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2107.03374
arXiv-issued DOI via DataCite

Submission history

From: Mark Chen
[v1] Wed, 7 Jul 2021 17:41:24 UTC (1,466 KB)
[v2] Wed, 14 Jul 2021 17:16:02 UTC (1,467 KB)