Computer Science > Computation and Language

arXiv:1907.11692 (cs) [Submitted on 26 Jul 2019]

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov

Abstract: Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have a significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.

Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1907.11692 [cs.CL] (arXiv:1907.11692v1 for this version)
DOI: https://doi.org/10.48550/arXiv.1907.11692

Submission history
From: Myle Ott
[v1] Fri, 26 Jul 2019 17:48:29 UTC (45 KB)
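Since the abstract notes that the models and code are released, the sketch below shows one common way to load a pretrained RoBERTa checkpoint and use it for masked-token prediction. This assumes the HuggingFace Transformers port of the released weights (the checkpoint name "roberta-base" is the Transformers identifier, not part of the paper itself); it is an illustrative usage sketch, not the authors' original release pipeline.

```python
# Minimal sketch: load a pretrained RoBERTa checkpoint and fill in a
# masked token. Assumes the HuggingFace Transformers port of the
# released weights ("roberta-base"), not the authors' original release.
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.eval()

# RoBERTa's mask token is <mask>; the tokenizer exposes it directly.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Locate the masked position and take the highest-scoring vocabulary entry.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # plausibly " Paris"
```

This kind of fill-mask probe is a quick sanity check that a pretrained checkpoint loaded correctly; fine-tuning on GLUE, RACE, or SQuAD, as evaluated in the paper, starts from the same pretrained weights.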