
arXiv:2511.17069 (cs)
[Submitted on 21 Nov 2025]

Title: Principled Design of Interpretable Automated Scoring for Large-Scale Educational Assessments

Authors: Yunsung Kim, Mike Hardy, Joseph Tey, Candace Thille, Chris Piech
Abstract: AI-driven automated scoring systems offer scalable and efficient means of evaluating complex student-generated responses. Yet, despite increasing demand for transparency and interpretability, the field has yet to develop a widely accepted solution for interpretable automated scoring in large-scale real-world assessments. This work takes a principled approach to address this challenge. We analyze the needs and potential benefits of interpretable automated scoring for various assessment stakeholders and develop four principles of interpretability -- Faithfulness, Groundedness, Traceability, and Interchangeability (FGTI) -- targeted at those needs. To illustrate the feasibility of implementing these principles, we develop the AnalyticScore framework for short answer scoring as a baseline reference framework for future research. AnalyticScore operates by (1) extracting explicitly identifiable elements of the responses, (2) featurizing each response into human-interpretable values using LLMs, and (3) applying an intuitive ordinal logistic regression model for scoring. In terms of scoring accuracy, AnalyticScore outperforms many uninterpretable scoring methods, and is within only 0.06 QWK of the uninterpretable SOTA on average across 10 items from the ASAP-SAS dataset. By comparing against human annotators conducting the same featurization task, we further demonstrate that the featurization behavior of AnalyticScore aligns well with that of humans.
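
To make the three-stage pipeline concrete, here is a minimal sketch in Python. The feature names and the `llm_featurize` stub are hypothetical illustrations (the abstract does not specify them), and statsmodels' `OrderedModel` stands in for the ordinal logistic regression scoring stage; this is a sketch under those assumptions, not the authors' implementation.

```python
# A minimal sketch of the three-stage AnalyticScore pipeline, assuming
# hypothetical feature names and a stubbed LLM call; illustrative only,
# not the authors' implementation.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

def llm_featurize(response: str) -> dict:
    """Stages (1)-(2): an LLM would extract identifiable elements from the
    response and map them to human-interpretable feature values. Stubbed
    here with made-up placeholder features."""
    return {
        "mentions_key_concept": 1.0,    # does the answer name the target concept?
        "num_supporting_details": 2.0,  # count of distinct supporting statements
        "is_on_topic": 1.0,             # binary relevance judgment
    }

# Stage (3): fit an ordinal logistic regression on the interpretable
# features. Toy data: each row is a featurized response, y is the 0-3
# human-assigned score.
X = np.array([
    [1, 2, 1], [0, 0, 1], [1, 3, 1], [0, 1, 0], [1, 1, 1],
    [0, 0, 0], [1, 2, 0], [0, 2, 1], [1, 0, 1], [0, 3, 1],
], dtype=float)
y = np.array([2, 0, 3, 1, 2, 0, 1, 2, 1, 2])

model = OrderedModel(y, X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)  # per-feature weights plus score-threshold cutpoints

# Scoring a new response: featurize it, then read off class probabilities.
new_x = np.array([list(llm_featurize("a new student answer").values())])
print(model.predict(result.params, exog=new_x))  # P(score = 0..3)
```

The interpretability of stage (3) comes from the model form itself: both the fitted per-feature weights and the cutpoints separating score levels are directly inspectable, so each score can be traced back to human-readable feature values.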
Comments: 16 pages, 2 figures
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2511.17069 [cs.CL]
  (or arXiv:2511.17069v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2511.17069

Submission history

From: Yunsung Kim
[v1] Fri, 21 Nov 2025 09:19:05 UTC (183 KB)