Applying large language models for automated essay scoring for non-native Japanese
The result's identifiers
Result code in IS VaVaI
RIV/00216208:11320/25:3AQKM2AE (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3A3AQKM2AE)
Result on the web
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85195378013&doi=10.1057%2fs41599-024-03209-9&partnerID=40&md5=2df4ec39be1d1b6ad1fd8d241c682779
DOI - Digital Object Identifier
10.1057/s41599-024-03209-9 (http://dx.doi.org/10.1057/s41599-024-03209-9)
Alternative languages
Result language
English
Original language name
Applying large language models for automated essay scoring for non-native Japanese
Original language description
Recent advancements in artificial intelligence (AI) have led to an increased use of large language models (LLMs) for language assessment tasks such as automated essay scoring (AES), automated listening tests, and automated oral proficiency assessments. The application of LLMs for AES in the context of non-native Japanese, however, remains limited. This study explores the potential of LLM-based AES by comparing the efficiency of different models, namely two conventional machine learning-based methods (Jess and JWriter), two LLMs (GPT and BERT), and one Japanese local LLM (the Open-Calm large model). The evaluation used a dataset of 1400 story-writing scripts authored by learners with 12 different first languages. Statistical analysis revealed that GPT-4 outperforms Jess, JWriter, BERT, and the Japanese-specific Open-Calm large model in terms of annotation accuracy and predicting learning levels. Furthermore, by comparing 18 different models that utilize various prompts, the study emphasizes the significance of prompts in achieving accurate and reliable evaluations with LLMs. © The Author(s) 2024.
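To illustrate the kind of prompt-based scoring compared in the study, below is a minimal sketch of LLM-based AES for a learner essay. It assumes the OpenAI Python client; the model name, prompt wording, and six-point level scale are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): asking an LLM to assign a
# proficiency level to a Japanese learner essay, in the spirit of the
# LLM-based AES setup described above. Model name, prompt wording, and
# level scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def score_essay(essay_text: str) -> str:
    """Return a single proficiency level for one story-writing script."""
    prompt = (
        "You are an examiner of Japanese as a foreign language. "
        "Read the learner's story-writing script below and assign one "
        "proficiency level from 1 (beginner) to 6 (advanced). "
        "Reply with the level number only.\n\n"
        f"Essay:\n{essay_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the study compares several models and prompts
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for more consistent scoring
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(score_essay("昨日、私は友達と公園へ行きました。天気がとても良かったです。"))
```

Since the study found prompt design to be decisive for accuracy and reliability, the prompt string is the part one would vary when comparing scoring configurations like this.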
Czech name
—
Czech description
—
Classification
Type
JSC - Article in a specialist periodical included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2024
Confidentiality
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
Humanities and Social Sciences Communications
ISSN
2662-9992
e-ISSN
—
Volume of the periodical
11
Issue of the periodical within the volume
1
Country of publishing house
US - UNITED STATES
Number of pages
16
Pages from-to
1-16
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85195378013