The role of automated evaluation techniques in online professional translator training
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216275%3A25410%2F21%3A39917737" target="_blank" >RIV/00216275:25410/21:39917737 - isvavai.cz</a>
Result on the web
<a href="https://peerj.com/articles/cs-706/" target="_blank" >https://peerj.com/articles/cs-706/</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.7717/peerj-cs.706" target="_blank" >10.7717/peerj-cs.706</a>
Alternative languages
Result language
English
Original language name
The role of automated evaluation techniques in online professional translator training
Original language description
The rapid technologisation of translation has steered the translation industry towards machine translation, post-editing, subtitling services and video content translation. In addition, the pandemic situation associated with COVID-19 has rapidly accelerated the transfer of business and education to the virtual world. This situation has motivated us not only to look for new approaches to online translator training, which requires a different method than foreign-language learning, but in particular to seek new approaches to assessing translator performance within online educational environments. Translation quality assessment is a key task, as the concept of quality is closely linked to the concept of optimisation. Automatic metrics are good indicators of quality, but they do not provide sufficient, detailed linguistic information about translations or post-edited machine translations. However, using their residuals, we can identify the segments with the largest distances between post-edited machine translations and the corresponding machine translations, which allows us to focus a more detailed textual analysis on suspicious segments. We introduce a unique online teaching and learning system tailored specifically to online translator training, and we subsequently focus on a new approach to assessing translators' competences using evaluation techniques: the metrics of automatic evaluation and their residuals. We show that the residuals of the accuracy metrics (BLEU_n) and error-rate metrics (PER, WER, TER, CDER and HTER) for machine translation post-editing are valid for translator assessment. Using these residuals, we can identify post-editing errors (critical, major and minor) and subsequently utilise them in more detailed linguistic analysis.
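The residual-based flagging described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: it uses a simple word error rate (WER, word-level edit distance divided by reference length) as the per-segment metric, takes residuals against the mean score, and flags segments whose residual exceeds an assumed two-standard-deviation threshold.

```python
def word_edit_distance(a, b):
    """Levenshtein distance between token lists a and b."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def wer(hypothesis, reference):
    """Word error rate of a hypothesis against a reference segment."""
    hyp, ref = hypothesis.split(), reference.split()
    return word_edit_distance(hyp, ref) / max(len(ref), 1)

def suspicious_segments(mt, pe, k=2.0):
    """Indices of segments whose WER residual (score minus mean score)
    exceeds k standard deviations -- candidates for detailed analysis."""
    scores = [wer(m, p) for m, p in zip(mt, pe)]
    mean = sum(scores) / len(scores)
    std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return [i for i, s in enumerate(scores)
            if std > 0 and abs(s - mean) > k * std]
```

In this sketch, a segment whose post-edit diverges far more from the machine translation than the corpus average is flagged; in practice any of the metrics named in the abstract (PER, TER, CDER, HTER, or sentence-level BLEU) could supply the per-segment scores.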
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/en/project/GA19-15498S" target="_blank" >GA19-15498S: Modelling emotions in verbal and nonverbal managerial communication to predict corporate financial risk</a><br>
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Publication year
2021
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
PeerJ Computer Science
ISSN
2376-5992
e-ISSN
—
Volume of the periodical
7
Issue of the periodical within the volume
4.10.2021
Country of publishing house
GB - UNITED KINGDOM
Number of pages
27
Pages from-to
e706
UT code for WoS article
000703684200001
EID of the result in the Scopus database
2-s2.0-85117922663