Translation Performance from the User's Perspective of Large Language Models and Neural Machine Translation Systems

Result identifiers

  • Result code in IS VaVaI

    RIV/00216208:11320/23:ULBVJH3Z - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AULBVJH3Z)

  • Result on the web

    https://www.webofscience.com/wos/woscc/summary/e0b8ef34-8e6b-412a-9b8f-87607433ed44-bb92f483/relevance/1

  • DOI - Digital Object Identifier

    10.3390/info14100574 (http://dx.doi.org/10.3390/info14100574)

Alternative languages

  • Result language

    English

  • Title in the original language

    Translation Performance from the User's Perspective of Large Language Models and Neural Machine Translation Systems

  • Description of the result in the original language

    "The rapid global expansion of ChatGPT, which plays a crucial role in interactive knowledge sharing and translation, underscores the importance of comparative performance assessments in artificial intelligence (AI) technology. This study concentrated on this crucial issue by exploring and contrasting the translation performances of large language models (LLMs) and neural machine translation (NMT) systems. For this aim, the APIs of Google Translate, Microsoft Translator, and OpenAI's ChatGPT were utilized, leveraging parallel corpora from the Workshop on Machine Translation (WMT) 2018 and 2020 benchmarks. By applying recognized evaluation metrics such as BLEU, chrF, and TER, a comprehensive performance analysis across a variety of language pairs, translation directions, and reference token sizes was conducted. The findings reveal that while Google Translate and Microsoft Translator generally surpass ChatGPT in terms of their BLEU, chrF, and TER scores, ChatGPT exhibits superior performance in specific language pairs. Translations from non-English to English consistently yielded better results across all three systems compared with translations from English to non-English. Significantly, an improvement in translation system performance was observed as the token size increased, hinting at the potential benefits of training models on larger token sizes."

  • Title in English

    Translation Performance from the User's Perspective of Large Language Models and Neural Machine Translation Systems

  • Description of the result in English

    "The rapid global expansion of ChatGPT, which plays a crucial role in interactive knowledge sharing and translation, underscores the importance of comparative performance assessments in artificial intelligence (AI) technology. This study concentrated on this crucial issue by exploring and contrasting the translation performances of large language models (LLMs) and neural machine translation (NMT) systems. For this aim, the APIs of Google Translate, Microsoft Translator, and OpenAI's ChatGPT were utilized, leveraging parallel corpora from the Workshop on Machine Translation (WMT) 2018 and 2020 benchmarks. By applying recognized evaluation metrics such as BLEU, chrF, and TER, a comprehensive performance analysis across a variety of language pairs, translation directions, and reference token sizes was conducted. The findings reveal that while Google Translate and Microsoft Translator generally surpass ChatGPT in terms of their BLEU, chrF, and TER scores, ChatGPT exhibits superior performance in specific language pairs. Translations from non-English to English consistently yielded better results across all three systems compared with translations from English to non-English. Significantly, an improvement in translation system performance was observed as the token size increased, hinting at the potential benefits of training models on larger token sizes."

Classification

  • Type

    Jost - Other articles in peer-reviewed periodicals

  • CEP field

  • OECD FORD field

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2023

  • Data confidentiality code

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for the result type

  • Name of the periodical

    "INFORMATION"

  • ISSN

    2078-2489

  • e-ISSN

  • Volume of the periodical

    14

  • Issue of the periodical within the volume

    10

  • Country of the periodical's publisher

    US - United States of America

  • Number of pages of the result

    18

  • Pages from-to

    574-591

  • UT WoS code of the article

    001090032700001

  • EID of the result in the Scopus database