Testing of detection tools for AI-generated text

Result identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F23%3A00132774" target="_blank" >RIV/00216224:14330/23:00132774 - isvavai.cz</a>

  • Result on the web

    <a href="https://link.springer.com/article/10.1007/s40979-023-00146-z" target="_blank" >https://link.springer.com/article/10.1007/s40979-023-00146-z</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1007/s40979-023-00146-z" target="_blank" >10.1007/s40979-023-00146-z</a>

Alternative languages

  • Result language

    English

  • Title in the original language

    Testing of detection tools for AI-generated text

  • Description of the result in the original language

    Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.

  • Title in English

    Testing of detection tools for AI-generated text

  • Description of the result in English

    Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.
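
    The abstract above describes evaluating detectors by accuracy and error type analysis. As a rough illustration only (not code from the paper), the sketch below shows how such an evaluation might be set up in Python: a detector is run over documents with known authorship, and its accuracy, false positives (human text flagged as AI) and false negatives (AI text passed as human) are tallied. All names, including the placeholder detector, are assumptions made for this sketch.

    # Minimal sketch, not from the paper: accuracy and error-type tally
    # for a binary AI-text detector on documents with known authorship.
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class LabeledDoc:
        text: str
        is_ai_generated: bool  # ground-truth label

    def evaluate_detector(detector: Callable[[str], bool],
                          docs: Iterable[LabeledDoc]) -> dict:
        # detector(text) returns True when it classifies the text as AI-generated.
        tp = tn = fp = fn = 0
        for doc in docs:
            predicted_ai = detector(doc.text)
            if doc.is_ai_generated and predicted_ai:
                tp += 1
            elif doc.is_ai_generated and not predicted_ai:
                fn += 1  # undetected AI text (the bias the study reports)
            elif not doc.is_ai_generated and predicted_ai:
                fp += 1  # human text wrongly flagged as AI
            else:
                tn += 1
        total = tp + tn + fp + fn
        return {"accuracy": (tp + tn) / total if total else 0.0,
                "false_positives": fp,
                "false_negatives": fn}

    # Hypothetical usage with a trivial placeholder detector:
    docs = [LabeledDoc("Essay written by a student.", False),
            LabeledDoc("Essay produced by ChatGPT.", True)]
    naive_detector = lambda text: "ChatGPT" in text  # placeholder only
    print(evaluate_detector(naive_detector, docs))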

Classification

  • Type

    Jimp - Article in a periodical indexed in the Web of Science database

  • CEP field

  • OECD FORD field

    10200 - Computer and information sciences

Result continuities

  • Project

  • Continuities

    I - Institutional support for the long-term conceptual development of a research organisation

Others

  • Year of implementation

    2023

  • Data confidentiality code

    S - Complete and true data about the project are not subject to protection under special legal regulations

Data specific to the result type

  • Name of the periodical

    International Journal for Educational Integrity

  • ISSN

    1833-2595

  • e-ISSN

  • Volume of the periodical

    19

  • Issue of the periodical within the volume

    26

  • Country of the publisher of the periodical

    DE - Federal Republic of Germany

  • Number of pages of the result

    39

  • Pages from-to

    1-39

  • UT WoS code of the article

    001129231700001

  • EID of the result in the Scopus database

    2-s2.0-85180443619