J 2023

Testing of detection tools for AI-generated text

WEBER-WULFF, Debora, Alla ANOHINA-NAUMECA, Sonja BJELOBABA, Tomáš FOLTÝNEK, Jean GUERRERO-DIB et al.

Basic information

Original name

Testing of detection tools for AI-generated text

Authors

WEBER-WULFF, Debora (276 Germany), Alla ANOHINA-NAUMECA (428 Latvia), Sonja BJELOBABA (752 Sweden), Tomáš FOLTÝNEK (203 Czech Republic, guarantor, belonging to the institution), Jean GUERRERO-DIB (484 Mexico), Olumide POPOOLA, Petr ŠIGUT (703 Slovakia, belonging to the institution) and Lorna WADDINGTON

Published in

International Journal for Educational Integrity, 2023, ISSN 1833-2595

Further information

Language

English

Type of result

Article in a peer-reviewed journal

Field of study

10200 1.2 Computer and information sciences

Publisher's country

Germany

Confidentiality

not subject to state or trade secrecy

Links

Impact factor

Impact factor: 4.600 in 2022

RIV code

RIV/00216224:14330/23:00132774

Organizational unit

Faculty of Informatics

UT WoS

001129231700001

Keywords in English

Artificial intelligence; Generative pre-trained transformers; Machine-generated text; Detection of AI-generated text; Academic integrity; ChatGPT; AI detectors

Tags

International impact, Reviewed
Changed: 4 Jan 2024, 16:55, Mgr. Tomáš Foltýnek, Ph.D.

Abstract

In the original language

Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and are mainly biased towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of the tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the results of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.