Comprehensive Multiparametric Analysis of Human Deepfake Speech Recognition
Result identifiers
Result code in IS VaVaI
RIV/00216305:26230/24:PU151920 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F24%3APU151920)
Result on the web
https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-024-00641-4
DOI - Digital Object Identifier
10.1186/s13640-024-00641-4 (http://dx.doi.org/10.1186/s13640-024-00641-4)
Alternative languages
Result language
English
Title in the original language
Comprehensive Multiparametric Analysis of Human Deepfake Speech Recognition
Description in the original language
In this paper, we undertake a novel two-pronged investigation into human recognition of deepfake speech, addressing critical gaps in existing research. First, we pioneer an evaluation of the impact of prior information on deepfake recognition, setting our work apart by simulating real-world attack scenarios in which individuals are not warned in advance that they will be exposed to deepfakes. This design captures the unpredictability of real-world deepfake attacks and provides new insight into human vulnerability under realistic conditions. Second, we introduce a novel metric to evaluate the quality of deepfake audio. This metric enables a deeper exploration of how the quality of deepfake speech influences human detection accuracy. By examining both the effect of prior knowledge about deepfakes and the role of deepfake speech quality, our research reveals the importance of these factors, contributes to understanding human vulnerability to deepfakes, and suggests measures for improving human detection skills.
Title in English
Comprehensive Multiparametric Analysis of Human Deepfake Speech Recognition
Description in English
In this paper, we undertake a novel two-pronged investigation into human recognition of deepfake speech, addressing critical gaps in existing research. First, we pioneer an evaluation of the impact of prior information on deepfake recognition, setting our work apart by simulating real-world attack scenarios in which individuals are not warned in advance that they will be exposed to deepfakes. This design captures the unpredictability of real-world deepfake attacks and provides new insight into human vulnerability under realistic conditions. Second, we introduce a novel metric to evaluate the quality of deepfake audio. This metric enables a deeper exploration of how the quality of deepfake speech influences human detection accuracy. By examining both the effect of prior knowledge about deepfakes and the role of deepfake speech quality, our research reveals the importance of these factors, contributes to understanding human vulnerability to deepfakes, and suggests measures for improving human detection skills.
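The record above only summarizes the study; neither the paper's quality metric nor its data are reproduced here. The following minimal Python sketch, with entirely hypothetical placeholder names and numbers, merely illustrates the kind of analysis the abstract describes: relating per-clip deepfake quality scores to human detection accuracy, and comparing listeners who were warned in advance with those who were not.

# Illustrative sketch only: all values below are hypothetical placeholders, not the
# paper's metric or data. It shows two analyses suggested by the abstract:
# (1) how deepfake quality relates to human detection accuracy,
# (2) informed vs. uninformed listener groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-clip quality scores (higher = more natural-sounding deepfake).
quality = rng.uniform(1.0, 5.0, size=50)

# Hypothetical fraction of listeners who correctly flagged each clip as fake,
# assumed here to drop as deepfake quality rises, plus noise.
detection_rate = np.clip(0.9 - 0.15 * (quality - 1.0) + rng.normal(0, 0.05, 50), 0, 1)

# Strength of the relation between deepfake quality and detection accuracy.
r, p = stats.pearsonr(quality, detection_rate)
print(f"quality vs. detection accuracy: r = {r:.2f}, p = {p:.3g}")

# Hypothetical per-listener accuracies for two groups:
# informed (told deepfakes may appear) vs. uninformed (not warned in advance).
informed = rng.normal(0.75, 0.10, size=40)
uninformed = rng.normal(0.60, 0.10, size=40)
t, p_group = stats.ttest_ind(informed, uninformed, equal_var=False)
print(f"informed vs. uninformed listeners: t = {t:.2f}, p = {p_group:.3g}")

Both steps use standard NumPy/SciPy routines; the statistics actually used in the paper may differ entirely.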
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
EURASIP Journal on Image and Video Processing
ISSN
1687-5176
e-ISSN
1687-5281
Journal volume
2024
Issue of the journal within the volume
24
Country of the journal publisher
US - United States of America
Number of pages
25
Pages from-to
1-25
UT WoS code of the article
001302501400001
EID of the result in the Scopus database
2-s2.0-85202737368