Security Implications of Deepfakes in Face Authentication
Result identifiers
Result code in IS VaVaI
RIV/00216305:26230/24:PU150904 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F24%3APU150904)
Result on the web
https://dl.acm.org/doi/10.1145/3605098.3635953
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1145/3605098.3635953" target="_blank" >10.1145/3605098.3635953</a>
Alternative languages
Result language
English
Title in original language
Security Implications of Deepfakes in Face Authentication
Result description in original language
Deepfakes are media generated by deep learning that are nearly indistinguishable from real content to humans. They have surged in popularity in recent years, and numerous papers have discussed their effectiveness in deceiving people. Equally, if not more, concerning is the potential vulnerability of facial and voice recognition systems to deepfakes. The misuse of deepfakes to spoof automated facial recognition systems can threaten various aspects of our lives, including financial security and access to secure locations, yet this issue remains largely unexplored. This paper therefore investigates the technical feasibility of a deepfake spoofing attack on facial recognition. First, we perform a threat analysis to determine which facial recognition use cases allow the execution of deepfake spoofing attacks. Based on this analysis, we define the attacker model for such attacks on facial recognition systems. We then demonstrate the ability of deepfakes to spoof two commercial facial recognition systems. Finally, we discuss possible means of preventing such spoofing attacks.
Title in English
Security Implications of Deepfakes in Face Authentication
Result description in English
Deepfakes are media generated by deep learning that are nearly indistinguishable from real content to humans. They have surged in popularity in recent years, and numerous papers have discussed their effectiveness in deceiving people. Equally, if not more, concerning is the potential vulnerability of facial and voice recognition systems to deepfakes. The misuse of deepfakes to spoof automated facial recognition systems can threaten various aspects of our lives, including financial security and access to secure locations, yet this issue remains largely unexplored. This paper therefore investigates the technical feasibility of a deepfake spoofing attack on facial recognition. First, we perform a threat analysis to determine which facial recognition use cases allow the execution of deepfake spoofing attacks. Based on this analysis, we define the attacker model for such attacks on facial recognition systems. We then demonstrate the ability of deepfakes to spoof two commercial facial recognition systems. Finally, we discuss possible means of preventing such spoofing attacks.
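For illustration only (this sketch is not from the paper or the RIV record): the target of such a deepfake spoofing attack is typically an embedding-based verification decision of the kind below. The embed() function and the 0.6 threshold are hypothetical placeholders standing in for a real face-embedding model and its tuned decision threshold.

```python
# Illustrative sketch, not the paper's method: a generic embedding-based
# face verification check of the kind a deepfake spoofing attack targets.
import numpy as np

THRESHOLD = 0.6  # assumed decision threshold; real systems tune this per model


def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a face-embedding model: maps an image
    to an L2-normalised feature vector."""
    vec = image.astype(np.float32).ravel()[:128]   # toy "feature extraction"
    return vec / (np.linalg.norm(vec) + 1e-9)


def verify(enrolled_image: np.ndarray, probe_image: np.ndarray) -> bool:
    """Accept the probe (a live camera frame, or a deepfake of the enrolled
    person) if its embedding lies close enough to the enrolled embedding."""
    distance = np.linalg.norm(embed(enrolled_image) - embed(probe_image))
    return distance < THRESHOLD


# A spoof succeeds when a deepfake probe of the victim passes verify()
# exactly as a genuine frame of the victim would.
```

Countermeasures such as liveness or presentation-attack detection would generally be placed in front of a check of this kind.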
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
S - Specific university research
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of the ACM Symposium on Applied Computing
ISBN
979-8-4007-0243-3
ISSN
—
e-ISSN
—
Number of pages
9
Pages from-to
1376-1384
Publisher name
Association for Computing Machinery
Place of publication
Avila
Event location
Avila
Event date
8 April 2024
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
001236958200199