Towards a Robust Deep Neural Network Against Adversarial Texts: A Survey

Result identifiers

  • Result code in IS VaVaI

    RIV/00216208:11320/23:HZUTKZCW (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AHZUTKZCW)

  • Result on the web

    https://www.scopus.com/inward/record.uri?eid=2-s2.0-85118115176&doi=10.1109%2fTKDE.2021.3117608&partnerID=40&md5=49d7f261bcfc6933213668123cfc6c27

  • DOI - Digital Object Identifier

    10.1109/tkde.2021.3117608 (http://dx.doi.org/10.1109/tkde.2021.3117608)

Alternative languages

  • Result language

    English

  • Title in the original language

    Towards a Robust Deep Neural Network Against Adversarial Texts: A Survey

  • Description of the result in the original language

    "Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing (NLP)). However, researchers have demonstrated that DNN-based models are vulnerable to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations into legitimate inputs. Recently, studies have revealed adversarial examples in the text domain, which could effectively evade various DNN-based text analyzers and further bring the threats of the proliferation of disinformation. In this paper, we give a comprehensive survey on the existing studies of adversarial techniques for generating adversarial texts written by both English and Chinese characters and the corresponding defense methods. More importantly, we hope that our work could inspire future studies to develop more robust DNN-based text analyzers against known and unknown adversarial techniques. We classify the existing adversarial techniques for crafting adversarial texts based on the perturbation units, helping to better understand the generation of adversarial texts and build robust models for defense. In presenting the taxonomy of adversarial attacks and defenses in the text domain, we introduce the adversarial techniques from the perspective of different NLP tasks. Finally, we discuss the existing challenges of adversarial attacks and defenses in texts and present the future research directions in this emerging and challenging field. © 1989-2012 IEEE."

  • Title in English

    Towards a Robust Deep Neural Network Against Adversarial Texts: A Survey

  • Description of the result in English

    "Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing (NLP)). However, researchers have demonstrated that DNN-based models are vulnerable to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations into legitimate inputs. Recently, studies have revealed adversarial examples in the text domain, which could effectively evade various DNN-based text analyzers and further bring the threats of the proliferation of disinformation. In this paper, we give a comprehensive survey on the existing studies of adversarial techniques for generating adversarial texts written by both English and Chinese characters and the corresponding defense methods. More importantly, we hope that our work could inspire future studies to develop more robust DNN-based text analyzers against known and unknown adversarial techniques. We classify the existing adversarial techniques for crafting adversarial texts based on the perturbation units, helping to better understand the generation of adversarial texts and build robust models for defense. In presenting the taxonomy of adversarial attacks and defenses in the text domain, we introduce the adversarial techniques from the perspective of different NLP tasks. Finally, we discuss the existing challenges of adversarial attacks and defenses in texts and present the future research directions in this emerging and challenging field. © 1989-2012 IEEE."

Classification

  • Type

    J_SC - Article in a periodical indexed in the SCOPUS database

  • CEP field

  • OECD FORD field

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Other

  • Year of implementation

    2023

  • Data confidentiality code

    S - Complete and truthful data on the project are not subject to protection under special legal regulations

Data specific to the result type

  • Periodical name

    IEEE Transactions on Knowledge and Data Engineering

  • ISSN

    1041-4347

  • e-ISSN

  • Periodical volume

    35

  • Issue of the periodical within the volume

    3

  • Country of the periodical publisher

    US - United States of America

  • Number of pages of the result

    21

  • Pages from-to

    3159-3179

  • UT WoS code of the article

  • EID of the result in the Scopus database

    2-s2.0-85118115176