A hybrid approach for adversarial attack detection based on sentiment analysis model using Machine learning

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27240%2F24%3A10255602" target="_blank" >RIV/61989100:27240/24:10255602 - isvavai.cz</a>

  • Result on the web

    <a href="https://www.sciencedirect.com/science/article/pii/S2215098624002155?via%3Dihub" target="_blank" >https://www.sciencedirect.com/science/article/pii/S2215098624002155?via%3Dihub</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1016/j.jestch.2024.101829" target="_blank" >10.1016/j.jestch.2024.101829</a>

Alternative languages

  • Result language

    English

  • Original language name

    A hybrid approach for adversarial attack detection based on sentiment analysis model using Machine learning

  • Original language description

    One of the main subfields of Machine Learning (ML) that deals with human language for intelligent applications is Natural Language Processing (NLP). One of the biggest problems NLP models encounter is adversarial attacks, which lead to inaccurate predictions. To increase an NLP model's resilience, adversarial text must be used to examine attacks and defenses. Several strategies for detecting adversarial attacks have been put forth; nonetheless, they face several obstacles, such as low attack success rates on particular datasets. Some other attack methods can already be effectively defended against by existing defensive strategies. As a result, such attacks cannot probe the limitations of NLP models deeply enough to guide future advances in defense. Consequently, it is necessary to develop an adversarial attack strategy with a longer attack duration and better performance. Firstly, we train a Convolutional Neural Network (CNN) on the IMDB dataset, which consists of movie reviews labeled with positive and negative sentiment. The CNN model performs the sentiment classification of the data. Subsequently, adversarial examples are generated from the IMDB dataset using the Fast Gradient Sign Method (FGSM), a popular and effective method in the adversarial machine learning domain. After that, a Long Short-Term Memory (LSTM) model is built on the FGSM-generated adversarial examples to detect adversarial attacks on sentiment analysis systems. The LSTM model was trained on a combination of the original IMDB data and the adversarial examples generated with FGSM. The models are evaluated on standard metrics, including accuracy, precision, and F1-score, and achieve about 95.6% accuracy in detecting adversarial attacks. © 2024 The Authors

    (A minimal code sketch of the pipeline described here follows this list.)

  • Czech name

  • Czech description
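
Illustrative code sketch

The description above outlines a three-stage pipeline: a CNN sentiment classifier trained on IMDB, adversarial examples generated with FGSM, and an LSTM detector trained on a mix of clean and adversarial inputs. The sketch below is a minimal, non-authoritative rendering of that pipeline in PyTorch. The paper's actual framework, architecture, and hyperparameters are not given in this record, so the vocabulary size, embedding dimension, epsilon, and the embedding-space formulation of FGSM (perturbing embeddings rather than discrete tokens) are all illustrative assumptions.

    import torch
    import torch.nn as nn

    VOCAB, EMB, CLASSES, EPS = 20_000, 128, 2, 0.1  # assumed values, not from the paper

    class SentimentCNN(nn.Module):
        """1-D CNN sentiment classifier over word embeddings."""
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)
            self.conv = nn.Conv1d(EMB, 64, kernel_size=5, padding=2)
            self.fc = nn.Linear(64, CLASSES)

        def forward(self, x=None, emb=None):
            # Accept either token ids or precomputed embeddings, so FGSM
            # can perturb the continuous embedding space directly.
            e = self.emb(x) if emb is None else emb           # (B, T, EMB)
            h = torch.relu(self.conv(e.transpose(1, 2)))      # (B, 64, T)
            return self.fc(h.max(dim=2).values)               # (B, CLASSES)

    def fgsm_embeddings(model, tokens, labels, eps=EPS):
        """FGSM on embeddings: e' = e + eps * sign(grad_e loss)."""
        e = model.emb(tokens).detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(emb=e), labels)
        loss.backward()
        return (e + eps * e.grad.sign()).detach()

    class AttackDetectorLSTM(nn.Module):
        """LSTM that flags whether an embedded review looks adversarial."""
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(EMB, 64, batch_first=True)
            self.fc = nn.Linear(64, 2)  # 0 = clean, 1 = adversarial

        def forward(self, emb):
            _, (h, _) = self.lstm(emb)
            return self.fc(h[-1])

    # Illustrative usage with random stand-in data (IMDB loading omitted):
    cnn = SentimentCNN()
    tokens = torch.randint(0, VOCAB, (8, 200))   # 8 reviews, 200 tokens each
    labels = torch.randint(0, CLASSES, (8,))
    adv = fgsm_embeddings(cnn, tokens, labels)   # adversarial embeddings
    clean = cnn.emb(tokens).detach()
    det_in = torch.cat([clean, adv])             # mixed clean/adversarial set
    det_lbl = torch.cat([torch.zeros(8), torch.ones(8)]).long()
    detector = AttackDetectorLSTM()
    loss = nn.functional.cross_entropy(detector(det_in), det_lbl)
    loss.backward()  # one detector training step would follow

In practice the detector would be trained over the full IMDB corpus for multiple epochs and evaluated on held-out clean and adversarial examples; the reported ~95.6% detection accuracy refers to the authors' setup, not to this sketch.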

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    20200 - Electrical engineering, Electronic engineering, Information engineering

Result continuities

  • Project

  • Continuities

    S - Specific research at universities (Specifický výzkum na vysokých školách)

Others

  • Publication year

    2024

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    Engineering Science and Technology, an International Journal

  • ISSN

    2215-0986

  • e-ISSN

    2215-0986

  • Volume of the periodical

    58

  • Issue of the periodical within the volume

    October 2024

  • Country of publishing house

    NL - The Kingdom of the Netherlands

  • Number of pages

    12

  • Pages from-to

  • UT code for WoS article

    001321666500001

  • EID of the result in the Scopus database

    2-s2.0-85204483722