Vulnerability of classifiers to evolutionary generated adversarial examples
Result description
This paper deals with the vulnerability of machine learning models to adversarial examples and its implications for their robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario. In this way, we can find adversarial examples without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models, including deep and shallow neural networks. Our experiments show that vulnerability to adversarial examples is not a problem specific to deep networks; it spreads across various machine learning architectures and depends instead on the type of computational units. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
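For illustration, a minimal Python sketch of such a black-box evolutionary attack, assuming only query access to the model through a batch prediction function model_predict (a stand-in name); the population size, genetic operators, and fitness below are illustrative assumptions, not the paper's exact algorithm:

import numpy as np

def evolve_adversarial(model_predict, x_orig, target_class, *,
                       pop_size=40, generations=200,
                       eps=0.1, mutation_rate=0.05, rng=None):
    """Evolutionary black-box search for an adversarial example.

    model_predict must map a batch of inputs to class-probability
    vectors; only its outputs are used, never its gradients.
    """
    rng = np.random.default_rng(rng)
    # Population of additive perturbations inside an L-infinity ball.
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x_orig.shape)

    for _ in range(generations):
        candidates = np.clip(x_orig + pop, 0.0, 1.0)
        probs = model_predict(candidates)           # the only model access
        fitness = probs[:, target_class]            # push toward the wrong class
        best = np.argsort(fitness)[::-1]

        if probs[best[0]].argmax() == target_class:  # success: misclassified
            return candidates[best[0]]

        # Elitist selection: keep the top half, refill by crossover + mutation.
        parents = pop[best[: pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(x_orig.shape) < 0.5    # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(x_orig.shape) < mutation_rate
            child = np.where(mutate, rng.uniform(-eps, eps, x_orig.shape), child)
            children.append(child)
        pop = np.concatenate([parents, np.stack(children)])

    return None  # no adversarial example found within the query budget

Because the search reads only the model's output probabilities, the same loop applies unchanged to deep networks, shallow networks, or kernel machines.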
Keywords
supervised learning; neural networks; kernel methods; genetic algorithms; adversarial examples
The result's identifiers
Result code in IS VaVaI
Result on the web
DOI - Digital Object Identifier
Alternative languages
Result language
English
Original language name
Vulnerability of classifiers to evolutionary generated adversarial examples
Original language description
This paper deals with the vulnerability of machine learning models to adversarial examples and its implications for their robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario. In this way, we can find adversarial examples without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models, including deep and shallow neural networks. Our experiments show that vulnerability to adversarial examples is not a problem specific to deep networks; it spreads across various machine learning architectures and depends instead on the type of computational units. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
Czech name
—
Czech description
—
Classification
Type
Jimp - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
GA18-23827S: Capabilities and limitations of shallow and deep networks
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Publication year
2020
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
Neural Networks
ISSN
0893-6080
e-ISSN
—
Volume of the periodical
127
Issue of the periodical within the volume
July
Country of publishing house
GB - UNITED KINGDOM
Number of pages
14
Pages from-to
168-181
UT code for WoS article
000536453100016
EID of the result in the Scopus database
2-s2.0-85083895880