Vulnerability of Machine Learning Models to Adversarial Examples
We propose a genetic algorithm for generating adversarial examples for machine learning models. Such an approach is able to find adversarial examples without access ... kernels tend to be rather robust and not prone t...
IN - Informatics
- Year of publication: 2016
- D - Conference proceedings paper
- Link: Result on the web
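Several entries in this list describe genetic or evolutionary search for adversarial examples that queries only the model's predictions, never its gradients. A minimal sketch of that idea, assuming a toy softmax classifier as a stand-in for the attacked model (the model, population size, mutation scale, and penalty weight below are illustrative choices, not taken from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in black-box model: a fixed random linear softmax classifier.
# (Assumption for this sketch; the papers attack real trained models.)
W = rng.normal(size=(10, 3))

def predict_proba(x):
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

def genetic_attack(x0, pop_size=30, generations=300, sigma=0.1, penalty=0.05):
    """Evolve perturbed copies of x0 until the predicted class flips.

    Fitness rewards a low probability of the original class and a small
    perturbation; only the model's output probabilities are queried.
    """
    orig = int(np.argmax(predict_proba(x0)))
    population = [x0 + rng.normal(scale=sigma, size=x0.shape)
                  for _ in range(pop_size)]
    for _ in range(generations):
        scores = [predict_proba(x)[orig] + penalty * np.linalg.norm(x - x0)
                  for x in population]
        order = np.argsort(scores)
        best = population[order[0]]
        if int(np.argmax(predict_proba(best))) != orig:
            return best  # misclassified by the model, i.e. adversarial
        # Keep the fitter half, refill with mutated copies of the survivors.
        survivors = [population[i] for i in order[:pop_size // 2]]
        population = survivors + [s + rng.normal(scale=sigma, size=x0.shape)
                                  for s in survivors]
    return None  # no adversarial example found within the budget

x0 = rng.normal(size=10)
adv = genetic_attack(x0)
```

Because fitness uses only `predict_proba`, the same loop applies to any model exposing class probabilities, which is the black-box setting these papers study.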
Vulnerability of classifiers to evolutionary generated adversarial examples
...to adversarial examples and its implications for robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any ... adversarial examples ...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2020
- Jimp - Journal article indexed in the Web of Science
- Link: Result on the web
When Should You Defend Your Classifier? A Game-Theoretical Analysis of Countermeasures Against Adversarial Examples
Adversarial machine learning, i.e., increasing the robustness of machine learning algorithms against so-called adversarial examples, is now an established field ... where costs for...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2021
- D - Conference proceedings paper
- Link: Result on the web
Evolutionary Generation of Adversarial Examples for Deep and Shallow Machine Learning Models
Studying the vulnerability of machine learning models to adversarial examples. In this paper, we propose a genetic algorithm for generating adversarial examples for machine learning models. Such an approach is able to fin...
IN - Informatics
- Year of publication: 2016
- D - Conference proceedings paper
- Link: Result on the web
Deep Networks with RBF Layers to Prevent Adversarial Examples
We propose a simple way to increase the robustness of deep neural network models to adversarial examples. A new architecture obtained by stacking a deep neural network and an RBF network is proposed. It is shown in experiments that suc...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2018
- D - Conference proceedings paper
- Link: Result on the web
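The RBF idea in the entry above can be sketched as follows: a layer of Gaussian radial basis units is stacked on top of the network, so inputs whose features lie far from all learned centres (as adversarial inputs often do) activate no unit strongly, unlike unbounded linear logits. The feature vectors and centres below are illustrative stand-ins, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_layer(features, centers, beta=1.0):
    """Gaussian RBF units: exp(-beta * ||f - c||^2) for each center c.

    Activations peak at 1 when a feature vector sits exactly on a
    center and decay toward 0 far away from every center.
    """
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-beta * d2)

# Illustrative stand-ins: 'features' play the role of a deep network's
# penultimate-layer outputs, 'centers' of per-class prototypes.
features = rng.normal(size=(5, 16))
centers = features[:3]
act = rbf_layer(features, centers)          # shape (5, 3), values in (0, 1]
far = rbf_layer(features + 10.0, centers)   # distant inputs -> near-zero units
```

The bounded, locally supported activations are what makes such a layer reject out-of-distribution inputs instead of extrapolating confidently.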
Using Adversarial Examples in Natural Language Processing
...to adversarial examples. These are artificially constructed examples which evince two main ... This paper investigates the effect of using adversarial examples during training and the dataset characteris...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2018
- D - Conference proceedings paper
- Link: Result on the web
Using Local Convolutional Units to Defend against Adversarial Examples
Deep neural networks are known to be sensitive to adversarial examples - inputs created in such a way that they are similar (if viewed by people) to clean inputs ... By the new layer (called RBFolutional), we obtain better robu...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2019
- D - Conference proceedings paper
- Link: Result on the web
Black-box Evolutionary Search for Adversarial Examples against Deep Image Classifiers in Non-Targeted Attacks
Machine learning models exhibit vulnerability to adversarial examples, i.e., artificially created inputs that become misinterpreted. The goal of this paper is to explore non-targeted black-box adversarial attacks on deep net...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2020
- D - Conference proceedings paper
- Link: Result on the web
Adversarial Examples by Perturbing High-level Features in Intermediate Decoder Layers
We propose a novel method for creating adversarial examples. Instead ... as an optimization problem by minimizing the Wasserstein distance between the adversarial ... are feasible, and our method always generates adversarial ...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2022
- D - Conference proceedings paper
- Link: Result on the web
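The entry above casts adversarial example generation as minimizing a Wasserstein distance between the adversarial and clean inputs. For two equally sized 1-D samples, the empirical 1-Wasserstein (earth mover's) distance reduces to a mean absolute difference of sorted values; a minimal sketch of that quantity (the reduction to 1-D samples is an illustration, not the paper's formulation):

```python
import numpy as np

def wasserstein_1d(u, v):
    """Empirical 1-Wasserstein distance between two equally sized 1-D
    samples: match sorted values pairwise and average the transport
    cost |u_(i) - v_(i)|.
    """
    u = np.sort(np.asarray(u, dtype=float))
    v = np.sort(np.asarray(v, dtype=float))
    return float(np.abs(u - v).mean())

# Shifting a sample by a constant moves every unit of mass by that amount.
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # -> 1.0
```

Unlike an elementwise L2 norm, this cost depends only on the distributions of values, which is why Wasserstein objectives tolerate perturbations that merely rearrange mass.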
Towards a Robust Deep Neural Network Against Adversarial Texts: A Survey
...to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations into legitimate inputs. Recently, studies have revealed adversarial examples ... give a comprehensive survey on the exis...
Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
- Year of publication: 2023
- JSC - Journal article indexed in SCOPUS
- Link: Result on the web
1 - 10 out of 51 775 results