
Smart search

  • Use +word to require a specific word in the results
  • Use -word to exclude a word from the results
  • Use "quotation marks" to find an exact phrase
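The three operators can be illustrated with a small matcher. This is a toy sketch of the rules as stated above, not the portal's actual query engine; the function name `matches` and the parsing details are assumptions:

```python
import re

def matches(query: str, text: str) -> bool:
    """Toy matcher: +word requires a word, -word excludes one,
    and "double quotes" require an exact phrase."""
    haystack = text.lower()
    # Quoted phrases are matched verbatim; the remainder is split into +/- terms.
    phrases = re.findall(r'"([^"]+)"', query)
    terms = re.sub(r'"[^"]+"', " ", query).split()
    if any(p.lower() not in haystack for p in phrases):
        return False
    for term in terms:
        if term.startswith("+") and term[1:].lower() not in haystack:
            return False
        if term.startswith("-") and term[1:].lower() in haystack:
            return False
    return True

print(matches('+adversarial -survey "genetic algorithm"',
              'A genetic algorithm for adversarial examples'))  # True
```

A query such as `+adversarial -survey "genetic algorithm"` then keeps only texts that contain "adversarial", lack "survey", and include the exact phrase.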

51,775 results (0.41 s)

Result

Vulnerability of Machine Learning Models to Adversarial Examples

We propose a genetic algorithm for generating adversarial examples for machine learning models. Such an approach is able to find adversarial examples without the access ... kernels tend to be rather robust and not prone t...

IN - Informatics

  • 2016
  • D
  • Link
Result

Vulnerability of classifiers to evolutionary generated adversarial examples

... to adversarial examples and its implications for robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any ... adversarial examples ...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2020
  • Jimp
  • Link
Result

When Should You Defend Your Classifier? A Game-Theoretical Analysis of Countermeasures Against Adversarial Examples

Adversarial machine learning, i.e., increasing the robustness of machine learning algorithms against so-called adversarial examples, is now an established field ... adversarial examples ... where costs for ...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2021
  • D
  • Link
Result

Evolutionary Generation of Adversarial Examples for Deep and Shallow Machine Learning Models

Studying the vulnerability of machine learning models to adversarial examples. In this paper, we propose a genetic algorithm for generating adversarial examples for machine learning models. Such an approach is able to fin...

IN - Informatics

  • 2016
  • D
  • Link
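The snippets above describe evolutionary search for adversarial examples. A minimal sketch of the idea against a toy linear classifier follows; the classifier, the score-based fitness, and every GA setting here are illustrative assumptions, not the papers' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box" target: we may query the decision score, but not its gradients.
# (Classifier, sizes, and GA settings below are illustrative assumptions.)
w = rng.normal(size=20)
score = lambda x: x @ w
label = lambda x: int(score(x) > 0)

x_clean = rng.normal(size=20)
target = 1 - label(x_clean)          # non-targeted attack: just flip the label

def fitness(delta):
    # Push the score toward the target class, lightly penalizing perturbation size.
    s = score(x_clean + delta)
    return (s if target == 1 else -s) - 0.01 * np.linalg.norm(delta)

pop = rng.normal(scale=0.05, size=(40, 20))        # initial random perturbations
for _ in range(150):
    ranked = pop[np.argsort([fitness(d) for d in pop])]
    parents = ranked[-10:]                         # selection: keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)] \
        + rng.normal(scale=0.05, size=(30, 20))    # mutation around parents
    pop = np.vstack([parents, children])           # elitism: parents survive

best = max(pop, key=fitness)
print(label(x_clean), "->", label(x_clean + best))
```

Because the loop only queries the model's score and never differentiates it, the same search applies to models where gradient-based attacks are unavailable.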
Result

Deep Networks with RBF Layers to Prevent Adversarial Examples

We propose a simple way to increase the robustness of deep neural network models to adversarial examples. A new architecture obtained by stacking a deep neural network and an RBF network is proposed. Experiments show that suc...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2018
  • D
  • Link
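The stacking idea above can be sketched with a Gaussian RBF layer applied to deep-network features. Everything here (dimensions, random centers, the width `beta`) is an illustrative assumption; in the paper's setting the features come from a trained network and the RBF parameters are learned:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: `features` plays the role of a trained deep network's
# penultimate-layer output; centers and width would normally be trained.
features = rng.normal(size=(5, 16))   # 5 inputs, 16-dimensional deep features
centers = rng.normal(size=(8, 16))    # 8 RBF units
beta = 0.5                            # RBF width parameter

def rbf_layer(z):
    # Gaussian units: activation decays with distance from each center, so
    # inputs far from the data (as many adversarial ones are) excite no unit.
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2)

act = rbf_layer(features)
print(act.shape)  # (5, 8)
```

A linear readout on `act` would complete the stacked model; the local (distance-based) response is what distinguishes the RBF layer from the unbounded activations of ordinary dense layers.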
Result

Using Adversarial Examples in Natural Language Processing

... to adversarial examples. These are artificially constructed examples which evince two main ... This paper investigates the effect of using adversarial examples during training and the dataset characteris...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2018
  • D
  • Link
Result

Using Local Convolutional Units to Defend against Adversarial Examples

Deep neural networks are known to be sensitive to adversarial examples - inputs that are created in such a way that they are similar (when viewed by people) to clean inputs ... by the new layer (called RBFolutional), we obtain better robu...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2019
  • D
  • Link
Result

Black-box Evolutionary Search for Adversarial Examples against Deep Image Classifiers in Non-Targeted Attacks

Machine learning models exhibit vulnerability to adversarial examples, i.e., artificially created inputs that become misinterpreted. The goal of this paper is to explore non-targeted black-box adversarial attacks on deep net...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2020
  • D
  • Link
Result

Adversarial Examples by Perturbing High-level Features in Intermediate Decoder Layers

We propose a novel method for creating adversarial examples. Instead ... as an optimization problem by minimizing the Wasserstein distance between the adversarial ... are feasible, and our method always generates adversarial ...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2022
  • D
  • Link
Result

Towards a Robust Deep Neural Network Against Adversarial Texts: A Survey

... to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations to legitimate inputs. Recently, studies have revealed ... adversarial examples ... give a comprehensive survey on the exis...

Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

  • 2023
  • JSC
  • Link
  • 1-10 out of 51,775