Using Adversarial Examples in Natural Language Processing
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F18%3A10390133" target="_blank" >RIV/00216208:11320/18:10390133 - isvavai.cz</a>
Result on the web
<a href="http://www.lrec-conf.org/proceedings/lrec2018/summaries/852.html" target="_blank" >http://www.lrec-conf.org/proceedings/lrec2018/summaries/852.html</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
Using Adversarial Examples in Natural Language Processing
Result description in the original language
Machine learning models have been providing promising results in many fields, including natural language processing. These models are, nevertheless, prone to adversarial examples. These are artificially constructed examples that exhibit two main features: they resemble the real training data, yet they deceive an already trained model. This paper investigates the effect of using adversarial examples during the training of recurrent neural networks whose text input is a sequence of word/character embeddings. The effects are studied on a compilation of eight NLP datasets whose interfaces were unified for quick experimentation. Based on the experiments and the dataset characteristics, we conclude that using adversarial examples for NLP tasks modeled by recurrent neural networks has a regularization effect and enables the training of models with a greater number of parameters without overfitting. In addition, we discuss which combinations of datasets and model settings might benefit from adversarial training.
Title in English
Using Adversarial Examples in Natural Language Processing
Result description in English
Machine learning models have been providing promising results in many fields, including natural language processing. These models are, nevertheless, prone to adversarial examples. These are artificially constructed examples that exhibit two main features: they resemble the real training data, yet they deceive an already trained model. This paper investigates the effect of using adversarial examples during the training of recurrent neural networks whose text input is a sequence of word/character embeddings. The effects are studied on a compilation of eight NLP datasets whose interfaces were unified for quick experimentation. Based on the experiments and the dataset characteristics, we conclude that using adversarial examples for NLP tasks modeled by recurrent neural networks has a regularization effect and enables the training of models with a greater number of parameters without overfitting. In addition, we discuss which combinations of datasets and model settings might benefit from adversarial training.
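The recipe the abstract describes, perturbing the input embeddings during training so the model also learns from loss-increasing examples, can be illustrated in a few lines. Below is a minimal PyTorch sketch assuming an FGSM-style perturbation (Goodfellow et al.) applied to the embedding layer of a toy LSTM classifier; the model, the epsilon value, and the dummy batch are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Toy recurrent classifier that consumes embeddings directly,
    so the embeddings can be perturbed before the forward pass."""
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, emb):
        # emb: (batch, seq_len, emb_dim); classify from the last hidden state
        _, (h, _) = self.lstm(emb)
        return self.fc(h[-1])

model = LSTMClassifier()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())
epsilon = 0.1  # hypothetical perturbation magnitude

# Dummy batch: 32 sequences of 20 token ids with binary labels.
tokens = torch.randint(0, 1000, (32, 20))
labels = torch.randint(0, 2, (32,))

emb = model.embed(tokens)
clean_loss = loss_fn(model(emb), labels)

# The gradient of the loss w.r.t. the embeddings points in the direction
# where a small perturbation hurts the model most.
grad, = torch.autograd.grad(clean_loss, emb, retain_graph=True)
adv_emb = emb + epsilon * grad.sign()  # FGSM-style adversarial embeddings
adv_loss = loss_fn(model(adv_emb), labels)

# Train on the clean and adversarial losses together; the adversarial
# term acts as the regularizer the abstract refers to.
opt.zero_grad()
(clean_loss + adv_loss).backward()
opt.step()
```

A common variant (Miyato et al.) normalizes the embeddings and scales the perturbation by the gradient's L2 norm rather than its sign; the paper's exact recipe may differ from this sketch.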
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result links
Project
—
Links
S - Specific research at universities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of publication
2018
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the proceedings
Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018)
ISBN
979-10-95546-00-9
ISSN
—
e-ISSN
not specified
Number of pages
8
Pages from-to
3693-3700
Publisher name
European Language Resources Association
Place of publication
Paris, France
Event location
Miyazaki, Japan
Event date
7 May 2018
Event type by nationality
WRD - Worldwide event
UT WoS article code
—