A Regularization Post Layer: An Additional Way how to Make Deep Neural Networks Robust
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F17%3A43932981" target="_blank" >RIV/49777513:23520/17:43932981 - isvavai.cz</a>
Result on the web
<a href="https://link.springer.com/chapter/10.1007/978-3-319-68456-7_17#citeas" target="_blank" >https://link.springer.com/chapter/10.1007/978-3-319-68456-7_17#citeas</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/978-3-319-68456-7_17" target="_blank" >10.1007/978-3-319-68456-7_17</a>
Alternative languages
Result language
English
Original language name
A Regularization Post Layer: An Additional Way how to Make Deep Neural Networks Robust
Original language description
Neural Networks (NNs) are prone to overfitting, especially Deep Neural Networks in cases where the training data are not abundant. Several techniques help prevent overfitting, e.g., L1/L2 regularization, unsupervised pre-training, early stopping, dropout, bootstrapping, or aggregation of cross-validation models. In this paper, we propose a regularization post-layer that may be combined with the prior techniques and brings additional robustness to the NN. We train the regularization post-layer in the cross-validation (CV) aggregation scenario: we use the CV held-out folds to train an additional neural network post-layer that boosts the network's robustness. We tested various post-layer topologies and compared the results with other regularization techniques. As a benchmark task, we selected TIMIT phone recognition, a well-known and still popular task in which the training data are limited and the employed regularization techniques play a key role. However, the regularization post-layer is a general method and may be employed in any classification task.
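To illustrate the CV-aggregation training of the post-layer described above, here is a minimal sketch using scikit-learn MLPs as stand-ins for the base DNNs and the post-layer. The model sizes, the use of class posteriors as post-layer inputs, and the averaging of base-model outputs at test time are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def train_cv_post_layer(X, y, n_folds=5):
    """Train K base networks on CV folds and a post-layer on their held-out outputs."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    base_models = []
    held_out_feats, held_out_labels = [], []

    for train_idx, heldout_idx in kf.split(X):
        # Base network trained on the K-1 training folds.
        base = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=200)
        base.fit(X[train_idx], y[train_idx])
        base_models.append(base)

        # Class posteriors on the held-out fold become the post-layer's
        # training inputs (an assumed feature choice for this sketch).
        held_out_feats.append(base.predict_proba(X[heldout_idx]))
        held_out_labels.append(y[heldout_idx])

    # Post-layer: a small network trained only on held-out outputs,
    # so it never sees data the base networks were fitted on.
    post_layer = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200)
    post_layer.fit(np.vstack(held_out_feats), np.concatenate(held_out_labels))
    return base_models, post_layer

def predict_with_post_layer(base_models, post_layer, X_test):
    # Average the base networks' posteriors (one plausible aggregation
    # choice) and pass the result through the post-layer.
    avg_probs = np.mean([m.predict_proba(X_test) for m in base_models], axis=0)
    return post_layer.predict(avg_probs)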
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
20205 - Automation and control systems
Result continuities
Project
<a href="/en/project/GBP103%2F12%2FG084" target="_blank" >GBP103/12/G084: Center for Large Scale Multi-modal Data Interpretation</a>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2017
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Statistical Language and Speech Processing: 5th International Conference, SLSP 2017, Le Mans, France, October 23–25, 2017, Proceedings
ISBN
978-3-319-68455-0
ISSN
0302-9743
e-ISSN
not specified
Number of pages
11
Pages from-to
204-214
Publisher name
Springer
Place of publication
Cham
Event location
Le Mans, France
Event date
Oct 23, 2017
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—