A Regularization Post Layer: An Additional Way how to Make Deep Neural Networks Robust
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F17%3A43932981" target="_blank" >RIV/49777513:23520/17:43932981 - isvavai.cz</a>
Result on the web
<a href="https://link.springer.com/chapter/10.1007/978-3-319-68456-7_17#citeas" target="_blank" >https://link.springer.com/chapter/10.1007/978-3-319-68456-7_17#citeas</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/978-3-319-68456-7_17" target="_blank" >10.1007/978-3-319-68456-7_17</a>
Alternative languages
Result language
English
Title in original language
A Regularization Post Layer: An Additional Way how to Make Deep Neural Networks Robust
Description in original language
Neural networks (NNs) are prone to overfitting, especially deep neural networks when training data are not abundant. Several techniques help prevent overfitting, e.g., L1/L2 regularization, unsupervised pre-training, early stopping, dropout, bootstrapping, or cross-validation model aggregation. In this paper, we propose a regularization post-layer that may be combined with the prior techniques and brings additional robustness to the NN. We train the regularization post-layer in the cross-validation (CV) aggregation scenario: the CV held-out folds are used to train an additional neural network post-layer that boosts the network's robustness. We tested various post-layer topologies and compared the results with other regularization techniques. As a benchmark task, we selected TIMIT phone recognition, a well-known and still popular task where training data are limited and the regularization techniques used play a key role. However, the regularization post-layer is a general method, and it may be employed in any classification task.
Title in English
A Regularization Post Layer: An Additional Way how to Make Deep Neural Networks Robust
Description in English
Neural networks (NNs) are prone to overfitting, especially deep neural networks when training data are not abundant. Several techniques help prevent overfitting, e.g., L1/L2 regularization, unsupervised pre-training, early stopping, dropout, bootstrapping, or cross-validation model aggregation. In this paper, we propose a regularization post-layer that may be combined with the prior techniques and brings additional robustness to the NN. We train the regularization post-layer in the cross-validation (CV) aggregation scenario: the CV held-out folds are used to train an additional neural network post-layer that boosts the network's robustness. We tested various post-layer topologies and compared the results with other regularization techniques. As a benchmark task, we selected TIMIT phone recognition, a well-known and still popular task where training data are limited and the regularization techniques used play a key role. However, the regularization post-layer is a general method, and it may be employed in any classification task.
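The description covers the method only at a high level. As a loose illustration of the CV-aggregation idea, the sketch below trains base networks on cross-validation folds, collects their held-out log-posteriors, and fits a small post-layer network on those held-out outputs. It is a minimal sketch under stated assumptions: the scikit-learn models, topologies, and hyperparameters are placeholders and do not reproduce the paper's actual TIMIT phone-recognition setup or post-layer topology.

```python
# Minimal sketch of a regularization post-layer trained via CV aggregation.
# Assumptions: X, y are NumPy arrays; topologies/hyperparameters are
# illustrative placeholders, not the paper's configuration.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier


def train_with_post_layer(X, y, n_folds=5):
    n_classes = len(np.unique(y))
    base_models = []
    # Held-out fold predictions become the post-layer's training inputs,
    # so the post-layer never sees outputs from data the base NN trained on.
    post_X = np.zeros((len(y), n_classes))
    for train_idx, held_idx in KFold(n_folds, shuffle=True,
                                     random_state=0).split(X):
        base = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=200)
        base.fit(X[train_idx], y[train_idx])
        # Log-posteriors of the held-out fold.
        post_X[held_idx] = base.predict_log_proba(X[held_idx])
        base_models.append(base)
    # The post-layer: a small NN fitted on held-out log-posteriors only.
    post = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200)
    post.fit(post_X, y)
    return base_models, post


def predict(base_models, post, X):
    # Average the base models' log-posteriors, then apply the post-layer.
    avg = np.mean([m.predict_log_proba(X) for m in base_models], axis=0)
    return post.predict(avg)
```

Because the post-layer is trained exclusively on predictions for data each base network never saw, it learns to correct the base networks' overconfident outputs, which is the source of the added robustness the abstract describes.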
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
20205 - Automation and control systems
Result linkages
Project
<a href="/cs/project/GBP103%2F12%2FG084" target="_blank" >GBP103/12/G084: Centre for Large-Scale Multi-modal Data Interpretation</a>
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of publication
2017
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article name in the proceedings
Statistical Language and Speech Processing: 5th International Conference, SLSP 2017, Le Mans, France, October 23–25, 2017, Proceedings
ISBN
978-3-319-68455-0
ISSN
0302-9743
e-ISSN
not specified
Number of pages
11
Pages from-to
204-214
Publisher name
Springer
Place of publication
Cham
Event venue
Le Mans, France
Event date
October 23, 2017
Event type by nationality
WRD - Worldwide event
UT WoS article code
—