On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
Result identifiers
Result code in IS VaVaI
RIV/67985807:_____/22:00562371 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985807%3A_____%2F22%3A00562371)
Alternative codes found
RIV/00216208:11320/22:10450901
Result on the web
https://dx.doi.org/10.1109/IJCNN55064.2022.9892838
DOI - Digital Object Identifier
10.1109/IJCNN55064.2022.9892510 (http://dx.doi.org/10.1109/IJCNN55064.2022.9892510)
Alternative languages
Result language
English
Title in the original language
On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
Description in the original language
Multilayer perceptrons (MLPs) continue to be commonly used for nonlinear regression modeling in numerous applications. Available robust approaches to training MLPs, which can yield reliable results even for data contaminated by outliers, have so far seen little use in real applications. Moreover, there is still a lack of systematic comparisons of the performance of robust MLPs when their training uses one of the regularization techniques available for standard MLPs to prevent overfitting. This paper compares the performance of MLPs trained with various combinations of robust loss functions and regularization types on small datasets. The experiments start with MLPs trained on individual datasets, which allow graphical visualization, and proceed to a study of 163251 MLPs trained on well-known benchmarks using various combinations of robustness and regularization types. Huber loss combined with L2-regularization turns out to outperform the other choices. This combination is recommended whenever the data do not contain a large proportion of outliers.
Title in English
On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
Description in English
Multilayer perceptrons (MLPs) continue to be commonly used for nonlinear regression modeling in numerous applications. Available robust approaches to training MLPs, which can yield reliable results even for data contaminated by outliers, have so far seen little use in real applications. Moreover, there is still a lack of systematic comparisons of the performance of robust MLPs when their training uses one of the regularization techniques available for standard MLPs to prevent overfitting. This paper compares the performance of MLPs trained with various combinations of robust loss functions and regularization types on small datasets. The experiments start with MLPs trained on individual datasets, which allow graphical visualization, and proceed to a study of 163251 MLPs trained on well-known benchmarks using various combinations of robustness and regularization types. Huber loss combined with L2-regularization turns out to outperform the other choices. This combination is recommended whenever the data do not contain a large proportion of outliers.
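To illustrate the combination the abstract recommends, the following is a minimal sketch (not the authors' code) of training an MLP regressor with Huber loss and an L2 penalty in PyTorch. The architecture, delta, learning rate, weight decay, and the synthetic outlier-contaminated dataset are all illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch, assuming PyTorch: an MLP regressor trained with Huber loss
# (robustness) combined with an L2 penalty on the weights (regularization).
# All hyperparameters and the synthetic data below are illustrative, not
# values from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small synthetic dataset with a few injected outliers (hypothetical example).
X = torch.linspace(-3, 3, 60).unsqueeze(1)
y = torch.sin(X) + 0.1 * torch.randn_like(X)
y[::15] += 3.0  # contaminate every 15th target with a gross outlier

mlp = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

# Huber loss is quadratic for residuals with |r| <= delta and linear beyond,
# which bounds the influence of outliers on the gradient.
loss_fn = nn.HuberLoss(delta=1.0)

# With plain SGD, weight_decay adds a term proportional to w to each gradient,
# i.e. it implements an L2 penalty on the network weights.
optimizer = torch.optim.SGD(mlp.parameters(), lr=0.05, weight_decay=1e-3)

for epoch in range(3000):
    optimizer.zero_grad()
    loss = loss_fn(mlp(X), y)
    loss.backward()
    optimizer.step()

print(f"final Huber loss: {loss.item():.4f}")
```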
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
GA22-02067S: AppNeCo: Approximate Neurocomputing
Linkages
I - Institutional support for the long-term conceptual development of a research organization
Others
Year of publication
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
2022 International Joint Conference on Neural Networks (IJCNN) Proceedings
ISBN
978-1-7281-8671-9
ISSN
—
e-ISSN
—
Number of pages
8
Pages from-to
—
Publisher name
IEEE
Place of publication
Piscataway
Event location
Padua
Event date
18 July 2022
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
000867070905022