Highly Robust Training of Regularized Radial Basis Function Networks
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985807%3A_____%2F24%3A00586040" target="_blank" >RIV/67985807:_____/24:00586040 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.14736/kyb-2024-1-0038" target="_blank" >https://doi.org/10.14736/kyb-2024-1-0038</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.14736/kyb-2024-1-0038" target="_blank" >10.14736/kyb-2024-1-0038</a>
Alternative languages
Result language
English
Title in original language
Highly Robust Training of Regularized Radial Basis Function Networks
Description in the original language
Radial basis function (RBF) networks are established tools for nonlinear regression modeling with numerous applications in various fields. Because their standard training is vulnerable to outliers in the data, several robust methods for RBF network training have recently been proposed. This paper focuses on robust regularized RBF networks. A robust inter-quantile version of RBF networks based on trimmed least squares is proposed here. A systematic comparison of robust regularized RBF networks follows, evaluated over a set of 405 networks trained using various combinations of robustness and regularization types. The experiments pay particular attention to the effect of variable selection, performed by means of a backward procedure, on the optimal number of RBF units. In the experiments, the regularized inter-quantile RBF networks based on trimmed least squares outperform the competing approaches when a highly robust prediction error measure is considered.
Title in English
Highly Robust Training of Regularized Radial Basis Function Networks
Description in English
Radial basis function (RBF) networks are established tools for nonlinear regression modeling with numerous applications in various fields. Because their standard training is vulnerable to outliers in the data, several robust methods for RBF network training have recently been proposed. This paper focuses on robust regularized RBF networks. A robust inter-quantile version of RBF networks based on trimmed least squares is proposed here. A systematic comparison of robust regularized RBF networks follows, evaluated over a set of 405 networks trained using various combinations of robustness and regularization types. The experiments pay particular attention to the effect of variable selection, performed by means of a backward procedure, on the optimal number of RBF units. In the experiments, the regularized inter-quantile RBF networks based on trimmed least squares outperform the competing approaches when a highly robust prediction error measure is considered.
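The record describes the method only at the level of the abstract. As a rough illustration of the trimmed-least-squares idea it mentions, the following minimal numpy sketch fits the output weights of a Gaussian RBF network with a ridge penalty while iteratively discarding the observations with the largest residuals. The function names, the Gaussian kernel, the ridge form of the regularization, and the concentration-step loop are all assumptions made for illustration; they are not the authors' implementation, which additionally covers the inter-quantile construction and backward variable selection.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2*sigma^2))."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def trimmed_ridge_rbf(X, y, centers, sigma=1.0, lam=1e-2, trim=0.75, n_iter=20):
    """Fit RBF output weights by regularized trimmed least squares:
    repeatedly refit a ridge solution on the h observations with the
    smallest squared residuals (a concentration-step heuristic)."""
    n = len(y)
    h = int(trim * n)                      # number of observations retained
    Phi = rbf_design(X, centers, sigma)
    k = Phi.shape[1]
    # initial (non-robust) ridge fit on all observations
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)
    for _ in range(n_iter):
        resid_sq = (y - Phi @ w) ** 2
        keep = np.argsort(resid_sq)[:h]    # trim the largest residuals
        P, t = Phi[keep], y[keep]
        w = np.linalg.solve(P.T @ P + lam * np.eye(k), P.T @ t)
    return w
```

For instance, with data `X`, responses `y`, and centers chosen as a subset of the training points, `w = trimmed_ridge_rbf(X, y, centers=X[:10])` would return output weights that downweight the influence of outlying observations.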
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/GA22-02067S" target="_blank" >GA22-02067S: AppNeCo: Approximate Neurocomputing</a>
Linkages
I - Institutional support for the long-term conceptual development of a research organization
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
Kybernetika
ISSN
0023-5954
e-ISSN
—
Periodical volume
60
Issue of the periodical within the volume
1
Country of the periodical's publisher
CZ - Czech Republic
Number of pages of the result
22
Pages from-to
38-59
UT WoS code of the article
001202833900001
EID of the result in the Scopus database
2-s2.0-85190739311