Homeostatic learning rule for artificial neural networks
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F18%3A00321781" target="_blank" >RIV/68407700:21730/18:00321781 - isvavai.cz</a>
Result on the web
<a href="http://dx.doi.org/10.14311/NNW.2018.28.011" target="_blank" >http://dx.doi.org/10.14311/NNW.2018.28.011</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.14311/NNW.2018.28.011" target="_blank" >10.14311/NNW.2018.28.011</a>
Alternative languages
Result language
English
Title in original language
Homeostatic learning rule for artificial neural networks
Result description in original language
This article presents an improvement of the learning algorithm for an artificial neural network that makes the learning process more similar to that of a biological neuron, while remaining simple enough to be easily programmed. The idea is based on autonomous artificial neurons that work together while competing for resources; every neuron tries to be better than the others, but also needs feedback from them. The proposed artificial neuron processes forward signals similarly to a standard perceptron; the main difference lies in the learning phase. Learning is based on observing the weights of other neurons, but only in a biologically plausible way: no backpropagation of error or 'teacher' is allowed. The neuron sends its signal forward into the higher layer, while information about its function propagates in the opposite direction. This information does not take the form of energy; it is the observation of how the neuron's output is accepted by the others. The neurons try to find a setting of their internal parameters that is optimal for the whole network. For this algorithm, the neurons must be organized in layers. Tests proved the viability of this concept: the learning process is slower, but it has other advantages, such as resistance to catastrophic interference and better generalization.
Title in English
Homeostatic learning rule for artificial neural networks
Result description in English
This article presents an improvement of the learning algorithm for an artificial neural network that makes the learning process more similar to that of a biological neuron, while remaining simple enough to be easily programmed. The idea is based on autonomous artificial neurons that work together while competing for resources; every neuron tries to be better than the others, but also needs feedback from them. The proposed artificial neuron processes forward signals similarly to a standard perceptron; the main difference lies in the learning phase. Learning is based on observing the weights of other neurons, but only in a biologically plausible way: no backpropagation of error or 'teacher' is allowed. The neuron sends its signal forward into the higher layer, while information about its function propagates in the opposite direction. This information does not take the form of energy; it is the observation of how the neuron's output is accepted by the others. The neurons try to find a setting of their internal parameters that is optimal for the whole network. For this algorithm, the neurons must be organized in layers. Tests proved the viability of this concept: the learning process is slower, but it has other advantages, such as resistance to catastrophic interference and better generalization.
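The description above outlines the general mechanism (layered perceptron-like neurons, no backpropagated error, each neuron judging itself by how downstream neurons weight its output) without giving the exact update rule. The following is only a minimal illustrative sketch of that idea, not the paper's actual algorithm: the `homeostatic_step` function, the sigmoid activation, and the "acceptance" measure (mean magnitude of a neuron's outgoing weights) are all assumptions introduced here for illustration.

```python
import math
import random

def forward(x, layer):
    """Perceptron-style forward pass for one layer: weighted sum + sigmoid."""
    return [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(ws, x))))
            for ws in layer]

def homeostatic_step(layer, next_layer, x, lr=0.05):
    """One hypothetical local update with no backpropagated error signal.

    Each neuron j observes how strongly the next layer weights its output
    (its 'acceptance') and strengthens its response to the current input
    when it is accepted above the layer average, weakens it when below --
    a competition for downstream resources."""
    y = forward(x, layer)
    # Acceptance of neuron j = mean magnitude of its outgoing weights.
    acceptance = [sum(abs(ws[j]) for ws in next_layer) / len(next_layer)
                  for j in range(len(layer))]
    baseline = sum(acceptance) / len(acceptance)
    for j, ws in enumerate(layer):
        # Hebbian-style move toward the input when accepted, away when ignored.
        drive = lr * (acceptance[j] - baseline) * y[j]
        for i, xi in enumerate(x):
            ws[i] += drive * xi
    return y

# Usage: two small random layers, one local update on one input vector.
random.seed(1)
layer = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
nxt = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
outputs = homeostatic_step(layer, nxt, [0.5, -0.2, 0.9])
```

As in the described rule, the only information flowing "backward" here is an observation of the downstream weights; no target value or error gradient is used.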
Classification
Type
J<sub>ost</sub> - Other articles in peer-reviewed periodicals
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
N - Research activity supported from non-public sources
Others
Year of implementation
2018
Data confidentiality code
S - Complete and truthful data about the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
Neural Network World
ISSN
1210-0552
e-ISSN
—
Volume of the periodical
28
Issue of the periodical within the volume
6
Country of the publisher of the periodical
CZ - Czech Republic
Number of pages of the result
11
Pages from-to
179-189
UT WoS code of the article
—
EID of the result in the Scopus database
—