Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985807%3A_____%2F22%3A00536423" target="_blank" >RIV/67985807:_____/22:00536423 - isvavai.cz</a>
Result on the web
<a href="http://dx.doi.org/10.1016/j.neucom.2021.12.107" target="_blank" >http://dx.doi.org/10.1016/j.neucom.2021.12.107</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1016/j.neucom.2021.12.107" target="_blank" >10.1016/j.neucom.2021.12.107</a>
Alternative languages
Result language
English
Title in the original language
Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
Result description in the original language
We analyze the computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function within the Chomsky hierarchy. This model restricted to integer weights coincides with binary-state NNs with the Heaviside activation function, which are equivalent to finite automata (Chomsky level 3) recognizing regular languages (REG), while rational weights make this model Turing-complete even for three analog-state units (Chomsky level 0). For the intermediate model αANN, a binary-state NN extended with α ≥ 0 extra analog-state neurons with rational weights, we have established the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs. The separation 1ANNs ⊊ 2ANNs has been witnessed by the non-regular deterministic context-free language (DCFL) L_# = {0^n 1^n | n ≥ 1}, which cannot be recognized by any 1ANN even with real weights, while any DCFL (Chomsky level 2) is accepted by a 2ANN with rational weights. In this paper, we strengthen this separation by showing that no non-regular DCFL can be recognized by a 1ANN with real weights, which means (DCFLs ∖ REG) ⊂ (2ANNs ∖ 1ANNs), implying 1ANNs ∩ DCFLs = 0ANNs. For this purpose, we have shown that L_# is the simplest non-regular DCFL by reducing L_# to any language in this class, which is by itself an interesting achievement in computability theory.
Title in English
Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
Result description in English
We analyze the computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function within the Chomsky hierarchy. This model restricted to integer weights coincides with binary-state NNs with the Heaviside activation function, which are equivalent to finite automata (Chomsky level 3) recognizing regular languages (REG), while rational weights make this model Turing-complete even for three analog-state units (Chomsky level 0). For the intermediate model αANN, a binary-state NN extended with α ≥ 0 extra analog-state neurons with rational weights, we have established the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs. The separation 1ANNs ⊊ 2ANNs has been witnessed by the non-regular deterministic context-free language (DCFL) L_# = {0^n 1^n | n ≥ 1}, which cannot be recognized by any 1ANN even with real weights, while any DCFL (Chomsky level 2) is accepted by a 2ANN with rational weights. In this paper, we strengthen this separation by showing that no non-regular DCFL can be recognized by a 1ANN with real weights, which means (DCFLs ∖ REG) ⊂ (2ANNs ∖ 1ANNs), implying 1ANNs ∩ DCFLs = 0ANNs. For this purpose, we have shown that L_# is the simplest non-regular DCFL by reducing L_# to any language in this class, which is by itself an interesting achievement in computability theory.
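For illustration only (not part of the IS VaVaI record): a minimal Python sketch of the saturated-linear activation σ(x) = max(0, min(1, x)) used by the analog-state model, together with a single-pass one-counter recognizer for the witness language L_# = {0^n 1^n | n ≥ 1}. The recognizer is an ordinary deterministic counter automaton, not the paper's 2ANN construction; the function names are hypothetical.

```python
def saturated_linear(x: float) -> float:
    """Saturated-linear activation: sigma(x) = max(0, min(1, x))."""
    return max(0.0, min(1.0, x))


def accepts_L_hash(word: str) -> bool:
    """Deterministically recognize L_# = {0^n 1^n | n >= 1} in one left-to-right pass."""
    count = 0          # counts unmatched leading 0s
    seen_one = False   # becomes True once the block of 1s starts
    for symbol in word:
        if symbol == "0":
            if seen_one:       # a 0 after the first 1 -> not of the form 0^n 1^n
                return False
            count += 1         # "push": record one more leading 0
        elif symbol == "1":
            seen_one = True
            count -= 1         # "pop": match this 1 against a stored 0
            if count < 0:      # more 1s than 0s -> reject
                return False
        else:
            return False       # symbol outside the alphabet {0, 1}
    return seen_one and count == 0


if __name__ == "__main__":
    assert saturated_linear(1.7) == 1.0 and saturated_linear(-0.3) == 0.0
    assert accepts_L_hash("000111") and not accepts_L_hash("0011100")
```

The single counter plays the role of a pushdown store over a one-letter stack alphabet, which is why L_# is deterministic context-free, while the unbounded count is exactly what no finite automaton (and, by the result above, no 1ANN) can maintain.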
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/cs/project/GA19-05704S" target="_blank" >GA19-05704S: FoNeCo: Analytical foundations of neurocomputing</a>
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of implementation
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
Neurocomputing
ISSN
0925-2312
e-ISSN
1872-8286
Volume of the periodical
493
Issue of the periodical within the volume
July 2022
Country of the periodical's publisher
NL - Netherlands
Number of pages of the result
8
Pages from-to
605-612
UT WoS code of the article
000800351800012
EID of the result in the Scopus database
2-s2.0-85124166634