
Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985807%3A_____%2F22%3A00536423" target="_blank" >RIV/67985807:_____/22:00536423 - isvavai.cz</a>

  • Result on the web

    <a href="http://dx.doi.org/10.1016/j.neucom.2021.12.107" target="_blank" >http://dx.doi.org/10.1016/j.neucom.2021.12.107</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1016/j.neucom.2021.12.107" target="_blank" >10.1016/j.neucom.2021.12.107</a>

Alternative languages

  • Result language

    English

  • Original language name

    Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages

  • Original language description

    We analyze the computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function within the Chomsky hierarchy. This model restricted to integer weights coincides with binary-state NNs with the Heaviside activation function, which are equivalent to finite automata (Chomsky level 3) recognizing regular languages (REG), while rational weights make this model Turing-complete even for three analog-state units (Chomsky level 0). For the intermediate model αANN of a binary-state NN that is extended with α ≥ 0 extra analog-state neurons with rational weights, we have established the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs. The separation 1ANNs ⊊ 2ANNs has been witnessed by the non-regular deterministic context-free language (DCFL) L_# = {0^n 1^n | n ≥ 1}, which cannot be recognized by any 1ANN even with real weights, while any DCFL (Chomsky level 2) is accepted by a 2ANN with rational weights. In this paper, we strengthen this separation by showing that any non-regular DCFL cannot be recognized by 1ANNs with real weights, which means (DCFLs − REG) ⊂ (2ANNs − 1ANNs), implying 1ANNs ∩ DCFLs = 0ANNs. For this purpose, we have shown that L_# is the simplest non-regular DCFL by reducing L_# to any language in this class, which is by itself an interesting achievement in computability theory.

  • Czech name

  • Czech description
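
The separating language in the abstract, L_# = {0^n 1^n | n ≥ 1}, can be recognized with a single counter, which is one way to see that it sits only just outside the regular languages within the DCFLs. Purely as an illustrative sketch (not part of this record, and unrelated to the paper's αANN constructions), a deterministic one-counter recognizer for L_# might look like this in Python:

```python
def accepts_L_hash(word: str) -> bool:
    """Single left-to-right pass with one counter: a deterministic
    one-counter recognizer for L_# = {0^n 1^n | n >= 1}."""
    counter = 0
    seen_one = False
    for symbol in word:
        if symbol == "0":
            if seen_one:          # a 0 after the first 1 -> reject
                return False
            counter += 1          # count each leading 0
        elif symbol == "1":
            seen_one = True
            counter -= 1          # cancel one 0 for each 1
            if counter < 0:       # more 1s than 0s -> reject
                return False
        else:
            return False          # alphabet is {0, 1}
    return seen_one and counter == 0   # n >= 1 and the counts match
```

For example, this recognizer accepts "01" and "000111" while rejecting "", "0", "10", and "0101"; a plain finite automaton cannot do this, since it would have to remember arbitrarily large counts of leading 0s.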

Classification

  • Type

    J_imp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/GA19-05704S" target="_blank" >GA19-05704S: FoNeCo: Analytical Foundations of Neurocomputing</a><br>

  • Continuities

    I - Institutional support for the long-term conceptual development of a research organization

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and truthful data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    Neurocomputing

  • ISSN

    0925-2312

  • e-ISSN

    1872-8286

  • Volume of the periodical

    493

  • Issue of the periodical within the volume

    July 2022

  • Country of publishing house

    NL - The Kingdom of the Netherlands

  • Number of pages

    8

  • Pages from-to

    605-612

  • UT code for WoS article

    000800351800012

  • EID of the result in the Scopus database

    2-s2.0-85124166634