
Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks

Result identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21240%2F21%3A00351376" target="_blank" >RIV/68407700:21240/21:00351376 - isvavai.cz</a>

  • Result on the web

    <a href="https://link.springer.com/chapter/10.1007/978-3-030-86340-1_19" target="_blank" >https://link.springer.com/chapter/10.1007/978-3-030-86340-1_19</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1007/978-3-030-86340-1_19" target="_blank" >10.1007/978-3-030-86340-1_19</a>

Alternative languages

  • Result language

    English

  • Original language name

    Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks

  • Original language description

    Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks where excessively large models are currently used. However, such models face several problems during the learning process, mainly due to redundancy among the individual neurons, which leads to sub-optimal accuracy or the need for additional training steps. Here, we explore the diversity of the neurons within the hidden layer during the learning process and analyze how neuron diversity affects the predictions of the model. Building on this, we introduce several techniques to dynamically reinforce diversity between neurons during training. These decorrelation techniques improve learning at early stages and occasionally help to overcome local minima faster. Additionally, we describe a novel weight initialization method that produces decorrelated, yet stochastic, weights for fast and efficient neural network training. In our experiments, decorrelated weight initialization yields about a 40% relative increase in test accuracy during the first 5 epochs.

    (An illustrative sketch of such a decorrelation penalty follows this list.)

  • Czech name

  • Czech description
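
The abstract above mentions two ingredients: a penalty that dynamically decorrelates hidden neurons during training, and a decorrelated yet stochastic weight initialization. The linked paper defines the exact formulations; the following PyTorch sketch is only a minimal, generic illustration of both ideas, not the authors' implementation. The network sizes, the penalty weight lam, the noise scale, and the helper names (TinyMLP, decorrelation_penalty, decorrelated_init) are all assumptions made for this example.

    import torch
    import torch.nn as nn

    class TinyMLP(nn.Module):
        """A small single-hidden-layer network (sizes are illustrative)."""
        def __init__(self, d_in=784, d_hidden=32, d_out=10):
            super().__init__()
            self.hidden = nn.Linear(d_in, d_hidden)
            self.out = nn.Linear(d_hidden, d_out)

        def forward(self, x):
            h = torch.tanh(self.hidden(x))
            return self.out(h), h  # expose hidden activations for the penalty

    def decorrelation_penalty(h, eps=1e-8):
        # Correlation matrix of hidden activations over the batch
        # (batch x units); the penalty is the sum of squared off-diagonal
        # entries, which pushes neurons toward decorrelated responses.
        h = h - h.mean(dim=0, keepdim=True)
        h = h / (h.norm(dim=0, keepdim=True) + eps)
        corr = h.t() @ h
        off_diag = corr - torch.diag(torch.diagonal(corr))
        return (off_diag ** 2).sum()

    def decorrelated_init(linear, noise=0.05):
        # Assumed stand-in for the paper's initializer: start from an
        # orthogonal (hence decorrelated) weight matrix, then add small
        # Gaussian noise so the initialization stays stochastic.
        nn.init.orthogonal_(linear.weight)
        with torch.no_grad():
            linear.weight.add_(noise * torch.randn_like(linear.weight))

    model = TinyMLP()
    decorrelated_init(model.hidden)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()
    lam = 1e-3  # assumed penalty weight, not taken from the paper

    x = torch.randn(64, 784)            # dummy batch for demonstration
    y = torch.randint(0, 10, (64,))
    logits, h = model(x)
    loss = criterion(logits, y) + lam * decorrelation_penalty(h)
    opt.zero_grad()
    loss.backward()
    opt.step()

Which diversity measure to penalize, and how strongly, is the substance of the paper; this sketch only shows where such a term would enter an ordinary training loop.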

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    The result was created during the realization of more than one project. More information is available in the Projects tab.

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2021

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Artificial Neural Networks and Machine Learning – ICANN 2021

  • ISBN

    978-3-030-86339-5

  • ISSN

  • e-ISSN

    1611-3349

  • Number of pages

    13

  • Pages from-to

    235-247

  • Publisher name

    Springer

  • Place of publication

    Cham

  • Event location

    Bratislava

  • Event date

    Sep 14, 2021

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    000711922300019