Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00358262" target="_blank" >RIV/68407700:21230/22:00358262 - isvavai.cz</a>

  • Result on the web

    <a href="https://doi.org/10.48550/arXiv.2111.02278" target="_blank" >https://doi.org/10.48550/arXiv.2111.02278</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.48550/arXiv.2111.02278" target="_blank" >10.48550/arXiv.2111.02278</a>

Alternative languages

  • Result language

    English

  • Original language name

    Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks

  • Original language description

    Understanding the properties of neural networks trained via stochastic gradient descent (SGD) is at the heart of the theory of deep learning. In this work, we take a mean-field view and consider a two-layer ReLU network trained via noisy SGD for a univariate regularized regression problem. Our main result is that SGD with vanishingly small noise injected in the gradients is biased towards a simple solution: at convergence, the ReLU network implements a piecewise linear map of the inputs, and the number of knot points (i.e., points where the tangent of the ReLU network estimator changes) between two consecutive training inputs is at most three. In particular, as the number of neurons of the network grows, the SGD dynamics is captured by the solution of a gradient flow and, at convergence, the distribution of the weights approaches the unique minimizer of a related free energy, which has a Gibbs form. Our key technical contribution consists in the analysis of the estimator resulting from this minimizer: we show that its second derivative vanishes everywhere, except at some specific locations which represent the knot points. We also provide empirical evidence that knots at locations distinct from the data points might occur, as predicted by our theory. (An illustrative sketch of this setting appears after this list.)

  • Czech name

  • Czech description
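
Illustrative code sketch

The abstract describes noisy SGD on a two-layer ReLU network for univariate regularized regression, and a learned estimator that is (approximately) piecewise linear with few knots between consecutive training inputs. The following is a minimal, hypothetical NumPy sketch of that setting, not the authors' code: the Langevin-style noise injection, the 1/m output scaling, the synthetic data, and all hyperparameters (width m, step size, L2 penalty, noise level, knot tolerance) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic univariate regression data (arbitrary choice for illustration).
    x = np.linspace(-1.0, 1.0, 8)
    y = np.sin(np.pi * x)

    # Two-layer ReLU network f(t) = sum_j a_j * relu(w_j * t + b_j),
    # with the 1/m output scaling typical of mean-field analyses.
    m = 500
    w = rng.normal(size=m)
    b = rng.normal(size=m)
    a = rng.normal(size=m) / m

    def predict(ts):
        pre = np.outer(ts, w) + b        # (n, m) pre-activations
        return np.maximum(pre, 0.0) @ a  # ReLU features times output weights

    lr, lam, tau = 0.05, 1e-4, 1e-6      # step size, L2 penalty, noise level
    n = len(x)
    for _ in range(5000):
        pre = np.outer(x, w) + b
        act = np.maximum(pre, 0.0)
        err = act @ a - y                # residuals on the training set
        mask = (pre > 0.0).astype(float)
        # Gradients of the L2-regularized mean squared error.
        ga = act.T @ err / n + lam * a
        gw = (mask.T @ (err * x)) * a / n + lam * w
        gb = (mask.T @ err) * a / n + lam * b
        # Langevin-style "noisy SGD": gradient step plus small Gaussian noise.
        a -= lr * ga + np.sqrt(2.0 * lr * tau) * rng.normal(size=m)
        w -= lr * gw + np.sqrt(2.0 * lr * tau) * rng.normal(size=m)
        b -= lr * gb + np.sqrt(2.0 * lr * tau) * rng.normal(size=m)

    # Count slope changes ("knots") of the learned estimator between
    # consecutive training inputs, via finite differences on a dense grid.
    grid = np.linspace(-1.0, 1.0, 4001)
    slope = np.diff(predict(grid)) / np.diff(grid)
    knots = grid[1:-1][np.abs(np.diff(slope)) > 1e-2]  # tolerance is arbitrary
    for lo, hi in zip(x[:-1], x[1:]):
        k = int(np.sum((knots > lo) & (knots < hi)))
        print(f"knots in ({lo:+.2f}, {hi:+.2f}): {k}")

At finite width the piecewise-linear structure is only approximate, so the detected knot counts depend on the tolerance; the bound of at most three knots per interval stated in the abstract concerns the mean-field (infinite-width) limit.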

Classification

  • Type

    J_SC - Article in a specialist periodical, which is included in the SCOPUS database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

    I - Institutional support for the long-term conceptual development of a research organization

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    Journal of Machine Learning Research

  • ISSN

    1532-4435

  • e-ISSN

  • Volume of the periodical

    23

  • Issue of the periodical within the volume

    130

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    55

  • Pages from-to

    1-55

  • UT code for WoS article

  • EID of the result in the Scopus database

    2-s2.0-85130359653