
Decentralized Bayesian Learning with Metropolis-adjusted Hamiltonian Monte Carlo

Result identifiers

  • Result code in IS VaVaI

    RIV/68407700:21230/23:00373589 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F23%3A00373589)

  • Result on the web

    https://doi.org/10.1007/s10994-023-06345-6

  • DOI - Digital Object Identifier

    10.1007/s10994-023-06345-6 (http://dx.doi.org/10.1007/s10994-023-06345-6)

Alternative languages

  • Result language

    English

  • Original language name

    Decentralized Bayesian Learning with Metropolis-adjusted Hamiltonian Monte Carlo

  • Original language description

    Federated learning performed by decentralized networks of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices. Bayesian approaches to learning benefit from offering more information about the uncertainty of a random quantity, and Langevin and Hamiltonian methods are effective at realizing sampling from an uncertain distribution with large parameter dimensions. Such methods have only recently appeared in the decentralized setting, and existing approaches either exclusively use stochastic gradient Langevin and Hamiltonian Monte Carlo, which require a diminishing step size to asymptotically sample from the posterior and are known in practice to characterize uncertainty less faithfully than constant-step-size methods with a Metropolis adjustment, or they assume strong convexity of the potential function. We present the first approach to incorporating constant-step-size Metropolis-adjusted HMC into the decentralized sampling framework, show theoretical guarantees for consensus and probability distance to the posterior stationary distribution, and demonstrate its effectiveness numerically on standard real-world problems, including decentralized learning of neural networks, which is known to be highly non-convex. (A minimal single-chain sketch of a Metropolis-adjusted HMC step is given after this list.)

  • Czech name

  • Czech description
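
As a rough illustration of the sampler family the description above refers to, here is a minimal, self-contained sketch of one constant-step-size HMC step with a Metropolis accept/reject correction. This is not the paper's decentralized algorithm; the Gaussian target `log_post` and the step-size and trajectory-length values are illustrative assumptions.

```python
import numpy as np

def log_post(theta):
    # Hypothetical target: log-density of a standard Gaussian (illustration only).
    return -0.5 * np.sum(theta ** 2)

def grad_log_post(theta):
    # Gradient of the Gaussian log-density above.
    return -theta

def hmc_step(theta, step_size=0.1, n_leapfrog=20, rng=None):
    # One constant-step-size HMC proposal followed by a Metropolis adjustment,
    # so the chain targets exp(log_post) exactly despite discretization error.
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal(theta.shape)  # resample auxiliary momentum
    theta_new, p_new = theta.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_post(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new
        p_new += step_size * grad_log_post(theta_new)
    theta_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_post(theta_new)

    # Metropolis adjustment: accept with probability min(1, exp(H_old - H_new)).
    h_old = -log_post(theta) + 0.5 * np.sum(p ** 2)
    h_new = -log_post(theta_new) + 0.5 * np.sum(p_new ** 2)
    return theta_new if rng.random() < np.exp(h_old - h_new) else theta

if __name__ == "__main__":
    theta = np.zeros(10)
    samples = []
    for _ in range(1000):
        theta = hmc_step(theta)
        samples.append(theta)
    print("posterior mean estimate:", np.mean(samples, axis=0))
```

In the decentralized setting the abstract describes, each agent would run a step of this kind against its local potential and combine it with a consensus step over the network; that coordination is the paper's contribution and is omitted entirely from this sketch.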

Classification

  • Type

    Jimp - Article in a specialist periodical included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    EF16_019/0000765: Research Center for Informatics

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    Machine Learning

  • ISSN

    0885-6125

  • e-ISSN

    1573-0565

  • Volume of the periodical

    112

  • Issue of the periodical within the volume

    8

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    29

  • Pages from-to

    2791-2819

  • UT code for WoS article

    001015523900002

  • EID of the result in the Scopus database

    2-s2.0-85162198728