Proxy Functions for Approximate Reinforcement Learning

Result identifiers

  • Result code in IS VaVaI

    RIV/68407700:21230/19:00337291 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F19%3A00337291)

  • Alternative codes found

    RIV/68407700:21730/19:00337291

  • Result on the web

    https://www.sciencedirect.com/science/article/pii/S240589631930775X

  • DOI - Digital Object Identifier

    10.1016/j.ifacol.2019.09.145 (http://dx.doi.org/10.1016/j.ifacol.2019.09.145)

Alternative languages

  • Result language

    English

  • Original language name

    Proxy Functions for Approximate Reinforcement Learning

  • Original language description

    Approximate Reinforcement Learning (RL) is a method for solving sequential decision-making and dynamic control problems in an optimal way. This paper addresses RL for continuous state spaces, where the control policy is derived by using an approximate value function (V-function). The standard approach to deriving a policy through the V-function is analogous to hill climbing: at each state, the RL agent chooses the control input that maximizes the right-hand side of the Bellman equation. Although theoretically optimal, the actual control performance of this method is heavily influenced by the local smoothness of the V-function; a lack of smoothness results in undesired closed-loop behavior with input chattering or limit cycles. To circumvent these problems, this paper provides a method based on Symbolic Regression to generate a locally smooth proxy to the V-function. The proposed method has been evaluated on two nonlinear control benchmarks: pendulum swing-up and magnetic manipulation. The new method has been compared with the standard policy-derivation technique using the approximate V-function, and the results show that the proposed approach outperforms the standard one with respect to the cumulative return. (A minimal illustrative sketch of the greedy policy-derivation step follows after this list.)

  • Czech name

  • Czech description
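
The policy-derivation step described in the abstract can be summarized in a short, hedged Python sketch: at each state the agent evaluates the right-hand side of the Bellman equation, r(x, u) + gamma * V(f(x, u)), over a set of candidate inputs and applies the maximizer. Everything below is illustrative only; the dynamics f, reward r, approximate value function v_hat, discount factor, and the discretized action set are placeholder assumptions, not the authors' implementation.

    import numpy as np

    GAMMA = 0.99                           # discount factor (assumed)
    ACTIONS = np.linspace(-2.0, 2.0, 21)   # discretized control inputs (assumed)

    def f(x, u):
        """Placeholder system dynamics: next state from state x and input u."""
        return x + 0.05 * np.array([x[1], u - 0.1 * x[1] - np.sin(x[0])])

    def r(x, u):
        """Placeholder stage reward penalizing distance from the origin."""
        return -(x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u ** 2)

    def v_hat(x):
        """Placeholder approximate V-function; in the paper this is the
        (possibly non-smooth) learned approximation, or its locally smooth
        symbolic-regression proxy."""
        return -(x[0] ** 2 + x[1] ** 2)

    def greedy_policy(x):
        """Choose the input maximizing the right-hand side of the Bellman
        equation: r(x, u) + GAMMA * V(f(x, u))."""
        returns = [r(x, u) + GAMMA * v_hat(f(x, u)) for u in ACTIONS]
        return ACTIONS[int(np.argmax(returns))]

    x = np.array([np.pi, 0.0])             # e.g. a pendulum hanging down
    print(greedy_policy(x))

In the paper's terms, swapping v_hat for the locally smooth symbolic-regression proxy leaves greedy_policy unchanged; only the value function being maximized over is replaced.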

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    20204 - Robotics and automatic control

Result continuities

  • Project

    The result was created during the realization of more than one project. More information is available in the Projects tab.

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2019

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    IFAC-PapersOnLine - 5th IFAC Conference on Intelligent Control and Automation Sciences ICONS 2019

  • ISBN

  • ISSN

    2405-8963

  • e-ISSN

    2405-8963

  • Number of pages

    6

  • Pages from-to

    224-229

  • Publisher name

    Elsevier

  • Place of publication

    Lausanne

  • Event location

    Belfast

  • Event date

    Aug 21, 2019

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    000493064700039