
Optimal Control via Reinforcement Learning with Symbolic Policy Approximation

Result identifiers

  • Result code in IS VaVaI

    RIV/68407700:21730/17:00316260 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F17%3A00316260)

  • Result on the web

    https://www.sciencedirect.com/science/article/pii/S2405896317312594

  • DOI - Digital Object Identifier

    10.1016/j.ifacol.2017.08.805 (http://dx.doi.org/10.1016/j.ifacol.2017.08.805)

Alternative languages

  • Result language

    English

  • Original language name

    Optimal Control via Reinforcement Learning with Symbolic Policy Approximation

  • Original language description

    Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper addresses the problem of finding a smooth policy based on a value function represented by means of a basis-function approximator. We first show that policies derived directly from the value function, or represented explicitly by the same type of approximator, lead to inferior control performance, manifested by non-smooth control signals and steady-state errors. We then propose a novel method to construct a smooth policy represented by an analytic equation, obtained by means of symbolic regression. The proposed method is illustrated on a reference-tracking problem for a 1-DOF robot arm operating under the influence of gravity. The results show that the analytic control law performs at least as well as the original numerically approximated policy, while producing much smoother control signals. In addition, the analytic function is readable (as opposed to black-box approximators) and can be used in further analysis and synthesis of the closed loop. (See the illustrative sketch after this list.)

  • Czech name

  • Czech description
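
A minimal Python sketch of the pipeline the abstract above describes: obtain a numeric policy, then distill it into a readable analytic expression. Everything below is an illustrative assumption, including the toy 1-DOF state, the stubbed numeric policy, and the least-squares term selection, which is only a simplified stand-in for the symbolic regression used in the paper.

    # Hypothetical sketch only: distill a numeric policy into an analytic one.
    import numpy as np

    # Toy 1-DOF arm state (joint angle theta). The "numeric policy" is stubbed
    # here as a noisy nonlinear mapping; in the paper it comes from maximizing
    # a basis-function approximation of the value function.
    rng = np.random.default_rng(0)
    theta = rng.uniform(-np.pi, np.pi, 200)
    u_numeric = -2.0 * theta + 0.5 * np.sin(theta) + rng.normal(0.0, 0.05, theta.shape)

    # Stand-in for symbolic regression: least-squares fit over a small library
    # of candidate symbolic terms (the paper uses genuine symbolic regression,
    # which searches over expression structures rather than fixed terms).
    library = {"theta": theta, "sin(theta)": np.sin(theta), "theta**3": theta ** 3}
    A = np.column_stack(list(library.values()))
    coef, *_ = np.linalg.lstsq(A, u_numeric, rcond=None)

    analytic = " + ".join(f"{c:.3f}*{name}" for c, name in zip(coef, library))
    print("analytic policy: u(theta) =", analytic)

The resulting expression is smooth and human-readable by construction, which is the advantage the paper highlights over black-box approximators.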

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    20205 - Automation and control systems

Result continuities

  • Project

    GA15-22731S: Symbolic Regression for Reinforcement Learning in Continuous Spaces (/en/project/GA15-22731S)

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2017

  • Confidentiality

    S - Complete and truthful data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the IFAC 2017 World Congress

  • ISBN

  • ISSN

    2405-8963

  • e-ISSN

    2405-8963

  • Number of pages

    6

  • Pages from-to

    4162-4167

  • Publisher name

    Elsevier

  • Place of publication

    Kidlington Oxford OX GB

  • Event location

    Toulouse

  • Event date

    Jul 9, 2017

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article