Proxy Functions for Approximate Reinforcement Learning
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F19%3A00337291" target="_blank" >RIV/68407700:21230/19:00337291 - isvavai.cz</a>
Alternative codes found
RIV/68407700:21730/19:00337291
Result on the web
<a href="https://www.sciencedirect.com/science/article/pii/S240589631930775X" target="_blank" >https://www.sciencedirect.com/science/article/pii/S240589631930775X</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1016/j.ifacol.2019.09.145" target="_blank" >10.1016/j.ifacol.2019.09.145</a>
Alternative languages
Result language
English
Title in the original language
Proxy Functions for Approximate Reinforcement Learning
Description in the original language
Approximate Reinforcement Learning (RL) is a method for solving sequential decision-making and dynamic control problems in an optimal way. This paper addresses RL for continuous state spaces, where the control policy is derived from an approximate value function (V-function). The standard approach to deriving a policy through the V-function is analogous to hill climbing: at each state, the RL agent chooses the control input that maximizes the right-hand side of the Bellman equation. Although theoretically optimal, the actual control performance of this method is heavily influenced by the local smoothness of the V-function; a lack of smoothness results in undesired closed-loop behavior such as input chattering or limit cycles. To circumvent these problems, this paper provides a method based on Symbolic Regression to generate a locally smooth proxy to the V-function. The proposed method has been evaluated on two nonlinear control benchmarks: pendulum swing-up and magnetic manipulation. The new method has been compared with the standard policy-derivation technique using the approximate V-function, and the results show that the proposed approach outperforms the standard one with respect to the cumulative return.
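For illustration only (a minimal Python sketch, not code from the paper): the standard policy-derivation step described above amounts to a one-step maximization of the Bellman right-hand side over a discretized set of candidate inputs. The names v_hat, dynamics, reward, and action_grid are hypothetical placeholders for the learned V-function approximator, the system model, the stage reward, and the candidate-input grid.

import numpy as np

def greedy_action(x, v_hat, dynamics, reward, action_grid, gamma=0.99):
    """Pick the control input maximizing r(x, u) + gamma * V(f(x, u))."""
    # v_hat, dynamics, and reward are assumed callables (hypothetical placeholders).
    best_u, best_q = None, -np.inf
    for u in action_grid:
        x_next = dynamics(x, u)                    # one-step model prediction
        q = reward(x, u) + gamma * v_hat(x_next)   # Bellman right-hand side
        if q > best_q:
            best_q, best_u = q, u
    return best_u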
Title in English
Proxy Functions for Approximate Reinforcement Learning
Description in English
Approximate Reinforcement Learning (RL) is a method for solving sequential decision-making and dynamic control problems in an optimal way. This paper addresses RL for continuous state spaces, where the control policy is derived from an approximate value function (V-function). The standard approach to deriving a policy through the V-function is analogous to hill climbing: at each state, the RL agent chooses the control input that maximizes the right-hand side of the Bellman equation. Although theoretically optimal, the actual control performance of this method is heavily influenced by the local smoothness of the V-function; a lack of smoothness results in undesired closed-loop behavior such as input chattering or limit cycles. To circumvent these problems, this paper provides a method based on Symbolic Regression to generate a locally smooth proxy to the V-function. The proposed method has been evaluated on two nonlinear control benchmarks: pendulum swing-up and magnetic manipulation. The new method has been compared with the standard policy-derivation technique using the approximate V-function, and the results show that the proposed approach outperforms the standard one with respect to the cumulative return.
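As a rough illustration of the proxy idea only: the paper builds the proxy with Symbolic Regression; a low-order polynomial fit is used below purely as a stand-in to show the role of a smooth analytic surrogate for V-function samples. The arrays states and v_samples are hypothetical training data (state points and their approximate values).

import numpy as np

def fit_smooth_proxy(states, v_samples, degree=3):
    """Fit a smooth 1-D polynomial surrogate to sampled V-function values."""
    # Stand-in for the paper's Symbolic Regression step (hypothetical data).
    coeffs = np.polyfit(states, v_samples, deg=degree)  # least-squares fit
    return np.poly1d(coeffs)  # callable proxy: proxy(x) approximates V(x)

The policy would then be derived by maximizing r(x, u) + gamma * proxy(f(x, u)) instead of using the raw, possibly non-smooth V-function approximation.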
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
20204 - Robotics and automatic control
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of application
2019
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article name in the proceedings
IFAC-PapersOnLine - 5th IFAC Conference on Intelligent Control and Automation Sciences ICONS 2019
ISBN
—
ISSN
2405-8963
e-ISSN
2405-8963
Number of pages of the result
6
Pages from-to
224-229
Publisher name
Elsevier
Place of publication
Lausanne
Event location
Belfast
Event date
21 August 2019
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
000493064700039