Using Q-Learning with LWR in continuous space
The result's identifiers
Result code in IS VaVaI
RIV/00216305:26210/03:PU38794 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26210%2F03%3APU38794)
Alternative codes found
RIV/61388998:_____/03:51030193
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Using Q-Learning with LWR in continuous space
Original language description
Locally Weighted Regression (LWR) is a class of approximation methods based on local models. In this paper we demonstrate the use of LWR together with Q-learning for control tasks. Q-learning is one of the most popular and effective algorithms in the reinforcement learning family; it works with rewards and penalties. The most common representation of the Q-function is a table, but the table must be replaced by a suitable approximator if continuous states are to be used, and LWR is one possible approximator. To get a first impression of applying LWR together with modified Q-learning to a control task, a simple model of an inverted pendulum was created and the proposed method was applied to it.
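The description outlines the method without implementation detail, so the following is a minimal, illustrative sketch of the idea in Python: Q-learning over a small discrete action set, with the Q-function for each action approximated by locally weighted (kernel-weighted ridge) linear regression over stored samples, tried on a toy inverted pendulum. All names (LWRQ, pendulum_step), the dynamics, and the hyperparameters (GAMMA, ALPHA, BANDWIDTH, the torque levels) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

ACTIONS = np.array([-10.0, 0.0, 10.0])  # assumed discrete torque levels
GAMMA, ALPHA, BANDWIDTH = 0.95, 0.5, 0.3  # assumed hyperparameters


class LWRQ:
    """Q(s, a) approximated by locally weighted linear regression
    over stored samples, with one sample memory per discrete action."""

    def __init__(self, n_actions, dim):
        self.X = [np.empty((0, dim)) for _ in range(n_actions)]
        self.y = [np.empty(0) for _ in range(n_actions)]

    def predict(self, s, a):
        X, y = self.X[a], self.y[a]
        if len(y) == 0:
            return 0.0  # default before any data has been stored
        # Gaussian kernel weights centred on the query state
        w = np.exp(-np.sum((X - s) ** 2, axis=1) / (2 * BANDWIDTH**2))
        Xb = np.hstack([X, np.ones((len(y), 1))])  # add a bias column
        A = (Xb.T * w) @ Xb + 1e-6 * np.eye(Xb.shape[1])  # ridge term
        b = (Xb.T * w) @ y
        beta = np.linalg.solve(A, b)  # weighted least-squares fit
        return float(np.append(s, 1.0) @ beta)

    def update(self, s, a, target):
        # LWR is memory-based: learning means storing a new sample
        self.X[a] = np.vstack([self.X[a], s])
        self.y[a] = np.append(self.y[a], target)


def pendulum_step(s, torque, dt=0.05):
    """Assumed unit-mass, unit-length inverted-pendulum dynamics."""
    theta, omega = s
    omega = omega + (9.81 * np.sin(theta) + torque) * dt
    theta = theta + omega * dt
    return np.array([theta, omega])


q = LWRQ(len(ACTIONS), dim=2)
rng = np.random.default_rng(0)
s = np.array([0.1, 0.0])  # start slightly off the upright position
for step in range(200):
    # epsilon-greedy choice over the discrete torque set
    if rng.random() < 0.1:
        a = int(rng.integers(len(ACTIONS)))
    else:
        a = int(np.argmax([q.predict(s, i) for i in range(len(ACTIONS))]))
    s2 = pendulum_step(s, ACTIONS[a])
    r = -abs(s2[0])  # penalty grows with deviation from upright
    target = r + GAMMA * max(q.predict(s2, i) for i in range(len(ACTIONS)))
    # blend the old estimate toward the TD target, then store the sample
    q.update(s, a, (1 - ALPHA) * q.predict(s, a) + ALPHA * target)
    s = s2
```

Note the memory-based flavour of LWR: an "update" merely stores a new (state, target) sample, and the regression is re-fit locally at every query. This matches the local-model character described in the abstract, at the cost of prediction time growing linearly with the number of stored samples.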
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
BC - Theory and management systems
OECD FORD branch
—
Result continuities
Project
—
Continuities
Z - Research plan (with a link to CEZ)
Others
Publication year
2003
Confidentiality
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of the 6th International Symposium on Mechatronics
ISBN
80-88914-92-2
ISSN
—
e-ISSN
—
Number of pages
4
Pages from-to
58-61
Publisher name
Alexander Dubček University of Trenčín, Faculty of Mechatronics
Place of publication
Trenčín
Event location
Trenčianske Teplice, SLOVAKIA
Event date
Jun 18, 2003
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—