A theoretical demonstration for reinforcement learning of PI control dynamics for optimal speed control of DC motors by using Twin Delay Deep Deterministic Policy Gradient Algorithm

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26220%2F23%3APU145934" target="_blank" >RIV/00216305:26220/23:PU145934 - isvavai.cz</a>

  • Result on the web

    <a href="https://www.sciencedirect.com/science/article/pii/S0957417422022102" target="_blank" >https://www.sciencedirect.com/science/article/pii/S0957417422022102</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1016/j.eswa.2022.119192" target="_blank" >10.1016/j.eswa.2022.119192</a>

Alternative languages

  • Result language

    English

  • Original language name

    A theoretical demonstration for reinforcement learning of PI control dynamics for optimal speed control of DC motors by using Twin Delay Deep Deterministic Policy Gradient Algorithm

  • Original language description

    To benefit from the advantages of Reinforcement Learning (RL) in industrial control applications, RL methods can be used for optimal tuning of the classical controllers based on the simulation scenarios of operating conditions. In this study, the Twin Delay Deep Deterministic (TD3) policy gradient method, which is an effective actor-critic RL strategy, is implemented to learn optimal Proportional Integral (PI) controller dynamics from a Direct Current (DC) motor speed control simulation environment. For this purpose, the PI controller dynamics are introduced to the actor-network by using the PI-based observer states from the control simulation environment. A suitable Simulink simulation environment is adapted to perform the training process of the TD3 algorithm. The actor-network learns the optimal PI controller dynamics by using the reward mechanism that implements the minimization of the optimal control objective function. A setpoint filter is used to describe the desired setpoint response, and step disturbance signals with random amplitude are incorporated in the simulation environment to improve disturbance rejection control skills with the help of experience-based learning in the designed control simulation environment. When the training task is completed, the optimal PI controller coefficients are obtained from the weight coefficients of the actor-network. The performances of the optimal PI dynamics learned by using the TD3 algorithm and the Deep Deterministic Policy Gradient algorithm are compared. Moreover, control performance improvement of this RL-based PI controller tuning method (RL-PI) is demonstrated relative to the performances of both integer and fractional order PI controllers that were tuned by using several popular metaheuristic optimization algorithms such as Genetic Algorithm, Particle Swarm Optimization, Grey Wolf Optimization and Differential Evolution.

    (An illustrative code sketch of the actor-weights-as-PI-gains idea follows this section.)

  • Czech name

  • Czech description
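
The abstract above describes introducing the PI observer states (the tracking error and its integral) to the actor-network, so that after training the optimal PI coefficients can be read directly from the actor's weights. The Python sketch below is not the authors' code: it illustrates that structural idea on an assumed first-order DC-motor model with a linear actor whose two weights play the role of Kp and Ki. The motor parameters, cost weighting, and the stand-in random search (used here instead of TD3 training in Simulink) are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's setup): a DC-motor speed loop in which
# a linear "actor" acts on PI observer states, so its two weights are the PI gains.
import numpy as np

# Assumed first-order DC-motor speed model: dw/dt = (-w + K*u) / tau
K, tau, dt = 2.0, 0.5, 0.01          # gain, time constant [s], sample time [s]

def step_motor(w, u):
    """One Euler step of the assumed motor model."""
    return w + dt * (-w + K * u) / tau

def run_episode(actor_weights, setpoint=1.0, steps=1000):
    """Roll out the closed loop; actor_weights = [Kp, Ki] act on (e, integral of e)."""
    Kp, Ki = actor_weights
    w, e_int, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - w                    # PI observer state 1: tracking error
        e_int += e * dt                     # PI observer state 2: error integral
        u = Kp * e + Ki * e_int             # linear actor == PI control law
        w = step_motor(w, u)
        cost += (e**2 + 0.01 * u**2) * dt   # quadratic objective (assumed weighting)
    return -cost                            # reward = negative control cost

# The paper learns the actor weights with TD3; a crude random search stands in here,
# purely to show that the trained actor weights are directly the PI coefficients.
best_gains, best_reward = None, -np.inf
rng = np.random.default_rng(0)
for _ in range(200):
    candidate = rng.uniform(0.0, 10.0, size=2)
    reward = run_episode(candidate)
    if reward > best_reward:
        best_gains, best_reward = candidate, reward
print(f"Kp = {best_gains[0]:.2f}, Ki = {best_gains[1]:.2f}, reward = {best_reward:.3f}")
```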

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    20204 - Robotics and automatic control

Result continuities

  • Project

  • Continuities

    S - Specific university research

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    EXPERT SYSTEMS WITH APPLICATIONS

  • ISSN

    0957-4174

  • e-ISSN

    1873-6793

  • Volume of the periodical

    213, Part C

  • Issue of the periodical within the volume

    March 2023

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    16

  • Pages from-to

    1-16

  • UT code for WoS article

    000890664400010

  • EID of the result in the Scopus database

    2-s2.0-85141914275