Normative rule extraction from implicit learning into explicit representation
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F62690094%3A18450%2F20%3A50017203" target="_blank" >RIV/62690094:18450/20:50017203 - isvavai.cz</a>
Result on the web
<a href="http://dx.doi.org/10.3233/FAIA200555" target="_blank" >http://dx.doi.org/10.3233/FAIA200555</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.3233/FAIA200555" target="_blank" >10.3233/FAIA200555</a>
Alternative languages
Result language
English
Title in the original language
Normative rule extraction from implicit learning into explicit representation
Result description in the original language
Normative multi-agent research offers an alternative viewpoint on the design of adaptive autonomous agent architectures. Norms specify standards of behavior, such as which actions or states should be achieved or avoided. Norm synthesis is the process of generating useful normative rules. This study proposes a model for extracting normative rules from implicit learning, namely the Q-learning algorithm, into an explicit norm representation, implementing Dynamic Deontics and a Hierarchical Knowledge Base (HKB) to synthesize useful normative rules in the form of weighted state-action pairs with deontic modality. OpenAI Gym is used to simulate the agent environment. Our proposed model is able to generate both obligative and prohibitive norms as well as deliberate on and execute them. Results show that the generated norms are best used as prior knowledge to guide agent behavior and perform poorly if not complemented by another agent coordination mechanism. Performance increases when both obligation and prohibition norms are used, and in general, norms speed up reaching the optimal policy. © 2020 The authors and IOS Press. All rights reserved.
Title in English
Normative rule extraction from implicit learning into explicit representation
Result description in English
Normative multi-agent research offers an alternative viewpoint on the design of adaptive autonomous agent architectures. Norms specify standards of behavior, such as which actions or states should be achieved or avoided. Norm synthesis is the process of generating useful normative rules. This study proposes a model for extracting normative rules from implicit learning, namely the Q-learning algorithm, into an explicit norm representation, implementing Dynamic Deontics and a Hierarchical Knowledge Base (HKB) to synthesize useful normative rules in the form of weighted state-action pairs with deontic modality. OpenAI Gym is used to simulate the agent environment. Our proposed model is able to generate both obligative and prohibitive norms as well as deliberate on and execute them. Results show that the generated norms are best used as prior knowledge to guide agent behavior and perform poorly if not complemented by another agent coordination mechanism. Performance increases when both obligation and prohibition norms are used, and in general, norms speed up reaching the optimal policy. © 2020 The authors and IOS Press. All rights reserved.
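The extraction pipeline described above can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: a tabular Q-learner is trained on a tiny chain environment (standing in for the OpenAI Gym tasks used in the paper), and obligation/prohibition norms are then read off the learned Q-table as state-action pairs whose values clearly dominate or clearly trail the best action in that state. The environment, thresholds, and all names are assumptions.

```python
import random

# Toy 5-state chain: action 1 moves right toward the goal (state 4),
# action 0 moves left; reaching the goal yields reward 1 and ends the episode.
# This stands in for an OpenAI Gym environment (hypothetical substitute).
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4

def step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Implicit learning phase: standard epsilon-greedy tabular Q-learning."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[state][a])
            nxt, reward, done = step(state, action)
            # Standard Q-learning temporal-difference update.
            Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
            state = nxt
    return Q

def extract_norms(Q, margin=0.05):
    """Explicit representation phase: read weighted state-action norms
    off the Q-table. Obligation: the best action in a state, when it
    clearly beats the worst alternative; prohibition: any action clearly
    worse than the best. The margin is a hypothetical threshold, not a
    value from the paper."""
    obligations, prohibitions = [], []
    for s in range(N_STATES):
        best = max(Q[s])
        for a in ACTIONS:
            if Q[s][a] == best and best - min(Q[s]) > margin:
                obligations.append((s, a))
            elif best - Q[s][a] > margin:
                prohibitions.append((s, a))
    return obligations, prohibitions

obl, pro = extract_norms(q_learn())
```

In this sketch the norms mirror the paper's framing as prior knowledge: for example, near the goal the extracted obligation is to move right and the prohibition is to move left, which a norm-aware agent could consult before falling back on its learned policy.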
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
I - Institutional support for the long-term conceptual development of a research organization
Others
Year of implementation
2020
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Title of the paper in the proceedings
Frontiers in Artificial Intelligence and Applications
ISBN
978-1-64368-114-6
ISSN
0922-6389
e-ISSN
—
Number of result pages
14
Pages from-to
88-101
Publisher name
IOS Press BV
Place of publication
Amsterdam
Event location
Japan
Event date
22. 10. 2020
Event type by nationality
WRD - Worldwide event
Article UT WoS code
—