Planning and Acting in Dynamic Environments: Identifying and Avoiding Dangerous Situations

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00363577" target="_blank" >RIV/68407700:21230/22:00363577 - isvavai.cz</a>

  • Alternative codes found

    RIV/00216208:11320/22:10437330

  • Result on the web

    <a href="https://doi.org/10.1080/0952813X.2021.1938697" target="_blank" >https://doi.org/10.1080/0952813X.2021.1938697</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1080/0952813X.2021.1938697" target="_blank" >10.1080/0952813X.2021.1938697</a>

Alternative languages

  • Result language

    English

  • Original language name

    Planning and Acting in Dynamic Environments: Identifying and Avoiding Dangerous Situations

  • Original language description

    In dynamic environments, external events might occur and modify the environment without the consent of intelligent agents. Plans of the agents might hence be disrupted and, worse, the agents might end up in dead-end states and no longer be able to achieve their goals. Hence, the agents should monitor the environment during plan execution and, if they encounter a dangerous situation, (reactively) act to escape from it. In this paper, we introduce the notion of dangerous states that the agent might encounter during its plan execution in dynamic environments. We present a method for computing a lower bound of the dangerousness of a state after applying a sequence of actions. That method is leveraged in identifying situations in which the agent has to start acting to avoid danger. We present two types of such behaviour - purely reactive and proactive (eliminating the source of danger). The introduced concepts for planning with dangerous states are implemented and tested in two scenarios - a simple RPG-like game, called Dark Dungeon, and a platform game inspired by the Perestroika video game. The results show that reasoning with dangerous states achieves a better success rate (reaching the goals) than naive planning or rule-based techniques. (A minimal illustrative sketch of this monitoring idea follows this list.)

  • Czech name

  • Czech description
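
The abstract above describes monitoring plan execution against a lower bound on state dangerousness, with two avoidance behaviours (purely reactive and proactive). The Python sketch below is a toy illustration of that idea under assumed names and data structures (the 1-D corridor world, State, apply_action, dangerousness, the threshold value are all invented for illustration); it is not the authors' implementation from the paper.

```python
# Illustrative sketch only - not the method from the paper.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class State:
    """Toy state: agent position and hazard position on a 1-D corridor."""
    agent: int
    hazard: int


def apply_action(state: State, action: str) -> State:
    """Deterministic effect of the agent's own action (external events ignored)."""
    if action == "right":
        return State(state.agent + 1, state.hazard)
    if action == "left":
        return State(state.agent - 1, state.hazard)
    if action == "disarm" and abs(state.agent - state.hazard) <= 1:
        return State(state.agent, -10_000)  # hazard eliminated (moved far away)
    return state


def dangerousness(state: State) -> float:
    """Toy dangerousness measure: the closer the hazard, the more dangerous."""
    return 1.0 / (1 + abs(state.agent - state.hazard))


def dangerousness_lower_bound(state: State, plan: List[str]) -> float:
    """Toy lower bound on the danger met while executing the remaining plan.

    The agent is guaranteed to pass through the simulated states, and external
    events could only add danger, so the maximum dangerousness along the
    simulated trajectory is a (very crude) lower bound.
    """
    bound = dangerousness(state)
    for action in plan:
        state = apply_action(state, action)
        bound = max(bound, dangerousness(state))
    return bound


def execute(state: State, plan: List[str], threshold: float = 0.5,
            proactive: bool = True) -> State:
    """Execute the plan, interrupting it whenever the danger bound is too high."""
    for i, action in enumerate(plan):
        if dangerousness_lower_bound(state, plan[i:]) >= threshold:
            if proactive:
                # Proactive behaviour: try to eliminate the source of danger.
                state = apply_action(state, "disarm")
            else:
                # Purely reactive behaviour: step away from the hazard.
                state = apply_action(
                    state, "left" if state.agent <= state.hazard else "right")
        state = apply_action(state, action)
    return state


if __name__ == "__main__":
    start = State(agent=0, hazard=3)
    print("reactive :", execute(start, ["right"] * 4, proactive=False))
    print("proactive:", execute(start, ["right"] * 4, proactive=True))
```

In this toy run the purely reactive agent keeps backing away and never gets past the hazard, while the proactive agent disarms it and reaches the end of the corridor; the paper evaluates its much richer variants of these behaviours in the Dark Dungeon and Perestroika-inspired scenarios.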

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    The result was created during the realization of more than one project. More information is available in the Projects tab.

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific to the result type

  • Name of the periodical

    Journal of Experimental and Theoretical Artificial Intelligence

  • ISSN

    0952-813X

  • e-ISSN

    1362-3079

  • Volume of the periodical

    34

  • Issue of the periodical within the volume

    6

  • Country of publishing house

    GB - UNITED KINGDOM

  • Number of pages

    24

  • Pages from-to

    925-948

  • UT code for WoS article

    000668480700001

  • EID of the result in the Scopus database

    2-s2.0-85109083631