Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F21%3A00350230" target="_blank" >RIV/68407700:21230/21:00350230 - isvavai.cz</a>

  • Alternative codes found

    RIV/68407700:21730/21:00350230

  • Result on the web

    <a href="https://doi.org/10.1109/LRA.2021.3068106" target="_blank" >https://doi.org/10.1109/LRA.2021.3068106</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/LRA.2021.3068106" target="_blank" >10.1109/LRA.2021.3068106</a>

Alternative languages

  • Result language

    English

  • Original language name

    Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning

  • Original language description

    Visual navigation is essential for many applications in robotics, from manipulation through mobile robotics to automated driving. Deep reinforcement learning (DRL) provides an elegant, map-free approach that integrates image processing, localization, and planning in one module, which can be trained and therefore optimized for a given environment. To date, however, DRL-based visual navigation has been validated exclusively in simulation, where the simulator provides information that is not available in the real world, e.g., the robot's position or segmentation masks. This precludes the use of the learned policy on a real robot. We therefore present a novel approach that enables direct deployment of the trained policy on real robots. We have designed a new, powerful simulator capable of domain randomization. To facilitate training, we propose visual auxiliary tasks and a tailored reward scheme. The policy is fine-tuned on images collected from real-world environments. We have evaluated the method on a mobile robot in a real office environment. Training took approximately 30 hours on a single GPU. In 30 navigation experiments, the robot reached a 0.3-meter neighbourhood of the goal in more than 86.7% of cases. This makes the proposed method directly applicable to tasks like mobile manipulation.

  • Czech name

  • Czech description
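
The description above mentions a tailored reward scheme for training the navigation policy. As an illustration only, here is a minimal Python sketch of the kind of shaped reward such a setup might use; the terms and constants (goal bonus, progress shaping, collision penalty, step cost) are assumptions for exposition, not the scheme actually used in the paper.

    # Hypothetical shaped reward for goal-directed visual navigation.
    # All constants are illustrative assumptions, not values from the paper.
    GOAL_RADIUS = 0.3       # metres; matches the 0.3 m success neighbourhood above
    GOAL_BONUS = 10.0       # terminal reward for reaching the goal
    COLLISION_PENALTY = -1.0
    STEP_COST = -0.01       # small per-step cost discourages long trajectories
    PROGRESS_SCALE = 1.0    # rewards reducing the distance to the goal

    def navigation_reward(prev_dist: float, curr_dist: float, collided: bool) -> float:
        """Per-step reward computed from distance-to-goal and collision events."""
        if curr_dist <= GOAL_RADIUS:
            return GOAL_BONUS
        reward = STEP_COST + PROGRESS_SCALE * (prev_dist - curr_dist)
        if collided:
            reward += COLLISION_PENALTY
        return reward

In a full DRL pipeline, a policy trained against such a reward would also optimize the visual auxiliary losses (e.g., predicting depth or segmentation from the camera image) that the description mentions as aids to training.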

Classification

  • Type

    Jimp - Article in a specialist periodical included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/EF15_003%2F0000470" target="_blank" >EF15_003/0000470: Robotics 4 Industry 4.0</a><br>

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2021

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    IEEE Robotics and Automation Letters

  • ISSN

    2377-3766

  • e-ISSN

    2377-3766

  • Volume of the periodical

    6

  • Issue of the periodical within the volume

    3

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    8

  • Pages from-to

    4345-4352

  • UT code for WoS article

    000639767800013

  • EID of the result in the Scopus database

    2-s2.0-85103234640