Communicating human intent to a robotic companion by multi-type gesture sentences

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F23%3A00370710" target="_blank" >RIV/68407700:21730/23:00370710 - isvavai.cz</a>

  • Result on the web

    <a href="https://doi.org/10.1109/IROS55552.2023.10341944" target="_blank" >https://doi.org/10.1109/IROS55552.2023.10341944</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/IROS55552.2023.10341944" target="_blank" >10.1109/IROS55552.2023.10341944</a>

Alternative languages

  • Result language

    English

  • Original language name

    Communicating human intent to a robotic companion by multi-type gesture sentences

  • Original language description

    Human-robot collaboration in home and industrial workspaces is on the rise. However, the communication between robots and humans is a bottleneck. Although people use a combination of different types of gestures to complement speech, only a few robotic systems utilize gestures for communication. In this paper, we propose a gesture pseudo-language and show how multiple types of gestures can be combined to express human intent to a robot (i.e., expressing both the desired action and its parameters - e.g., pointing to an object and showing that the object should be emptied into a bowl). The demonstrated gestures and the perceived tabletop scene (object poses detected by CosyPose) are processed in real time to extract the human's intent. We utilize behavior trees to generate reactive robot behavior that handles various possible states of the world (e.g., a drawer has to be opened before an object is placed into it) and recovers from errors (e.g., when the scene changes). Furthermore, our system enables switching between direct teleoperation of the end-effector and high-level operation using the proposed gesture sentences. The system is evaluated on increasingly complex tasks using a real 7-DoF Franka Emika Panda manipulator. Controlling the robot via action gestures lowered the execution time by up to 60%, compared to direct teleoperation.

    (See the behavior-tree sketch after this list for an illustration of the reactive pattern described here.)

  • Czech name

  • Czech description
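
The abstract credits behavior trees with handling world-state preconditions (the drawer must be open before an object goes into it) and recovering when the scene changes. The minimal Python sketch below illustrates that general pattern for the drawer example; it is an assumption-laden illustration, not the authors' implementation, and all names in it (Status, Sequence, Fallback, open_drawer, place_object, the world dictionary) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Dict, List


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Node:
    def tick(self, world: Dict[str, bool]) -> Status:
        raise NotImplementedError


@dataclass
class Condition(Node):
    # Leaf that checks a predicate against the current world state.
    check: Callable[[Dict[str, bool]], bool]

    def tick(self, world):
        return Status.SUCCESS if self.check(world) else Status.FAILURE


@dataclass
class Action(Node):
    # Leaf that executes a primitive behavior and reports its status.
    run: Callable[[Dict[str, bool]], Status]

    def tick(self, world):
        return self.run(world)


@dataclass
class Sequence(Node):
    # Tick children left to right; stop at the first FAILURE or RUNNING.
    children: List[Node] = field(default_factory=list)

    def tick(self, world):
        for child in self.children:
            status = child.tick(world)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS


@dataclass
class Fallback(Node):
    # Tick children left to right; stop at the first SUCCESS or RUNNING.
    children: List[Node] = field(default_factory=list)

    def tick(self, world):
        for child in self.children:
            status = child.tick(world)
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE


def open_drawer(world):
    world["drawer_open"] = True
    return Status.SUCCESS


def place_object(world):
    # Fails if the precondition no longer holds (e.g., the scene changed).
    if not world.get("drawer_open"):
        return Status.FAILURE
    world["object_in_drawer"] = True
    return Status.SUCCESS


# "Place the object into the drawer": ensure the drawer is open
# (opening it if necessary), then place the object.
place_into_drawer = Sequence([
    Fallback([
        Condition(lambda w: w.get("drawer_open", False)),
        Action(open_drawer),
    ]),
    Action(place_object),
])

world = {"drawer_open": False}
print(place_into_drawer.tick(world))  # Status.SUCCESS
print(world)  # {'drawer_open': True, 'object_in_drawer': True}
```

Because the tree is re-evaluated from the root on every tick, a precondition that stops holding mid-task (say, someone closes the drawer again) is detected and repaired on the next tick, which is the error-recovery property the abstract attributes to behavior trees.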

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/GA21-31000S" target="_blank" >GA21-31000S: Multimodal representation of robotic actions and tasks applied to Learning by demonstration</a><br>

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

  • ISBN

    978-1-6654-9190-7

  • ISSN

    2153-0858

  • e-ISSN

    2153-0866

  • Number of pages

    7

  • Pages from-to

    9839-9845

  • Publisher name

    IEEE

  • Place of publication

    Piscataway

  • Event location

    Detroit, MI

  • Event date

    Oct 1, 2023

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    001136907803094