Imitrob: Imitation Learning Dataset for Training and Evaluating 6D Object Pose Estimators

The result's identifiers

  • Result code in IS VaVaI

    RIV/68407700:21730/23:00370713 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F23%3A00370713)

  • Result on the web

    https://doi.org/10.1109/LRA.2023.3259735

  • DOI - Digital Object Identifier

    10.1109/LRA.2023.3259735 (http://dx.doi.org/10.1109/LRA.2023.3259735)

Alternative languages

  • Result language

    English

  • Original language name

    Imitrob: Imitation Learning Dataset for Training and Evaluating 6D Object Pose Estimators

  • Original language description

    This letter introduces a dataset for training and evaluating methods for 6D pose estimation of hand-held tools in task demonstrations captured by a standard RGB camera. Despite the significant progress of 6D pose estimation methods, their performance is usually limited for heavily occluded objects, which is a common case in imitation learning, where the object is typically partially occluded by the manipulating hand. Currently, there is a lack of datasets that would enable the development of robust 6D pose estimation methods for these conditions. To overcome this problem, we collect a new dataset (Imitrob) aimed at 6D pose estimation in imitation learning and other applications where a human holds a tool and performs a task. The dataset contains image sequences of nine different tools and twelve manipulation tasks with two camera viewpoints, four human subjects, and left/right hand. Each image is accompanied by an accurate ground truth measurement of the 6D object pose obtained by the HTC Vive motion tracking device. The use of the dataset is demonstrated by training and evaluating a recent 6D object pose estimation method (DOPE) in various setups.

  • Czech name

  • Czech description
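
The description above refers to 6D object poses, i.e. a 3D rotation and a 3D translation of the hand-held tool relative to the camera. As a minimal illustration of what such a ground-truth pose annotation encodes, the Python sketch below projects the corners of an object's 3D bounding box into the image, given a pose (rotation matrix R, translation t) and camera intrinsics K. All variable names and numeric values are illustrative assumptions, not the dataset's actual file format or API.

    import numpy as np

    def project_box_corners(half_extents, R, t, K):
        """Project the 8 corners of an object's 3D bounding box into the image.

        half_extents : (3,) half-sizes of the box along x, y, z (metres)
        R            : (3, 3) rotation matrix (object frame -> camera frame)
        t            : (3,)   translation vector (metres, camera frame)
        K            : (3, 3) camera intrinsic matrix
        Returns (8, 2) pixel coordinates.
        """
        sx, sy, sz = half_extents
        # All sign combinations give the 8 box corners in the object frame.
        corners = np.array([[x, y, z]
                            for x in (-sx, sx)
                            for y in (-sy, sy)
                            for z in (-sz, sz)])       # (8, 3)
        cam_pts = corners @ R.T + t                    # rigid transform into the camera frame
        pix = cam_pts @ K.T                            # perspective projection
        return pix[:, :2] / pix[:, 2:3]                # normalise by depth

    # Illustrative values: identity rotation, tool 0.5 m in front of the camera.
    K = np.array([[600.0,   0.0, 320.0],
                  [  0.0, 600.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    uv = project_box_corners(np.array([0.05, 0.02, 0.15]),
                             np.eye(3), np.array([0.0, 0.0, 0.5]), K)
    print(uv)

Keypoint-based estimators such as DOPE regress projected cuboid corners of this kind and recover the pose from them, which is one way ground-truth 6D poses in a dataset like this are typically consumed during training and evaluation.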

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    The result was created during the realization of more than one project. More information is available in the Projects tab.

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    IEEE Robotics and Automation Letters

  • ISSN

    2377-3766

  • e-ISSN

    2377-3766

  • Volume of the periodical

    8

  • Issue of the periodical within the volume

    5

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    8

  • Pages from-to

    2788-2795

  • UT code for WoS article

    000964797800003

  • EID of the result in the Scopus database

    2-s2.0-85151544096