Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - A deep learning approach

Result identifiers

  • Result code in IS VaVaI

    RIV/68407700:21230/19:00336243 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F19%3A00336243)

  • Result on the web

    http://dx.doi.org/10.1109/DEVLRN.2019.8850681

  • DOI - Digital Object Identifier

    10.1109/DEVLRN.2019.8850681

Alternative languages

  • Result language

    English

  • Original language name

    Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - A deep learning approach

  • Original language description

    The development of reaching in infants has been studied for nearly nine decades. Originally, it was thought that early reaching is visually guided, but more recent evidence suggests 'visually elicited' reaching, i.e., the infant gazes at the object rather than at its hand during the reaching movement. The importance of haptic feedback has also been emphasized. Inspired by these findings, in this work we use the simulated iCub humanoid robot to construct a model of reaching development. The robot is presented with different objects, gazes at them, and performs motor babbling with one of its arms. Successful contacts with the object are detected through tactile sensors on the hand and forearm. Such events constitute the training set: images from the robot's two eyes, head joints, tactile activation, and arm joints. A deep neural network is trained with images and head joints as inputs and arm configuration and touch as outputs. After learning, the network can successfully infer arm configurations that would result in a successful reach, together with a prediction of tactile activation (i.e., which body part would make contact). Our main contribution is twofold: (i) our pipeline is end-to-end, from stereo images and head joints (6 DoF) to arm-torso configurations (10 DoF) and tactile activations, without any preprocessing, explicit coordinate transformations, etc.; (ii) unique to this approach, reaches with multiple effectors corresponding to different regions of the sensitive skin are possible. (A minimal code sketch of this input-output mapping follows after this list.)

  • Czech name

  • Czech description
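
For illustration, here is a minimal sketch of the input-output mapping described in the abstract: stereo images plus head joints (6 DoF) in, an arm-torso configuration (10 DoF) plus tactile activation out. This is a hedged sketch assuming a PyTorch implementation; the class name ReachingNet, the layer sizes, and the number of skin regions (n_taxel_regions) are assumptions for illustration, not details taken from the paper.

# Hypothetical sketch of the end-to-end mapping described in the abstract.
# Architecture details (layer sizes, pooling, number of skin regions) are
# illustrative assumptions, not the authors' actual network.
import torch
import torch.nn as nn

class ReachingNet(nn.Module):
    def __init__(self, n_taxel_regions: int = 4):
        super().__init__()
        # Shared convolutional encoder applied to each eye's image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),  # -> 32 * 4 * 4 = 512 features per eye
        )
        # Fuse both eyes' features with the 6-DoF head joint vector.
        self.fusion = nn.Sequential(
            nn.Linear(2 * 512 + 6, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # Two output heads: arm-torso joint configuration (regression)
        # and tactile activation per skin region (which body part touches).
        self.arm_head = nn.Linear(128, 10)
        self.touch_head = nn.Linear(128, n_taxel_regions)

    def forward(self, left_img, right_img, head_joints):
        feats = torch.cat(
            [self.encoder(left_img), self.encoder(right_img), head_joints],
            dim=1,
        )
        h = self.fusion(feats)
        return self.arm_head(h), self.touch_head(h)

# Usage: a batch of 2 stereo frames (3x64x64 per eye) plus head joint readings.
net = ReachingNet()
left = torch.randn(2, 3, 64, 64)
right = torch.randn(2, 3, 64, 64)
head = torch.randn(2, 6)
arm_cfg, touch_logits = net(left, right, head)
print(arm_cfg.shape, touch_logits.shape)  # torch.Size([2, 10]) torch.Size([2, 4])

The two output heads reflect the abstract's claim that the network predicts both a reaching arm configuration and which skin region would make contact, with no explicit coordinate transformations in between.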

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    50103 - Cognitive sciences

Result continuities

  • Project

    GJ17-15697Y: Robot self-calibration and safe physical human-robot interaction inspired by body representations in primate brains

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2019

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)

  • ISBN

    978-1-5386-8128-2

  • ISSN

    2161-9484

  • e-ISSN

    2161-9484

  • Number of pages

    8

  • Pages from-to

    163-170

  • Publisher name

    IEEE

  • Place of publication

    Anchorage, Alaska

  • Event location

    Oslo

  • Event date

    Aug 19, 2019

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article