Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - A deep learning approach
Result identifiers
Result code in IS VaVaI
RIV/68407700:21230/19:00336243 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F19%3A00336243)
Result on the web
http://dx.doi.org/10.1109/DEVLRN.2019.8850681
DOI - Digital Object Identifier
10.1109/DEVLRN.2019.8850681
Alternative languages
Result language
English
Title in the original language
Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - A deep learning approach
Result description in the original language
The development of reaching in infants has been studied for nearly nine decades. Originally, early reaching was thought to be visually guided, but more recent evidence suggests 'visually elicited' reaching, i.e. the infant gazes at the object rather than at its hand during the reaching movement. The importance of haptic feedback has also been emphasized. Inspired by these findings, in this work we use the simulated iCub humanoid robot to construct a model of reaching development. The robot is presented with different objects, gazes at them, and performs motor babbling with one of its arms. Successful contacts with the object are detected through tactile sensors on the hand and forearm. Such events constitute the training set, consisting of images from the robot's two eyes, head joints, tactile activations, and arm joints. A deep neural network is trained with the images and head joints as inputs and the arm configuration and touch as outputs. After learning, the network can successfully infer arm configurations that would result in a successful reach, together with a prediction of the tactile activation (i.e. which body part would make contact). Our main contribution is twofold: (i) our pipeline is end-to-end, from stereo images and head joints (6 DoF) to arm-torso configurations (10 DoF) and tactile activations, without any preprocessing, explicit coordinate transformations, etc.; (ii) unique to this approach, reaches with multiple effectors corresponding to different regions of the sensitive skin are possible.
Title in English
Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - A deep learning approach
Result description in English
The development of reaching in infants has been studied for nearly nine decades. Originally, early reaching was thought to be visually guided, but more recent evidence suggests 'visually elicited' reaching, i.e. the infant gazes at the object rather than at its hand during the reaching movement. The importance of haptic feedback has also been emphasized. Inspired by these findings, in this work we use the simulated iCub humanoid robot to construct a model of reaching development. The robot is presented with different objects, gazes at them, and performs motor babbling with one of its arms. Successful contacts with the object are detected through tactile sensors on the hand and forearm. Such events constitute the training set, consisting of images from the robot's two eyes, head joints, tactile activations, and arm joints. A deep neural network is trained with the images and head joints as inputs and the arm configuration and touch as outputs. After learning, the network can successfully infer arm configurations that would result in a successful reach, together with a prediction of the tactile activation (i.e. which body part would make contact). Our main contribution is twofold: (i) our pipeline is end-to-end, from stereo images and head joints (6 DoF) to arm-torso configurations (10 DoF) and tactile activations, without any preprocessing, explicit coordinate transformations, etc.; (ii) unique to this approach, reaches with multiple effectors corresponding to different regions of the sensitive skin are possible.
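To make the described input-output structure concrete, below is a minimal sketch of such a network in PyTorch. It is not the authors' implementation: the image resolution, layer widths, and the two-region tactile output (hand vs. forearm) are illustrative assumptions; only the interface (stereo images plus 6 head joints in, 10 arm-torso joints plus tactile contact out) follows the abstract.

# Minimal sketch (not the paper's implementation) of the end-to-end mapping:
# stereo images + head joints (6 DoF) -> arm-torso joints (10 DoF) + tactile contact.
# Image size, layer widths, and the two skin regions (hand, forearm) are assumptions.
import torch
import torch.nn as nn

class ReachingNet(nn.Module):
    def __init__(self, img_channels=6, head_dof=6, arm_dof=10, n_skin_parts=2):
        super().__init__()
        # Stereo RGB pair stacked along the channel axis (2 x 3 = 6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),   # -> 32 * 4 * 4 = 512 features
        )
        self.fusion = nn.Sequential(
            nn.Linear(512 + head_dof, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.arm_head = nn.Linear(128, arm_dof)         # joint configuration (regression)
        self.touch_head = nn.Linear(128, n_skin_parts)  # contact logits per skin region

    def forward(self, stereo_img, head_joints):
        feat = self.encoder(stereo_img)
        h = self.fusion(torch.cat([feat, head_joints], dim=1))
        return self.arm_head(h), self.touch_head(h)

# Toy batch of random tensors standing in for babbling samples.
model = ReachingNet()
images = torch.randn(8, 6, 64, 64)   # stereo pair, 64x64 per eye (assumed size)
head = torch.randn(8, 6)             # 6 head joint angles
arm_pred, touch_logits = model(images, head)
loss = nn.functional.mse_loss(arm_pred, torch.randn(8, 10)) \
     + nn.functional.binary_cross_entropy_with_logits(
           touch_logits, torch.randint(0, 2, (8, 2)).float())
loss.backward()

Under these assumptions, training on contact events gathered during motor babbling amounts to regressing the recorded arm joints and classifying which skin region made contact with the object.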
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
50103 - Cognitive sciences
Result linkages
Project
GJ17-15697Y: Automatic robot calibration and safe physical human-robot interaction inspired by body representations in the primate brain (https://www.isvavai.cz/cs/project/GJ17-15697Y)
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2019
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of the 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
ISBN
978-1-5386-8128-2
ISSN
2161-9484
e-ISSN
2161-9484
Number of pages
8
Pages from-to
163-170
Publisher name
IEEE
Place of publication
Anchorage, Alaska
Event venue
Oslo
Event date
19 August 2019
Event type by nationality
WRD - Worldwide event
UT WoS article code
—