Body models in humans and robots
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00362226" target="_blank" >RIV/68407700:21230/22:00362226 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.4324/9780429321542-18" target="_blank" >https://doi.org/10.4324/9780429321542-18</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.4324/9780429321542-18" target="_blank" >10.4324/9780429321542-18</a>
Alternative languages
Result language
English
Original language name
Body models in humans and robots
Original language description
Humans excel in combining information from multiple sensory modalities, controlling their complex bodies, and adapting to growth, failures, or the use of tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent – yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent – human or robot – has developed. In the biological realm, evidence has been accumulated by diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. In this chapter, we compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. In some sense, robots have a lot in common with Ian Waterman – “the man who lost his body” – in that they rely on an explicit, veridical body model (the body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. The core of this work is a detailed look at the somatoperceptual processing “pipeline” from inputs (tactile and proprioceptive afference, efferent commands), through “body representations” (superficial schema, postural schema, model of body size and shape), to perceptual processes such as the spatial localization of touch. A direct comparison with solutions to the same task in robots allows us to make important steps in converting this conceptual schematic into a computational model. As an additional aspect, we briefly look at the question of why robots do not experience body illusions. Finally, we discuss how robots can inform the biological sciences dealing with body representations and which of the features of the “body in the brain” should be transferred to robots, giving rise to more adaptive, resilient, self-calibrating machines.
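As a rough illustration of the pipeline named in the description above (inputs, body representations, spatial localization of touch), the minimal sketch below combines a superficial schema (where a touch lands on the skin) with a postural schema (forward kinematics from joint angles) and a model of body size to localize a touch in external space. It is only a toy example under stated assumptions: the planar two-link arm, the link lengths, and the taxel layout are hypothetical and are not taken from the chapter.

```python
import numpy as np

# Toy model of body size and shape: two planar links (metres; illustrative values).
LINK_LENGTHS = [0.3, 0.25]

# Superficial schema: each taxel's place on the body surface, given as
# (link index, distance along that link from its proximal joint).
SUPERFICIAL_SCHEMA = {
    "taxel_7": (1, 0.10),  # 10 cm along the forearm link
}

def postural_schema(joint_angles):
    """Postural schema: joint positions in external space via forward kinematics."""
    positions = [np.zeros(2)]
    angle = 0.0
    for theta, length in zip(joint_angles, LINK_LENGTHS):
        angle += theta
        positions.append(positions[-1] + length * np.array([np.cos(angle), np.sin(angle)]))
    return positions, np.cumsum(joint_angles)

def localize_touch(taxel_id, joint_angles):
    """Spatial localization of touch: remap a skin location into external space
    by combining the superficial schema with the current postural schema."""
    link, offset = SUPERFICIAL_SCHEMA[taxel_id]
    joints, cumulative_angles = postural_schema(joint_angles)
    direction = np.array([np.cos(cumulative_angles[link]), np.sin(cumulative_angles[link])])
    return joints[link] + offset * direction

# Example: touch on the forearm taxel with the elbow flexed by 90 degrees
# -> approximately [0.3, 0.1] in external coordinates.
print(localize_touch("taxel_7", joint_angles=[0.0, np.pi / 2]))
```

The same skin location maps to a different external location whenever the posture changes, which is the remapping step the pipeline has to solve in both humans and robots.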
Czech name
—
Czech description
—
Classification
Type
C - Chapter in a specialist book
CEP classification
—
OECD FORD branch
50103 - Cognitive sciences
Result continuities
Project
<a href="/en/project/GX20-24186X" target="_blank" >GX20-24186X: Whole-body awareness for safe and natural interaction: from brains to collaborative robots</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2022
Confidentiality
S - Complete and accurate data on the project are not subject to protection under special legal regulations
Data specific for result type
Book/collection name
The Routledge Handbook of Bodily Awareness
ISBN
978-0-367-33731-5
Number of pages of the result
13
Pages from-to
185-197
Number of pages of the book
570
Publisher name
ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD
Place of publication
Oxon
UT code for WoS chapter
—