Neuro-Symbolic Reasoning for Multimodal Referring Expression Comprehension in HMI Systems
The result's identifiers
Result code in IS VaVaI
RIV/00216208:11320/25:JVQ7BT9M - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AJVQ7BT9M)
Result on the web
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185111766&doi=10.1007%2fs00354-024-00243-8&partnerID=40&md5=11aea793523eb72cd189fd89f01fa90f
DOI - Digital Object Identifier
10.1007/s00354-024-00243-8 (http://dx.doi.org/10.1007/s00354-024-00243-8)
Alternative languages
Result language
English
Original language name
Neuro-Symbolic Reasoning for Multimodal Referring Expression Comprehension in HMI Systems
Original language description
Conventional Human–Machine Interaction (HMI) interfaces have predominantly relied on GUIs and voice commands. However, natural human communication also includes non-verbal cues, such as hand gestures like pointing. Recent work on HMI systems has therefore tried to incorporate pointing gestures as an input, making significant progress in recognizing them and integrating them with voice commands. However, existing approaches often treat these input modalities independently, limiting their capacity to handle complex multimodal instructions that require intricate reasoning over language and gestures. Conversely, multimodal tasks requiring complex reasoning have been tackled in the language-and-vision domain, but these typically do not include gestures such as pointing. To bridge this gap, we explore one such challenging multimodal task, Referring Expression Comprehension (REC), within multimodal HMI systems that incorporate pointing gestures. We present a virtual setup in which a robot shares an environment with a user and is tasked with identifying objects based on the user's language and gestural instructions. To address this challenge, we propose a hybrid neuro-symbolic model combining the versatility of deep learning with the interpretability of symbolic reasoning. Our contributions include a challenging multimodal REC dataset for HMI systems, an interpretable neuro-symbolic model, and an assessment of its ability to generalize its reasoning to unseen environments, complemented by an in-depth qualitative analysis of the model's inner workings. © The Author(s) 2024.
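The abstract above describes a hybrid neuro-symbolic model that grounds a user's language and pointing gesture to objects in a shared environment. As a purely illustrative sketch of that general idea (not the paper's implementation), the following minimal Python snippet stubs the neural outputs, namely the detected scene objects, a parsed symbolic program, and a pointing-ray estimate, and runs a small hand-written symbolic executor over them; every name and value here (Obj, within_pointing_cone, execute, the cone half-angle) is a hypothetical assumption.

from dataclasses import dataclass
import math

@dataclass
class Obj:
    name: str
    color: str
    pos: tuple  # (x, y) position in the shared workspace

def within_pointing_cone(obj, origin, direction, half_angle_deg=15.0):
    # Symbolic "pointed-at" predicate: true when the object lies inside a
    # cone of the given half-angle around the estimated pointing ray.
    dx, dy = obj.pos[0] - origin[0], obj.pos[1] - origin[1]
    onorm = math.hypot(dx, dy) or 1e-9
    dnorm = math.hypot(direction[0], direction[1]) or 1e-9
    cos_sim = (dx * direction[0] + dy * direction[1]) / (onorm * dnorm)
    return cos_sim >= math.cos(math.radians(half_angle_deg))

def execute(program, scene, pointing):
    # Run a tiny symbolic program (an ordered list of filter operations)
    # over the detected scene objects.
    candidates = list(scene)
    for op, arg in program:
        if op == "filter_color":
            candidates = [o for o in candidates if o.color == arg]
        elif op == "filter_pointed_at":
            candidates = [o for o in candidates
                          if within_pointing_cone(o, *pointing)]
    return candidates

# Stubbed "neural" outputs for the instruction "the red one I am pointing at":
scene = [Obj("mug", "red", (2.0, 1.0)),      # red and pointed at
         Obj("block", "red", (-1.5, 2.0)),   # red, but off the pointing ray
         Obj("mug", "blue", (2.2, 0.8))]     # pointed at, but blue
pointing = ((0.0, 0.0), (0.9, 0.45))         # estimated ray origin and direction
program = [("filter_color", "red"), ("filter_pointed_at", None)]

print([o.name for o in execute(program, scene, pointing)])  # -> ['mug']

The point of such a split, per the abstract, is interpretability: each symbolic filter step leaves an inspectable trace of which candidates survived and why, while the neural side supplies the perception and parsing.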
Czech name
—
Czech description
—
Classification
Type
JSC - Article in a specialist periodical, which is included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2024
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
New Generation Computing
ISSN
0288-3635
e-ISSN
—
Volume of the periodical
2024
Issue of the periodical within the volume
2024
Country of publishing house
US - UNITED STATES
Number of pages
20
Pages from-to
579-598
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85185111766