Neuro-Symbolic Reasoning for Multimodal Referring Expression Comprehension in HMI Systems
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AJVQ7BT9M" target="_blank" >RIV/00216208:11320/25:JVQ7BT9M - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185111766&doi=10.1007%2fs00354-024-00243-8&partnerID=40&md5=11aea793523eb72cd189fd89f01fa90f" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185111766&doi=10.1007%2fs00354-024-00243-8&partnerID=40&md5=11aea793523eb72cd189fd89f01fa90f</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/s00354-024-00243-8" target="_blank" >10.1007/s00354-024-00243-8</a>
Alternative languages
Result language
English
Title in original language
Neuro-Symbolic Reasoning for Multimodal Referring Expression Comprehension in HMI Systems
Result description in original language
Conventional Human–Machine Interaction (HMI) interfaces have predominantly relied on GUIs and voice commands. However, natural human communication also includes non-verbal cues such as hand gestures, notably pointing. Recent work on HMI systems has therefore tried to incorporate pointing gestures as an input, making significant progress in recognizing them and integrating them with voice commands. However, existing approaches often treat these input modalities independently, limiting their capacity to handle complex multimodal instructions that require intricate reasoning over language and gestures. Meanwhile, multimodal tasks requiring complex reasoning are being tackled in the vision-and-language domain, but these typically do not include gestures such as pointing. To bridge this gap, we study one of these challenging multimodal tasks, Referring Expression Comprehension (REC), within multimodal HMI systems that incorporate pointing gestures. We present a virtual setup in which a robot shares an environment with a user and is tasked with identifying objects based on the user’s linguistic and gestural instructions. To address this challenge, we propose a hybrid neuro-symbolic model that combines the versatility of deep learning with the interpretability of symbolic reasoning. Our contributions include a challenging multimodal REC dataset for HMI systems, an interpretable neuro-symbolic model, and an assessment of its ability to generalize its reasoning to unseen environments, complemented by an in-depth qualitative analysis of the model’s inner workings. © The Author(s) 2024.
Title in English
Neuro-Symbolic Reasoning for Multimodal Referring Expression Comprehension in HMI Systems
Result description in English
Conventional Human–Machine Interaction (HMI) interfaces have predominantly relied on GUIs and voice commands. However, natural human communication also includes non-verbal cues such as hand gestures, notably pointing. Recent work on HMI systems has therefore tried to incorporate pointing gestures as an input, making significant progress in recognizing them and integrating them with voice commands. However, existing approaches often treat these input modalities independently, limiting their capacity to handle complex multimodal instructions that require intricate reasoning over language and gestures. Meanwhile, multimodal tasks requiring complex reasoning are being tackled in the vision-and-language domain, but these typically do not include gestures such as pointing. To bridge this gap, we study one of these challenging multimodal tasks, Referring Expression Comprehension (REC), within multimodal HMI systems that incorporate pointing gestures. We present a virtual setup in which a robot shares an environment with a user and is tasked with identifying objects based on the user’s linguistic and gestural instructions. To address this challenge, we propose a hybrid neuro-symbolic model that combines the versatility of deep learning with the interpretability of symbolic reasoning. Our contributions include a challenging multimodal REC dataset for HMI systems, an interpretable neuro-symbolic model, and an assessment of its ability to generalize its reasoning to unseen environments, complemented by an in-depth qualitative analysis of the model’s inner workings. © The Author(s) 2024.
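The described pipeline can be pictured as a neural perception stage feeding a symbolic executor. The following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation: `DetectedObject`, `pointing_score`, and `resolve` are illustrative stand-ins for a neural detector, a gesture estimator, and a symbolic reasoner.

```python
# Hypothetical sketch of a neuro-symbolic REC step: a neural detector
# proposes objects, a language parser yields symbolic attribute
# constraints, and a symbolic executor filters candidates, using the
# pointing gesture to disambiguate. All names are illustrative.
from dataclasses import dataclass
import math

@dataclass
class DetectedObject:      # stand-in for a neural detector's output
    label: str
    color: str
    position: tuple        # (x, y) in the shared environment

def pointing_score(obj: DetectedObject, origin: tuple, direction: tuple) -> float:
    """Score how well an object lies along the user's pointing ray."""
    dx = obj.position[0] - origin[0]
    dy = obj.position[1] - origin[1]
    norm = math.hypot(dx, dy) or 1.0
    # Cosine similarity between the (unit) pointing direction and the
    # origin-to-object vector; 1.0 means the object is pointed at exactly.
    return (dx * direction[0] + dy * direction[1]) / norm

def resolve(objects, constraints, origin, direction):
    """Symbolic executor: apply parsed attribute constraints, then pick
    the remaining candidate best aligned with the pointing gesture."""
    candidates = [o for o in objects
                  if all(getattr(o, k) == v for k, v in constraints.items())]
    return max(candidates,
               key=lambda o: pointing_score(o, origin, direction),
               default=None)

scene = [DetectedObject("mug", "red", (2.0, 1.0)),
         DetectedObject("mug", "blue", (1.0, 3.0)),
         DetectedObject("book", "red", (3.0, 2.0))]

# "Pick up that red mug" + pointing gesture: constraints from a
# (hypothetical) language parser plus an estimated pointing ray.
target = resolve(scene, {"label": "mug", "color": "red"},
                 origin=(0.0, 0.0), direction=(0.89, 0.45))
print(target)  # DetectedObject(label='mug', color='red', position=(2.0, 1.0))
```

The split mirrors the neuro-symbolic combination named in the abstract: learned components supply (possibly noisy) percepts and parses, while the final referent selection remains an inspectable symbolic step, which is where the interpretability claim comes from.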
Classification
Type
J<sub>SC</sub> - Article in a periodical indexed in the SCOPUS database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
—
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
New Generation Computing
ISSN
0288-3635
e-ISSN
—
Journal volume
2024
Issue number within the volume
2024
Country of the journal publisher
US - United States of America
Number of pages
20
Pages from-to
579-598
UT WoS code of the article
—
Scopus EID of the result
2-s2.0-85185111766