Video–text retrieval via multi-modal masked transformer and adaptive attribute-aware graph convolutional network
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3ACCYPWNRV" target="_blank" >RIV/00216208:11320/25:CCYPWNRV - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85182811160&doi=10.1007%2fs00530-023-01205-8&partnerID=40&md5=f43954a8e2452cba63564609df02082a" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85182811160&doi=10.1007%2fs00530-023-01205-8&partnerID=40&md5=f43954a8e2452cba63564609df02082a</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/s00530-023-01205-8" target="_blank" >10.1007/s00530-023-01205-8</a>
Alternative languages
Result language
English
Title in the original language
Video–text retrieval via multi-modal masked transformer and adaptive attribute-aware graph convolutional network
Result description in the original language
Despite significant advancements in deep learning-based video–text retrieval methods, three challenges persist: aligning fine-grained semantic information from text and video, ensuring that the obtained textual and video feature representations capture the primary semantic information while maintaining good discriminability, and measuring the semantic similarity between different instances. To tackle these issues, we introduce an end-to-end video–text retrieval framework that exploits a Multi-Modal Masked Transformer and an Adaptive Attribute-Aware Graph Convolutional Network (M³Trans-A³GCN). Specifically, the features extracted from videos and texts are fed into M³Trans to jointly integrate the multi-modal content and mask irrelevant multi-modal context. Subsequently, a novel GCN with an adaptive correlation matrix (i.e., A³GCN) is constructed to obtain discriminative video representations for video–text retrieval. To better measure the semantic similarity between video–text pairs during training, we propose a novel Text-semantic-guided Multi-Modal Cross-Entropy (TMCE) loss function. Here, the similarity between different video–text pairs within a batch is computed based on the features of the corresponding texts rather than their instance labels. Comprehensive experimental results on three benchmark datasets, MSR-VTT, MSVD and LSMDC, demonstrate the superiority of M³Trans-A³GCN compared with state-of-the-art methods in video–text retrieval. © 2024, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
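To illustrate the adaptive-correlation idea mentioned above, the following is a minimal PyTorch sketch of a graph-convolution layer whose correlation (adjacency) matrix is predicted from the node features themselves rather than fixed in advance. The class name AdaptiveGCNLayer, the dot-product construction of the matrix and all dimensions are illustrative assumptions, not the paper's exact A³GCN formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGCNLayer(nn.Module):
    """One graph-convolution layer with a correlation matrix learned
    from the node features (a sketch of the adaptive-correlation idea)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Projections used to build the adaptive correlation matrix.
        self.query = nn.Linear(in_dim, in_dim, bias=False)
        self.key = nn.Linear(in_dim, in_dim, bias=False)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, in_dim), e.g. attribute/frame features.
        q, k = self.query(nodes), self.key(nodes)
        # Adaptive correlation matrix, row-normalised with softmax.
        corr = torch.softmax(q @ k.transpose(-1, -2) / q.size(-1) ** 0.5, dim=-1)
        # Standard GCN propagation: aggregate neighbours, then transform.
        return F.relu(self.weight(corr @ nodes))

if __name__ == "__main__":
    layer = AdaptiveGCNLayer(in_dim=512, out_dim=512)
    video_nodes = torch.randn(2, 20, 512)  # 2 videos, 20 nodes each (hypothetical sizes)
    print(layer(video_nodes).shape)        # torch.Size([2, 20, 512])
```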
Title in English
Video–text retrieval via multi-modal masked transformer and adaptive attribute-aware graph convolutional network
Result description in English
Despite significant advancements in deep learning-based video–text retrieval methods, three challenges persist: aligning fine-grained semantic information from text and video, ensuring that the obtained textual and video feature representations capture the primary semantic information while maintaining good discriminability, and measuring the semantic similarity between different instances. To tackle these issues, we introduce an end-to-end video–text retrieval framework that exploits a Multi-Modal Masked Transformer and an Adaptive Attribute-Aware Graph Convolutional Network (M³Trans-A³GCN). Specifically, the features extracted from videos and texts are fed into M³Trans to jointly integrate the multi-modal content and mask irrelevant multi-modal context. Subsequently, a novel GCN with an adaptive correlation matrix (i.e., A³GCN) is constructed to obtain discriminative video representations for video–text retrieval. To better measure the semantic similarity between video–text pairs during training, we propose a novel Text-semantic-guided Multi-Modal Cross-Entropy (TMCE) loss function. Here, the similarity between different video–text pairs within a batch is computed based on the features of the corresponding texts rather than their instance labels. Comprehensive experimental results on three benchmark datasets, MSR-VTT, MSVD and LSMDC, demonstrate the superiority of M³Trans-A³GCN compared with state-of-the-art methods in video–text retrieval. © 2024, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
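To make the text-semantic-guided loss concrete, below is a minimal PyTorch sketch of a cross-entropy loss whose batch-level targets are soft distributions derived from text–text similarity instead of one-hot instance labels. The function name text_guided_cross_entropy, the temperature value and the single retrieval direction are assumptions for illustration, not the exact TMCE definition from the paper.

```python
import torch
import torch.nn.functional as F

def text_guided_cross_entropy(video_emb, text_emb, temperature=0.05):
    """Contrastive-style loss where the target distribution over a batch is
    derived from text-text similarity rather than one-hot instance labels
    (an illustrative sketch, not the paper's exact TMCE loss)."""
    v = F.normalize(video_emb, dim=-1)   # (batch, dim)
    t = F.normalize(text_emb, dim=-1)    # (batch, dim)

    # Predicted video-to-text similarity distribution.
    logits = v @ t.t() / temperature

    # Soft targets: how similar each text is to every other text in the batch.
    with torch.no_grad():
        targets = torch.softmax(t @ t.t() / temperature, dim=-1)

    # Cross-entropy between the two distributions (the text-to-video
    # direction could be added symmetrically).
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

if __name__ == "__main__":
    v = torch.randn(8, 512, requires_grad=True)  # hypothetical batch of video embeddings
    t = torch.randn(8, 512)                      # matching text embeddings
    loss = text_guided_cross_entropy(v, t)
    loss.backward()
    print(float(loss))
```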
Classification
Type
J<sub>SC</sub> - Article in a periodical indexed in the SCOPUS database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
Multimedia Systems
ISSN
0942-4962
e-ISSN
—
Volume of the periodical
30
Issue of the periodical within the volume
1
Country of the periodical's publisher
US - United States of America
Number of pages of the result
12
Pages from-to
1-12
UT code of the article in WoS
—
EID of the result in the Scopus database
2-s2.0-85182811160