Influence of Late Fusion of High-Level Features on User Relevance Feedback for Videos
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F22%3A00364319" target="_blank" >RIV/68407700:21730/22:00364319 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1145/3552467.3554795" target="_blank" >https://doi.org/10.1145/3552467.3554795</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1145/3552467.3554795" target="_blank" >10.1145/3552467.3554795</a>
Alternative languages
Result language
English
Original language name
Influence of Late Fusion of High-Level Features on User Relevance Feedback for Videos
Original language description
Content-based media retrieval relies on multimodal data representations. For videos, these representations mainly focus on the textual, visual, and audio modalities. While the modality representations can be used individually, combining their information can improve the overall retrieval experience. For video collections, retrieval focuses on either finding a full-length video or specific segment(s) from one or more videos. For the former, the textual metadata along with broad descriptions of the contents are useful. For the latter, visual and audio modality representations are preferable, as they represent the contents of specific segments in videos. Interactive learning approaches, such as user relevance feedback, have shown promising results when solving exploration and search tasks in larger collections. When combining modality representations in user relevance feedback, a form of late modality fusion is often applied. While this generally tends to improve retrieval, its performance for video collections with multiple modality representations of high-level features is not well known. In this study we analyse the effects of late fusion using high-level features, such as semantic concepts, actions, scenes, and audio. From our experiments on three video datasets, V3C1, Charades, and VGG-Sound, we show that fusion works well, but depending on the task or dataset, excluding one or more modalities can improve results. When it is clear that a modality is better suited to a task, setting a preference that enhances that modality's influence in the fusion process can also be greatly beneficial. Furthermore, we show that mixing fusion results with results from individual modalities can be better than performing fusion alone.
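The weighted late fusion and modality preference described in the abstract can be illustrated with a minimal sketch. This is an illustration only, not the paper's implementation: the function `late_fusion`, the modality names, the min-max normalisation step, and the example weights are all assumptions chosen for clarity.

```python
# Minimal sketch of weighted late score fusion over high-level feature
# modalities. Illustration only -- not the paper's implementation; the
# function name, modality names, normalisation, and weights are assumed.
import numpy as np

def late_fusion(scores, weights=None):
    """Fuse per-modality relevance scores for a set of video segments.

    scores  -- dict mapping modality name to an array of shape (n_items,)
               holding one relevance score per item, e.g. from a user-
               relevance-feedback classifier trained on that modality.
    weights -- optional modality preference; a higher weight increases
               that modality's influence on the fused ranking.
    """
    names = list(scores)
    if weights is None:
        weights = {m: 1.0 for m in names}  # plain average by default
    fused = np.zeros(len(scores[names[0]]), dtype=float)
    total = sum(weights[m] for m in names)
    for m in names:
        s = np.asarray(scores[m], dtype=float)
        # Min-max normalise so modalities with different score scales
        # contribute comparably before weighting (an assumption here).
        span = s.max() - s.min()
        if span > 0:
            s = (s - s.min()) / span
        fused += weights[m] * s
    return fused / total

# Toy example: three high-level feature modalities over five segments,
# with a preference that doubles the influence of semantic concepts.
rng = np.random.default_rng(0)
scores = {m: rng.random(5) for m in ("concepts", "actions", "audio")}
fused = late_fusion(scores, weights={"concepts": 2.0, "actions": 1.0, "audio": 1.0})
ranking = np.argsort(-fused)  # best-scoring segment first
print(ranking)
```

Mixing fusion results with individual-modality results, as the abstract suggests, could then amount to interleaving `ranking` with the per-modality rankings `np.argsort(-scores[m])`.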
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/en/project/EF15_003%2F0000470" target="_blank" >EF15_003/0000470: Robotics 4 Industry 4.0</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2022
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
IMuR '22: Proceedings of the 2nd International Workshop on Interactive Multimedia Retrieval
ISBN
978-1-4503-9497-0
ISSN
—
e-ISSN
—
Number of pages
8
Pages from-to
17-24
Publisher name
Association for Computing Machinery
Place of publication
New York
Event location
Lisbon
Event date
Oct 10, 2022
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—