Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning
Result identifiers
Result code in IS VaVaI
RIV/68407700:21730/23:00371981 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F23%3A00371981)
Result on the web
https://doi.org/10.1109/CVPR52729.2023.01032
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/CVPR52729.2023.01032" target="_blank" >10.1109/CVPR52729.2023.01032</a>
Alternative languages
Result language
English
Title in original language
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning
Description in original language
In this work, we introduce Vid2Seq, a multi-modal single-stage dense event captioning model pretrained on narrated videos which are readily available at scale. The Vid2Seq architecture augments a language model with special time tokens, allowing it to seamlessly predict event boundaries and textual descriptions in the same output sequence. Such a unified model requires large-scale training data, which is not available in current annotated datasets. We show that it is possible to leverage unlabeled narrated videos for dense video captioning, by reformulating sentence boundaries of transcribed speech as pseudo event boundaries, and using the transcribed speech sentences as pseudo event captions. The resulting Vid2Seq model pretrained on the YT-Temporal-1B dataset improves the state of the art on a variety of dense video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions. Vid2Seq also generalizes well to the tasks of video paragraph captioning and video clip captioning, and to few-shot settings. Our code is publicly available at [1].
Title in English
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning
Description in English
In this work, we introduce Vid2Seq, a multi-modal single-stage dense event captioning model pretrained on narrated videos which are readily available at scale. The Vid2Seq architecture augments a language model with special time tokens, allowing it to seamlessly predict event boundaries and textual descriptions in the same output sequence. Such a unified model requires large-scale training data, which is not available in current annotated datasets. We show that it is possible to leverage unlabeled narrated videos for dense video captioning, by reformulating sentence boundaries of transcribed speech as pseudo event boundaries, and using the transcribed speech sentences as pseudo event captions. The resulting Vid2Seq model pretrained on the YT-Temporal-1B dataset improves the state of the art on a variety of dense video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions. Vid2Seq also generalizes well to the tasks of video paragraph captioning and video clip captioning, and to few-shot settings. Our code is publicly available at [1].
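As an aside for the technically inclined reader, the time-token mechanism described above is easy to sketch. The following minimal Python example assumes a hypothetical quantization of timestamps into 100 relative-time tokens; the token names, helper functions, and serialization order are illustrative assumptions, not the authors' released implementation.

def time_to_token(t: float, duration: float, n_bins: int = 100) -> str:
    # Quantize an absolute timestamp (seconds) into one of n_bins special
    # relative-time tokens, e.g. 12.5 s of a 60 s video -> <time_20>.
    bin_idx = min(int(t / duration * n_bins), n_bins - 1)
    return f"<time_{bin_idx}>"

def build_target_sequence(events, duration):
    # Serialize (start, end, caption) events into a single output sequence,
    # interleaving time tokens with caption text, so one decoder pass can
    # predict both event boundaries and descriptions.
    parts = []
    for start, end, caption in sorted(events):
        parts += [time_to_token(start, duration),
                  time_to_token(end, duration),
                  caption]
    return " ".join(parts)

# Two pseudo events derived from transcribed-speech sentences (hypothetical data).
events = [(0.0, 12.5, "a person chops onions"),
          (12.5, 30.0, "the onions are fried in a pan")]
print(build_target_sequence(events, duration=60.0))
# <time_0> <time_20> a person chops onions <time_20> <time_50> the onions are fried in a pan

The same serialization applies at pretraining time, where the (start, end, caption) triples come from sentence-segmented transcribed speech rather than from human annotations.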
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/EF15_003%2F0000468" target="_blank" >EF15_003/0000468: Inteligentní strojové vnímání</a><br>
Linkages
P - R&D project financed from public funds (with a link to CEP)
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true project data are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISBN
979-8-3503-0130-4
ISSN
1063-6919
e-ISSN
2575-7075
Number of pages
13
Pages from-to
10714-10726
Publisher name
IEEE Computer Society
Place of publication
USA
Event venue
Vancouver
Event date
June 18, 2023
Event type by nationality
WRD - Worldwide event
UT WoS article code
001062522103003