Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning

Result identifiers

  • Result code in IS VaVaI

    RIV/68407700:21730/23:00371981 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F23%3A00371981)

  • Result on the web

    https://doi.org/10.1109/CVPR52729.2023.01032

  • DOI - Digital Object Identifier

    10.1109/CVPR52729.2023.01032

Alternative languages

  • Result language

    English

  • Original language name

    Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning

  • Original language description

    In this work, we introduce Vid2Seq, a multi-modal single-stage dense event captioning model pretrained on narrated videos which are readily available at scale. The Vid2Seq architecture augments a language model with special time tokens, allowing it to seamlessly predict event boundaries and textual descriptions in the same output sequence. Such a unified model requires large-scale training data, which is not available in current annotated datasets. We show that it is possible to leverage unlabeled narrated videos for dense video captioning, by reformulating sentence boundaries of transcribed speech as pseudo event boundaries, and using the transcribed speech sentences as pseudo event captions. The resulting Vid2Seq model pretrained on the YT-Temporal-1B dataset improves the state of the art on a variety of dense video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions. Vid2Seq also generalizes well to the tasks of video paragraph captioning and video clip captioning, and to few-shot settings. Our code is publicly available at [1]. (A minimal illustrative sketch of the time-token and pseudo-labeling ideas follows this list.)

  • Czech name

  • Czech description

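A short sketch can make two mechanisms from the description above concrete: quantizing timestamps into special time tokens that share one output sequence with caption text, and reusing transcribed-speech sentences as pseudo event annotations. The Python below is an illustrative sketch only, not the authors' released code; the bin count NUM_TIME_BINS = 100 and the helper names time_token and build_pseudo_sequence are assumptions made for this example.

    # Illustrative sketch (not the authors' code) of the two ideas in the
    # description above: (1) map timestamps to discrete time tokens, and
    # (2) turn ASR sentences into pseudo dense-captioning events.

    NUM_TIME_BINS = 100  # assumption: the video duration is split into 100 bins

    def time_token(t_seconds: float, duration: float) -> str:
        """Map an absolute timestamp to a discrete time token such as '<time_42>'."""
        bin_idx = min(int(t_seconds / duration * NUM_TIME_BINS), NUM_TIME_BINS - 1)
        return f"<time_{bin_idx}>"

    def build_pseudo_sequence(asr_sentences, duration):
        """Serialize (start, end, sentence) triples from transcribed speech into
        one target sequence: sentence boundaries act as pseudo event boundaries
        and the sentence text as the pseudo event caption."""
        parts = [f"{time_token(s, duration)} {time_token(e, duration)} {text}"
                 for s, e, text in asr_sentences]
        return " ".join(parts)

    # Usage: three transcribed-speech sentences from a 120-second narrated video.
    asr = [(3.0, 8.5, "Crack two eggs into the bowl."),
           (9.0, 15.2, "Whisk until the mixture is smooth."),
           (40.0, 52.8, "Pour the batter into the pan.")]
    print(build_pseudo_sequence(asr, duration=120.0))
    # -> <time_2> <time_7> Crack two eggs into the bowl. <time_7> <time_12> ...

Because event boundaries and captions live in a single token stream, one sequence-to-sequence model can be trained on such targets without a separate localization head, which is the unification the description refers to.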
Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    EF15_003/0000468: Intelligent Machine Perception

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

  • ISBN

    979-8-3503-0130-4

  • ISSN

    1063-6919

  • e-ISSN

    2575-7075

  • Number of pages

    13

  • Pages from-to

    10714-10726

  • Publisher name

    IEEE Computer Society

  • Place of publication

    USA

  • Event location

    Vancouver

  • Event date

    Jun 18, 2023

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    001062522103003