Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers
Result identifiers
Result code in IS VaVaI
RIV/68407700:21730/21:00356140 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F21%3A00356140)
Result on the web
https://doi.org/10.1109/CVPR46437.2021.00970
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/CVPR46437.2021.00970" target="_blank" >10.1109/CVPR46437.2021.00970</a>
Alternative languages
Result language
English
Title in the original language
Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers
Description in the original language
Our objective is language-based search of large-scale image and video datasets. For this task, the approach that consists of independently mapping text and vision to a joint embedding space, a.k.a. dual encoders, is attractive, as retrieval scales and is efficient for billions of images using approximate nearest neighbour search. An alternative approach of using vision-text transformers with cross-attention gives considerable improvements in accuracy over the joint embeddings, but is often inapplicable in practice for large-scale retrieval given the cost of the cross-attention mechanisms required for each sample at test time. This work combines the best of both worlds. We make the following three contributions. First, we equip transformer-based models with a new fine-grained cross-attention architecture, providing significant improvements in retrieval accuracy whilst preserving scalability. Second, we introduce a generic approach for combining a Fast dual encoder model with our Slow but accurate transformer-based model via distillation and re-ranking. Finally, we validate our approach on the Flickr30K image dataset, where we show an increase in inference speed by several orders of magnitude while achieving results competitive with the state of the art. We also extend our method to the video domain, improving the state of the art on the VATEX dataset.
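To make the "Fast" dual-encoder path described above concrete, here is a minimal sketch assuming text and image embeddings have already been produced by two independent encoders; all names, shapes, and data are illustrative assumptions, not the paper's code, and brute-force search stands in for the approximate nearest-neighbour index one would use at billion-image scale.

```python
import numpy as np


def l2_normalize(x: np.ndarray) -> np.ndarray:
    # Unit-normalise rows so that inner products equal cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)


def fast_retrieve(text_emb: np.ndarray, image_embs: np.ndarray, k: int = 100) -> np.ndarray:
    # Score every image against the text query with one matrix-vector product.
    sims = l2_normalize(image_embs) @ l2_normalize(text_emb)
    # Brute-force top-k for illustration; at scale this step would be served
    # by an approximate nearest-neighbour index over the same embeddings.
    topk = np.argpartition(-sims, k)[:k]
    return topk[np.argsort(-sims[topk])]


# Toy usage: 10,000 gallery images in a hypothetical 512-d joint embedding space.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 512)).astype(np.float32)
query = rng.normal(size=512).astype(np.float32)
print(fast_retrieve(query, gallery, k=5))
```

Because the gallery embeddings are query-independent, they can be computed once offline; only the text embedding is computed per query, which is what makes this path cheap.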
Title in English
Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers
Description in English
Our objective is language-based search of large-scale image and video datasets. For this task, the approach that consists of independently mapping text and vision to a joint embedding space, a.k.a. dual encoders, is attractive, as retrieval scales and is efficient for billions of images using approximate nearest neighbour search. An alternative approach of using vision-text transformers with cross-attention gives considerable improvements in accuracy over the joint embeddings, but is often inapplicable in practice for large-scale retrieval given the cost of the cross-attention mechanisms required for each sample at test time. This work combines the best of both worlds. We make the following three contributions. First, we equip transformer-based models with a new fine-grained cross-attention architecture, providing significant improvements in retrieval accuracy whilst preserving scalability. Second, we introduce a generic approach for combining a Fast dual encoder model with our Slow but accurate transformer-based model via distillation and re-ranking. Finally, we validate our approach on the Flickr30K image dataset, where we show an increase in inference speed by several orders of magnitude while achieving results competitive with the state of the art. We also extend our method to the video domain, improving the state of the art on the VATEX dataset.
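The Fast-then-Slow combination amounts to shortlist-then-re-rank: the cheap dual encoder narrows the gallery to k candidates, and the expensive cross-attention model scores only those. The sketch below shows this control flow under stated assumptions: `fast_shortlist` and `slow_score` are hypothetical stand-ins, not the paper's actual interfaces, and the distillation step is omitted.

```python
from typing import Callable, Sequence


def retrieve_fast_slow(
    query: object,
    gallery: Sequence,
    fast_shortlist: Callable[[object, Sequence, int], Sequence[int]],
    slow_score: Callable[[object, object], float],
    k: int = 100,
) -> list:
    # Step 1: cheap dual-encoder pass (amortised by a precomputed index)
    # selects k candidate indices out of the whole gallery.
    shortlist = fast_shortlist(query, gallery, k)
    # Step 2: the expensive cross-attention scorer runs only k times,
    # instead of once per gallery item; candidates are sorted by its score.
    return sorted(shortlist, key=lambda i: slow_score(query, gallery[i]), reverse=True)


# Dummy stand-ins, for illustration only.
demo_gallery = ["img0", "img1", "img2", "img3"]
demo_fast = lambda q, g, k: list(range(min(k, len(g))))   # pretend ANN shortlist
demo_slow = lambda q, item: float(hash((q, item)) % 100)  # pretend cross-attention score
print(retrieve_fast_slow("a red car", demo_gallery, demo_fast, demo_slow, k=3))
```

The cost per query is thus one cheap gallery pass plus k slow evaluations, rather than one slow evaluation per gallery item, which is what yields the reported orders-of-magnitude speed-up.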
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/cs/project/EF15_003%2F0000468" target="_blank" >EF15_003/0000468: Inteligentní strojové vnímání</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2021
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article name in the proceedings
Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISBN
978-1-6654-4509-2
ISSN
1063-6919
e-ISSN
2575-7075
Number of pages
11
Pages from-to
9821-9831
Publisher name
IEEE Computer Society
Place of publication
USA
Event location
Nashville
Event date
June 20, 2021
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
000742075007087