Probing the Role of Positional Information in Vision-Language Models
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3A10457088" target="_blank" >RIV/00216208:11320/22:10457088 - isvavai.cz</a>
Result on the web
<a href="https://aclanthology.org/2022.findings-naacl.77" target="_blank" >https://aclanthology.org/2022.findings-naacl.77</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.18653/v1/2022.findings-naacl.77" target="_blank" >10.18653/v1/2022.findings-naacl.77</a>
Alternative languages
Result language
English
Original language name
Probing the Role of Positional Information in Vision-Language Models
Original language description
In most Vision-Language (VL) models, understanding of the image structure is enabled by injecting position information (PI) about the objects in the image. In our case study of LXMERT, a state-of-the-art VL model, we probe the use of PI in the representation and study its effect on Visual Question Answering. We show that the model is not capable of leveraging PI for the image-text matching task on a challenge set where only the position differs. Yet, our probing experiments confirm that PI is indeed present in the representation. We introduce two strategies to tackle this: (i) Positional Information Pre-training and (ii) Contrastive Learning on PI using Cross-Modality Matching. With these, the model can correctly classify whether images and detailed PI statements match. In addition to the 2D information from bounding boxes, we introduce the object's depth as a new feature for better object localization in space. Even though we were able to improve the model properties as defined by our
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result continuities
Project
<a href="/en/project/GX19-26934X" target="_blank" >GX19-26934X: Neural Representations in Multi-modal and Multi-lingual Modeling</a><br>
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Publication year
2022
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Findings of the Association for Computational Linguistics: NAACL 2022
ISBN
978-1-955917-76-6
ISSN
—
e-ISSN
—
Number of pages
11
Pages from-to
1031-1041
Publisher name
Association for Computational Linguistics
Place of publication
Stroudsburg, PA, USA
Event location
Seattle, WA, USA
Event date
Jul 10, 2022
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—