NetVLAD: CNN Architecture for Weakly Supervised Place Recognition
Result identifiers
Result code in IS VaVaI
RIV/68407700:21730/18:00321876 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F18%3A00321876)
Result on the web
https://ieeexplore.ieee.org/document/7937898
DOI - Digital Object Identifier
10.1109/TPAMI.2017.2711011 (http://dx.doi.org/10.1109/TPAMI.2017.2711011)
Alternative languages
Result language
English
Title in original language
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition
Description in original language
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
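Since the record itself carries no formulas or code, the following is a minimal illustrative sketch of the two technical ideas named in the abstract: the generalized VLAD pooling layer and the weakly supervised ranking loss. This is hedged PyTorch written for this record, not the authors' released implementation; the names NetVLAD and weak_triplet_loss, the cluster count, and the margin value are assumptions for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    # Differentiable VLAD pooling: local descriptors are softly assigned to K
    # learnable cluster centres and their residuals are aggregated into one
    # (K * D)-dimensional global descriptor, so the layer can be appended to
    # any CNN and trained by backpropagation.
    def __init__(self, num_clusters=64, dim=512):
        super().__init__()
        self.num_clusters = num_clusters
        # Soft assignment a_k(x_i), implemented as 1x1 conv + softmax over clusters.
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)
        # Learnable cluster centres c_k.
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

    def forward(self, x):
        # x: (N, D, H, W) dense CNN feature map, e.g. the last conv layer.
        a = F.softmax(self.assign(x), dim=1).flatten(2)                # (N, K, HW)
        x = x.flatten(2)                                               # (N, D, HW)
        # V(j, k) = sum_i a_k(x_i) * (x_i(j) - c_k(j))
        residuals = x.unsqueeze(1) - self.centroids[None, :, :, None]  # (N, K, D, HW)
        v = (residuals * a.unsqueeze(2)).sum(-1)                       # (N, K, D)
        v = F.normalize(v, dim=2)               # intra-normalization per cluster
        return F.normalize(v.flatten(1), dim=1) # final L2-normalized (N, K * D)

def weak_triplet_loss(q, positives, negatives, margin=0.1):
    # Weakly supervised ranking loss: GPS tags only guarantee the positives were
    # taken near the query, so the best-matching (closest) potential positive is
    # used, and every definite negative must be at least `margin` farther away.
    d_pos = (positives - q).pow(2).sum(1).min()  # best potential positive
    d_neg = (negatives - q).pow(2).sum(1)        # all definite negatives
    return F.relu(d_pos + margin - d_neg).sum()

# Toy usage on VGG-16 conv5-style features:
feats = torch.randn(2, 512, 30, 40)
desc = NetVLAD()(feats)   # (2, 64 * 512) global image descriptors

The decoupling of the assignment weights from the centroids (a plain learnable 1x1 convolution rather than distances to c_k) is what makes the layer end-to-end trainable, which is the property the abstract emphasizes.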
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
20204 - Robotics and automatic control
Result continuities
Project
EF15_003/0000468: Intelligent Machine Perception
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of implementation
2018
Data confidentiality code
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
IEEE Transactions on Pattern Analysis and Machine Intelligence
ISSN
0162-8828
e-ISSN
1939-3539
Periodical volume
40
Issue number within the volume
6
Publisher country of the periodical
US - United States of America
Number of pages
15
Pages from-to
1437-1451
UT WoS article code
000431524700012
Result EID in the Scopus database
2-s2.0-85021711204