HPatches: A Benchmark and Evaluation of Handcrafted and Learned Local Descriptors
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F20%3A00344086" target="_blank" >RIV/68407700:21230/20:00344086 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1109/TPAMI.2019.2915233" target="_blank" >https://doi.org/10.1109/TPAMI.2019.2915233</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/TPAMI.2019.2915233" target="_blank" >10.1109/TPAMI.2019.2915233</a>
Alternative languages
Result language
English
Title in the original language
HPatches: A Benchmark and Evaluation of Handcrafted and Learned Local Descriptors
Result description in the original language
In this paper, a novel benchmark is introduced for evaluating local image descriptors. We demonstrate limitations of the commonly used datasets and evaluation protocols, which lead to ambiguities and contradictory results in the literature. Furthermore, these benchmarks are nearly saturated due to the recent improvements in local descriptors obtained by learning from large annotated datasets. To address these issues, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols for several tasks, such as matching, retrieval and verification. This allows for more realistic, and thus more reliable, comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep-learning-based descriptors once realistic benchmarks are considered. Additionally, we specify a protocol for learning and evaluation using cross-validation. We show that when state-of-the-art descriptors are trained on this dataset, the traditional verification task is almost entirely saturated.
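The "simple normalisation" mentioned in the description is not specified in this record. The sketch below shows one common post-processing of this kind (L1 normalisation, square-rooting, then L2 normalisation, in the spirit of RootSIFT) applied to hand-crafted descriptor vectors; it is an illustrative assumption, not the authors' exact recipe.

```python
# Minimal sketch of a simple descriptor post-processing step
# (RootSIFT-style normalisation). Assumed for illustration only;
# the paper's exact normalisation may differ.
import numpy as np


def normalise_descriptors(D: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """L1-normalise, square-root, then L2-normalise each descriptor row.

    D: array of shape (n_patches, dim), e.g. 128-D SIFT histograms.
    """
    D = np.asarray(D, dtype=np.float64)
    D = D / (np.abs(D).sum(axis=1, keepdims=True) + eps)      # L1 normalise
    D = np.sign(D) * np.sqrt(np.abs(D))                       # signed square root
    D = D / (np.linalg.norm(D, axis=1, keepdims=True) + eps)  # L2 normalise
    return D


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sift_like = rng.integers(0, 256, size=(5, 128)).astype(np.float64)
    print(normalise_descriptors(sift_like).shape)  # (5, 128)
```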
Title in English
HPatches: A Benchmark and Evaluation of Handcrafted and Learned Local Descriptors
Result description in English
In this paper, a novel benchmark is introduced for evaluating local image descriptors. We demonstrate limitations of the commonly used datasets and evaluation protocols, which lead to ambiguities and contradictory results in the literature. Furthermore, these benchmarks are nearly saturated due to the recent improvements in local descriptors obtained by learning from large annotated datasets. To address these issues, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols for several tasks, such as matching, retrieval and verification. This allows for more realistic, and thus more reliable, comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep-learning-based descriptors once realistic benchmarks are considered. Additionally, we specify a protocol for learning and evaluation using cross-validation. We show that when state-of-the-art descriptors are trained on this dataset, the traditional verification task is almost entirely saturated.
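As a concrete illustration of the verification task mentioned in the description, the sketch below scores descriptor pairs by Euclidean distance and reports average precision. The function name and the use of scikit-learn's average_precision_score are assumptions for illustration, not the benchmark's actual API or metric implementation.

```python
# Minimal sketch of patch verification as binary classification over
# descriptor distances, summarised by average precision (AP).
# Illustrative only; not the benchmark's actual evaluation code.
import numpy as np
from sklearn.metrics import average_precision_score


def verification_ap(desc_a: np.ndarray, desc_b: np.ndarray,
                    labels: np.ndarray) -> float:
    """desc_a, desc_b: (n_pairs, dim) descriptors; labels: 1 = same patch, 0 = different."""
    dists = np.linalg.norm(desc_a - desc_b, axis=1)
    # Smaller distance should mean "more likely a match", so negate the score.
    return float(average_precision_score(labels, -dists))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos_a = rng.normal(size=(100, 128))
    pos_b = pos_a + rng.normal(scale=0.1, size=(100, 128))  # positives: noisy copies
    neg_a = rng.normal(size=(100, 128))                     # negatives: unrelated pairs
    neg_b = rng.normal(size=(100, 128))
    desc_a = np.vstack([pos_a, neg_a])
    desc_b = np.vstack([pos_b, neg_b])
    labels = np.array([1] * 100 + [0] * 100)
    print(f"AP = {verification_ap(desc_a, desc_b, labels):.3f}")
```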
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of implementation
2020
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
IEEE Transactions on Pattern Analysis and Machine Intelligence
ISSN
0162-8828
e-ISSN
1939-3539
Volume of the periodical
42
Issue of the periodical within the volume
11
Country of the periodical's publisher
US - United States of America
Number of pages of the result
17
Pages from-to
2825-2841
UT WoS code of the article
000575381000007
EID of the result in the Scopus database
2-s2.0-85092510856