Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00357942" target="_blank" >RIV/68407700:21230/22:00357942 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.3390/s22082975" target="_blank" >https://doi.org/10.3390/s22082975</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.3390/s22082975" target="_blank" >10.3390/s22082975</a>
Alternative languages
Result language
English
Title in the original language
Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
Description in the original language
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
Title in English
Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
Description in English
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
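The abstract describes finding the horizontal displacement between a prerecorded and a currently perceived image from dense feature representations. A minimal sketch of that registration step, assuming the learned representations are already available as NumPy arrays of shape (C, H, W) and using a simple cross-correlation search (the function name, shapes, and scoring are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def horizontal_displacement(map_a, map_b, max_shift):
    """Estimate the horizontal shift (in feature-map columns) that best
    aligns map_b to map_a.

    map_a, map_b: (C, H, W) dense feature maps (e.g. FCN outputs).
    Returns the shift in [-max_shift, max_shift] with the highest score.
    """
    # Collapse the height dimension: one descriptor per image column.
    a = map_a.mean(axis=1)  # (C, W)
    b = map_b.mean(axis=1)  # (C, W)
    width = a.shape[1]
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Correlate only the columns that overlap under shift s.
        if s >= 0:
            overlap_a, overlap_b = a[:, s:], b[:, : width - s]
        else:
            overlap_a, overlap_b = a[:, :s], b[:, -s:]
        score = np.sum(overlap_a * overlap_b) / overlap_a.size
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```

The returned shift (scaled from feature-map columns back to pixels) is what a teach-and-repeat controller would use as the steering correction toward the taught path.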
Classification
Type
J<sub>imp</sub> - Article in a journal indexed in the Web of Science database
CEP discipline
—
OECD FORD discipline
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of publication
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
Sensors
ISSN
1424-8220
e-ISSN
1424-8220
Journal volume
22
Issue within the volume
8
Country of the journal publisher
CH - Swiss Confederation
Number of pages
19
Pages from-to
—
UT WoS code of the article
000787016500001
EID of the result in the Scopus database
2-s2.0-85128241406