Back to the Feature: Learning Robust Camera Localization from Pixels to Pose
Result identifiers
Result code in IS VaVaI
RIV/68407700:21730/21:00356128 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F21%3A00356128)
Result on the web
https://doi.org/10.1109/CVPR46437.2021.00326
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/CVPR46437.2021.00326" target="_blank" >10.1109/CVPR46437.2021.00326</a>
Alternative languages
Result language
English
Title in the original language
Back to the Feature: Learning Robust Camera Localization from Pixels to Pose
Result description in the original language
Camera pose estimation in known scenes is a 3D geometry task recently tackled by multiple learning algorithms. Many regress precise geometric quantities, like poses or 3D points, from an input image. This either fails to generalize to new viewpoints or ties the model parameters to a specific scene. In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms. We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. Our approach is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. The system can localize in large environments given coarse pose priors but also improve the accuracy of sparse feature matching by jointly refining keypoints and poses with little overhead. The code will be publicly available at github.com/cvg/pixloc.
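As a rough illustration of the direct alignment described in the abstract, the sketch below projects known 3D scene points into the query image with the current pose estimate, samples a dense feature map at the reprojections, and nudges the pose to reduce the feature residuals. This is only a simplified NumPy toy under assumed inputs, not the actual PixLoc implementation: the real system (github.com/cvg/pixloc) uses multi-scale CNN features, learned uncertainties, and a damped Gauss-Newton (Levenberg-Marquardt) solver trained end-to-end in PyTorch. All function and variable names below are invented for the example.

# Illustrative sketch of feature-metric pose refinement (NOT the PixLoc code).
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points_w, R, t, K):
    """Project world points into the image with pose (R, t) and intrinsics K."""
    p_c = points_w @ R.T + t                  # world -> camera frame
    uv = p_c @ K.T
    return uv[:, :2] / uv[:, 2:3]             # perspective division

def sample_features(feat_map, uv):
    """Bilinear interpolation in a dense H x W x D feature map."""
    h, w, _ = feat_map.shape
    u = np.clip(uv[:, 0], 0.0, w - 1.001)
    v = np.clip(uv[:, 1], 0.0, h - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    f00, f01 = feat_map[v0, u0], feat_map[v0, u0 + 1]
    f10, f11 = feat_map[v0 + 1, u0], feat_map[v0 + 1, u0 + 1]
    return (f00 * (1 - du) * (1 - dv) + f01 * du * (1 - dv)
            + f10 * (1 - du) * dv + f11 * du * dv)

def refine_pose(points_w, ref_feats, query_feat_map, K, R, t,
                n_iters=20, step=1e-4):
    """Toy feature-metric alignment by numerical gradient descent on SE(3)."""
    def cost(R_, t_):
        residual = sample_features(query_feat_map,
                                   project(points_w, R_, t_, K)) - ref_feats
        return 0.5 * np.sum(residual ** 2)

    for _ in range(n_iters):
        grad, eps = np.zeros(6), 1e-4
        for i in range(6):                    # [rotation | translation] perturbations
            d = np.zeros(6)
            d[i] = eps
            grad[i] = (cost(so3_exp(d[:3]) @ R, t + d[3:]) - cost(R, t)) / eps
        R = so3_exp(-step * grad[:3]) @ R     # left-multiplicative update on SO(3)
        t = t - step * grad[3:]
    return R, t

Consistent with the abstract, the point of the end-to-end training in PixLoc is that the feature maps themselves are learned so that this kind of alignment converges even from coarse pose priors; the solver above is only a stand-in for that learned optimization.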
Classification
Type
D - Conference proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
EF15_003/0000468: Intelligent Machine Perception
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of implementation
2021
Data confidentiality code
S - Complete and true project data are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISBN
978-1-6654-4510-8
ISSN
1063-6919
e-ISSN
2575-7075
Number of pages
11
Pages from-to
3246-3256
Publisher name
IEEE Computer Society
Place of publication
USA
Event venue
Nashville
Event date
June 20, 2021
Event type by nationality
WRD - Worldwide event
UT WoS article code
000739917303044