MeshLoc: Mesh-Based Visual Localization
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00360761" target="_blank" >RIV/68407700:21230/22:00360761 - isvavai.cz</a>
Alternative codes found
RIV/68407700:21730/22:00360761
Result on the web
<a href="https://doi.org/10.1007/978-3-031-20047-2_34" target="_blank" >https://doi.org/10.1007/978-3-031-20047-2_34</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/978-3-031-20047-2_34" target="_blank" >10.1007/978-3-031-20047-2_34</a>
Alternative languages
Result language
English
Title in original language
MeshLoc: Mesh-Based Visual Localization
Description in original language
Visual localization, i.e., the problem of camera pose estimation, is a central component of applications such as autonomous robots and augmented reality systems. A dominant approach in the literature, shown to scale to large scenes and to handle complex illumination and seasonal changes, is based on local features extracted from images. The scene representation is a sparse Structure-from-Motion point cloud that is tied to a specific local feature. Switching to another feature type requires an expensive feature matching step between the database images used to construct the point cloud. In this work, we thus explore a more flexible alternative based on dense 3D meshes that does not require feature matching between database images to build the scene representation. We show that this approach can achieve state-of-the-art results. We further show that surprisingly competitive results can be obtained when extracting features on renderings of these meshes, without any neural rendering stage, and even when rendering raw scene geometry without color or texture. Our results show that dense 3D model-based representations are a promising alternative to existing representations and point to interesting and challenging directions for future research.
Title in English
MeshLoc: Mesh-Based Visual Localization
Description in English
Visual localization, i.e., the problem of camera pose estimation, is a central component of applications such as autonomous robots and augmented reality systems. A dominant approach in the literature, shown to scale to large scenes and to handle complex illumination and seasonal changes, is based on local features extracted from images. The scene representation is a sparse Structure-from-Motion point cloud that is tied to a specific local feature. Switching to another feature type requires an expensive feature matching step between the database images used to construct the point cloud. In this work, we thus explore a more flexible alternative based on dense 3D meshes that does not require feature matching between database images to build the scene representation. We show that this approach can achieve state-of-the-art results. We further show that surprisingly competitive results can be obtained when extracting features on renderings of these meshes, without any neural rendering stage, and even when rendering raw scene geometry without color or texture. Our results show that dense 3D model-based representations are a promising alternative to existing representations and point to interesting and challenging directions for future research.
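The pose-estimation step implied by the abstract (local features extracted on mesh renderings are lifted to 3D using the rendered depth, and the query camera pose is then solved from 2D-3D correspondences) can be sketched as follows. This is an illustrative stand-in, not the authors' code: the helper name `dlt_pnp` is hypothetical, and a plain Direct Linear Transform solver is used where a real pipeline would run a robust PnP solver inside RANSAC.

```python
import numpy as np

def dlt_pnp(pts3d, pts2d, K):
    """Estimate a camera pose (R, t) from 2D-3D correspondences with the
    Direct Linear Transform. Illustrative stand-in for the robust
    PnP + RANSAC solver a real localization pipeline would use."""
    # Normalize pixel coordinates with the intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]].
    pts2d_n = (pts2d - K[:2, 2]) / np.diag(K)[:2]
    # Build the homogeneous linear system A p = 0 for the 3x4 projection matrix P.
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d_n):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (smallest singular vector) holds P up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # Fix the unknown scale and sign so the rotation part has det +1.
    P /= np.cbrt(np.linalg.det(P[:, :3]))
    # Project the left 3x3 block onto the nearest proper rotation via SVD.
    U, _, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ Vt2
    t = P[:, 3]
    return R, t
```

In a MeshLoc-style setting, `pts3d` would come from unprojecting matched keypoints of a rendered database view through its depth buffer, and `pts2d` would be the corresponding keypoints in the query image; swapping the feature extractor does not change this step, which is what makes the mesh representation feature-agnostic.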
Classification
Type
D - Conference proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result linkages
Project
The result was created during the implementation of multiple projects. More information in the Projects tab.
Linkages
P - Research and development project financed from public funds (with a link to CEP)<br>S - Specific research at universities
Others
Year of application
2022
Data confidentiality code
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific to the result type
Title of the paper in the proceedings
Computer Vision - ECCV 2022, Part XXII
ISBN
978-3-031-20046-5
ISSN
0302-9743
e-ISSN
1611-3349
Number of result pages
21
Pages from-to
589-609
Publisher name
Springer, Cham
Place of publication
—
Event venue
Tel Aviv
Event date
23 October 2022
Event type by nationality
WRD - Worldwide event
UT WoS article code
000904116000034