SegLoc: Learning Segmentation-Based Representations for Privacy-Preserving Visual Localization
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F23%3A00372022" target="_blank" >RIV/68407700:21230/23:00372022 - isvavai.cz</a>
Alternative codes found
RIV/68407700:21730/23:00372022
Result on the web
<a href="https://doi.org/10.1109/CVPR52729.2023.01476" target="_blank" >https://doi.org/10.1109/CVPR52729.2023.01476</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/CVPR52729.2023.01476" target="_blank" >10.1109/CVPR52729.2023.01476</a>
Alternative languages
Result language
English
Title in original language
SegLoc: Learning Segmentation-Based Representations for Privacy-Preserving Visual Localization
Result description in original language
Inspired by properties of semantic segmentation, in this paper we investigate how to leverage robust image segmentation in the context of privacy-preserving visual localization. We propose a new localization framework, SegLoc, that leverages image segmentation to create robust, compact, and privacy-preserving scene representations, i.e., 3D maps. We build upon the correspondence-supervised, fine-grained segmentation approach from [42], making it more robust by learning a set of cluster labels with discriminative clustering and additional consistency regularization terms, and we jointly learn a global image representation along with a dense local representation. In our localization pipeline, the former is used to retrieve the most similar images, the latter to refine the retrieved poses by minimizing the label inconsistency between the 3D points of the map and their projections onto the query image. In various experiments, we show that our proposed representation achieves (close-to) state-of-the-art pose estimation results while using only a compact 3D map that does not contain enough information about the original images for an attacker to reconstruct personal information.
Title in English
SegLoc: Learning Segmentation-Based Representations for Privacy-Preserving Visual Localization
Result description in English
Inspired by properties of semantic segmentation, in this paper we investigate how to leverage robust image segmentation in the context of privacy-preserving visual localization. We propose a new localization framework, SegLoc, that leverages image segmentation to create robust, compact, and privacy-preserving scene representations, i.e., 3D maps. We build upon the correspondence-supervised, fine-grained segmentation approach from [42], making it more robust by learning a set of cluster labels with discriminative clustering and additional consistency regularization terms, and we jointly learn a global image representation along with a dense local representation. In our localization pipeline, the former is used to retrieve the most similar images, the latter to refine the retrieved poses by minimizing the label inconsistency between the 3D points of the map and their projections onto the query image. In various experiments, we show that our proposed representation achieves (close-to) state-of-the-art pose estimation results while using only a compact 3D map that does not contain enough information about the original images for an attacker to reconstruct personal information.
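The two-stage pipeline described in the abstract (global-descriptor retrieval, then pose refinement by minimizing label inconsistency between projected map points and the query segmentation) can be sketched roughly as follows. This is an illustrative simplification under assumed inputs, not the authors' implementation; the function names, the pinhole projection setup, and the out-of-image penalty are all assumptions.

```python
import numpy as np

def retrieve_nearest(query_desc, db_descs):
    """Retrieval step: rank database images by cosine similarity of
    global descriptors (a stand-in for the learned global representation)."""
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    q = query_desc / np.linalg.norm(query_desc)
    return np.argsort(-(db @ q))  # indices, most similar first

def label_inconsistency(R, t, K, points_3d, point_labels, query_label_map):
    """Refinement cost: project labeled 3D map points with a candidate pose
    (R, t) and intrinsics K, then count label mismatches against the
    query image's per-pixel segmentation labels."""
    cam = R @ points_3d.T + t[:, None]          # world -> camera frame
    proj = K @ cam
    uv = (proj[:2] / proj[2]).T                 # pixel coordinates (u, v)
    h, w = query_label_map.shape
    cost = 0
    for (u, v), lbl in zip(uv, point_labels):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            cost += int(query_label_map[vi, ui] != lbl)
        else:
            cost += 1  # assumption: penalize points projecting off-image
    return cost
```

A pose estimate would then be refined by searching for the (R, t) that minimizes `label_inconsistency`; note that the map needs to store only a label per 3D point, not image descriptors, which is what makes the representation compact and privacy-preserving.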
Classification
Type
D - Conference proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
I - Institutional support for the long-term conceptual development of a research organization
Others
Year of application
2023
Data confidentiality code
S - Complete and accurate data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISBN
979-8-3503-0129-8
ISSN
1063-6919
e-ISSN
2575-7075
Number of pages
12
Pages from-to
15380-15391
Publisher name
IEEE Computer Society
Place of publication
USA
Event venue
Vancouver
Event date
18 June 2023
Event type by nationality
WRD - Worldwide event
Article UT WoS code
001062522107067