Comparison of Semantic Segmentation Approaches for Horizon/Sky Line Detection
Result identifiers
Result code in IS VaVaI
RIV/49777513:23520/17:43932070 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F17%3A43932070)
Result on the web
http://ieeexplore.ieee.org/document/7966418/
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/IJCNN.2017.7966418" target="_blank" >10.1109/IJCNN.2017.7966418</a>
Alternative languages
Result language
English
Title in the original language
Comparison of Semantic Segmentation Approaches for Horizon/Sky Line Detection
Result description in the original language
Horizon or skyline detection plays a vital role in mountainous visual geo-localization; however, most of the recently proposed visual geo-localization approaches rely on user-in-the-loop skyline detection methods. Detecting such a segmenting boundary fully autonomously would be a clear step forward for these localization approaches. This paper provides a quantitative comparison of four such methods for autonomous horizon/sky line detection on an extensive data set. Specifically, we compare four recently proposed segmentation methods: one explicitly targeting the problem of horizon detection [2], a second focused on visual geo-localization but relying on accurate detection of the skyline [15], and two proposed for general semantic segmentation, Fully Convolutional Networks (FCN) [21] and SegNet [22]. Each of the first two methods is trained on a common training set [11] comprising about 200 images, while the models for the third and fourth methods are fine-tuned for the sky segmentation problem through transfer learning on the same data set. Each method is tested on an extensive test set (about 3K images) covering various challenging geographical, weather, illumination and seasonal conditions. We report the average accuracy and the average absolute pixel error for each of the presented formulations.
Title in English
Comparison of Semantic Segmentation Approaches for Horizon/Sky Line Detection
Result description in English
Horizon or skyline detection plays a vital role in mountainous visual geo-localization; however, most of the recently proposed visual geo-localization approaches rely on user-in-the-loop skyline detection methods. Detecting such a segmenting boundary fully autonomously would be a clear step forward for these localization approaches. This paper provides a quantitative comparison of four such methods for autonomous horizon/sky line detection on an extensive data set. Specifically, we compare four recently proposed segmentation methods: one explicitly targeting the problem of horizon detection [2], a second focused on visual geo-localization but relying on accurate detection of the skyline [15], and two proposed for general semantic segmentation, Fully Convolutional Networks (FCN) [21] and SegNet [22]. Each of the first two methods is trained on a common training set [11] comprising about 200 images, while the models for the third and fourth methods are fine-tuned for the sky segmentation problem through transfer learning on the same data set. Each method is tested on an extensive test set (about 3K images) covering various challenging geographical, weather, illumination and seasonal conditions. We report the average accuracy and the average absolute pixel error for each of the presented formulations.
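The two reported measures can be illustrated with a short sketch. This is a hypothetical illustration, not the paper's implementation: it assumes binary sky/non-sky masks with sky labelled 1, defines the per-column horizon as the first non-sky row, and uses common definitions of pixel accuracy and mean absolute pixel error; all function names and the toy data are made up for illustration.

# Hypothetical sketch (not the paper's code) of the two reported measures,
# assuming binary masks where sky pixels are 1 and everything else is 0.
import numpy as np

def pixel_accuracy(pred_mask, gt_mask):
    # Fraction of pixels whose predicted label matches the ground truth.
    return float(np.mean(pred_mask == gt_mask))

def horizon_rows(mask):
    # Per-column row index of the first non-sky pixel, i.e. the sky line,
    # assuming the sky occupies the upper part of each column.
    non_sky = (mask == 0)
    rows = np.argmax(non_sky, axis=0)               # first True per column
    rows[~non_sky.any(axis=0)] = mask.shape[0] - 1  # all-sky columns: fall back to bottom row
    return rows

def mean_absolute_pixel_error(pred_mask, gt_mask):
    # Average absolute vertical offset (in pixels) between the predicted
    # and ground-truth horizon, taken over image columns.
    return float(np.mean(np.abs(horizon_rows(pred_mask) - horizon_rows(gt_mask))))

# Toy example: a 4x4 image whose top two rows are sky; the prediction places
# the horizon one row too low in the first column.
gt = np.array([[1, 1, 1, 1],
               [1, 1, 1, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = gt.copy()
pred[2, 0] = 1
print(pixel_accuracy(pred, gt))             # 15/16 = 0.9375
print(mean_absolute_pixel_error(pred, gt))  # (1 + 0 + 0 + 0) / 4 = 0.25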
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
20205 - Automation and control systems
Result linkages
Project
LO1506: Sustainability support for the NTIS centre - New Technologies for the Information Society
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of publication
2017
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article title in the proceedings
2017 International Joint Conference on Neural Networks (IJCNN)
ISBN
978-1-5090-6182-2
ISSN
2161-4393
e-ISSN
—
Number of pages
8
Pages from-to
4436-4443
Publisher name
IEEE
Place of publication
New York
Event venue
Anchorage, Alaska, USA
Event date
14 May 2017
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
000426968704091