Automatic Camera Calibration by Landmarks on Rigid Objects
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F20%3APU138666" target="_blank" >RIV/00216305:26230/20:PU138666 - isvavai.cz</a>
Result on the web
<a href="https://www.fit.vut.cz/research/publication/12345/" target="_blank" >https://www.fit.vut.cz/research/publication/12345/</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/s00138-020-01125-x" target="_blank" >10.1007/s00138-020-01125-x</a>
Alternative languages
Result language
English
Title in the original language
Automatic Camera Calibration by Landmarks on Rigid Objects
Description in the original language
This article presents a new method for automatic calibration of surveillance cameras. We focus on traffic surveillance, so the camera is calibrated by observing vehicles; however, other rigid objects can be used instead. The proposed method uses keypoints, or landmarks, automatically detected on the observed objects by a convolutional neural network. Using fine-grained recognition of the vehicles (the calibration objects) and the known 3D positions of the landmarks for the (very limited) set of known objects, the extracted keypoints are used to calibrate the camera, yielding the internal (focal length) and external (rotation, translation) parameters and the scene scale of the surveillance camera. We collected a dataset in two parking lots and equipped it with a calibration ground truth by measuring multiple distances in the ground plane. This dataset appears to be more accurate than the existing comparable data (GT calibration error reduced from 4.62% to 0.99%). The experiments also show that our method outperforms the best existing alternative in terms of accuracy (error reduced from 6.56% to 4.03%), and our solution is more flexible with respect to viewpoint changes and other factors.
Title in English
Automatic Camera Calibration by Landmarks on Rigid Objects
Description in English
This article presents a new method for automatic calibration of surveillance cameras. We focus on traffic surveillance, so the camera is calibrated by observing vehicles; however, other rigid objects can be used instead. The proposed method uses keypoints, or landmarks, automatically detected on the observed objects by a convolutional neural network. Using fine-grained recognition of the vehicles (the calibration objects) and the known 3D positions of the landmarks for the (very limited) set of known objects, the extracted keypoints are used to calibrate the camera, yielding the internal (focal length) and external (rotation, translation) parameters and the scene scale of the surveillance camera. We collected a dataset in two parking lots and equipped it with a calibration ground truth by measuring multiple distances in the ground plane. This dataset appears to be more accurate than the existing comparable data (GT calibration error reduced from 4.62% to 0.99%). The experiments also show that our method outperforms the best existing alternative in terms of accuracy (error reduced from 6.56% to 4.03%), and our solution is more flexible with respect to viewpoint changes and other factors.
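The calibration step described above can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the authors' implementation: it assumes the landmark detector and fine-grained vehicle recognition have already produced 2D-3D correspondences (here synthesized for a single made-up vehicle), and recovers the focal length by a simple grid search combined with OpenCV's solvePnP for rotation and translation. All landmark names and numeric values are illustrative, and the paper may estimate the focal length differently.

```python
# Sketch of the calibration core: given 2D-3D landmark correspondences for a
# recognized vehicle model, recover the focal length plus the camera
# rotation/translation. Hypothetical data, not the authors' code.
import numpy as np
import cv2

# Hypothetical 3D landmark positions (metres) in the vehicle's own frame,
# e.g. wheel centres, headlights, a mirror (must not be coplanar for PnP).
object_points = np.array([
    [0.00, 0.00, 0.00],   # front-left wheel centre
    [2.60, 0.00, 0.00],   # rear-left wheel centre
    [0.00, 1.55, 0.00],   # front-right wheel centre
    [0.45, 0.20, 0.65],   # left headlight
    [0.45, 1.35, 0.65],   # right headlight
    [1.10, 0.05, 1.10],   # left mirror
], dtype=np.float64)

# Synthesize the 2D detections with a known "ground-truth" camera, so we can
# verify that the search below recovers its focal length.
w, h = 1920, 1080
f_true = 1800.0
K_true = np.array([[f_true, 0, w / 2], [0, f_true, h / 2], [0, 0, 1]])
rvec_gt = np.array([0.30, -0.20, 0.10])   # ground-truth rotation (Rodrigues)
tvec_gt = np.array([0.50, 0.20, 8.00])    # ground-truth translation (metres)
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K_true, None)
image_points = image_points.reshape(-1, 2)

# Assume the principal point at the image centre; the focal length is then
# the only internal unknown, so grid-search it and keep the candidate with
# the lowest mean reprojection error. rvec/tvec from solvePnP are the
# external parameters; because the 3D landmark model is metric, the
# translation already carries the scene scale.
best = None
for f in np.linspace(500, 4000, 71):
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]])
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    if not ok:
        continue
    proj, _ = cv2.projectPoints(object_points, rvec, tvec, K, None)
    err = np.linalg.norm(proj.reshape(-1, 2) - image_points, axis=1).mean()
    if best is None or err < best[0]:
        best = (err, f, rvec, tvec)

err, f, rvec, tvec = best
print(f"estimated focal length ~ {f:.0f} px (true {f_true:.0f}), "
      f"mean reprojection error {err:.3f} px")
```

A real system along these lines would aggregate detections over many vehicles and frames rather than rely on a single observation, which is consistent with the paper's use of many observed calibration objects.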
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/LQ1602" target="_blank" >LQ1602: IT4Innovations excellence in science</a><br>
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Other
Publication year
2020
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
Machine Vision and Applications
ISSN
0932-8092
e-ISSN
1432-1769
Journal volume
32
Issue number within the volume
1
Country of the publisher
US - United States of America
Number of pages
13
Pages from-to
2-15
UT WoS code of the article
000575425400001
Scopus EID of the result
2-s2.0-85091965520