Cheap Rendering vs. Costly Annotation: Rendered Omnidirectional Dataset of Vehicles
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F14%3APU112006" target="_blank" >RIV/00216305:26230/14:PU112006 - isvavai.cz</a>
Result on the web
<a href="http://medusa.fit.vutbr.cz/SynthCars/" target="_blank" >http://medusa.fit.vutbr.cz/SynthCars/</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1145/2643188.2643191" target="_blank" >10.1145/2643188.2643191</a>
Alternative languages
Result language
English
Title in original language
Cheap Rendering vs. Costly Annotation: Rendered Omnidirectional Dataset of Vehicles
Description in original language
Detection of vehicles in traffic surveillance requires large, high-quality training datasets in order to achieve competitive detection rates. We present an approach to the automatic synthesis of custom datasets that simulates the major influences on appearance: viewpoint, camera parameters, sunlight, surrounding environment, etc. Our goal is to create a competitive vehicle detector which "has not seen a real car before." We use Blender as the modeling and rendering engine. A suitable scene graph, accompanied by a set of scripts, allows simple configuration of the synthesized dataset. The generator also stores a rich set of metadata that serves as annotations of the synthesized images. We synthesized several experimental datasets and evaluated their statistical properties in comparison with real-life datasets. Most importantly, we trained a detector on the synthetic data; its detection performance is comparable to that of a detector trained on a state-of-the-art real-life dataset. Synthesizing a dataset of 10,000 images takes only several hours, which is far more efficient than manual annotation, not to mention the possibility of human error in annotation.
Title in English
Cheap Rendering vs. Costly Annotation: Rendered Omnidirectional Dataset of Vehicles
Description in English
Detection of vehicles in traffic surveillance requires large, high-quality training datasets in order to achieve competitive detection rates. We present an approach to the automatic synthesis of custom datasets that simulates the major influences on appearance: viewpoint, camera parameters, sunlight, surrounding environment, etc. Our goal is to create a competitive vehicle detector which "has not seen a real car before." We use Blender as the modeling and rendering engine. A suitable scene graph, accompanied by a set of scripts, allows simple configuration of the synthesized dataset. The generator also stores a rich set of metadata that serves as annotations of the synthesized images. We synthesized several experimental datasets and evaluated their statistical properties in comparison with real-life datasets. Most importantly, we trained a detector on the synthetic data; its detection performance is comparable to that of a detector trained on a state-of-the-art real-life dataset. Synthesizing a dataset of 10,000 images takes only several hours, which is far more efficient than manual annotation, not to mention the possibility of human error in annotation.
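The record does not include the generator scripts themselves. Purely as illustration, the following is a minimal Python (bpy) sketch of how a Blender-driven synthesis loop of this kind could look; the object names ("Car", "Sun"), the sampled parameter ranges, the assumed camera tracking constraint, and the output paths are hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the authors' actual scripts): render a vehicle model under
# randomized viewpoint, camera and sunlight, and store the sampled parameters as
# annotation metadata alongside each image.
import json
import math
import random

import bpy

scene = bpy.context.scene
camera = scene.camera                 # assumes the scene already has an active camera
car = bpy.data.objects["Car"]         # hypothetical vehicle object name
sun = bpy.data.objects["Sun"]         # hypothetical sun lamp name

annotations = []
for i in range(10000):                # 10,000 images, as in the abstract
    # Randomize the simulated influences: viewpoint, camera parameters, sunlight.
    azimuth = random.uniform(0.0, 2.0 * math.pi)
    distance = random.uniform(8.0, 25.0)
    camera.location = (distance * math.cos(azimuth),
                       distance * math.sin(azimuth),
                       random.uniform(1.5, 8.0))
    # Assumes the camera carries a "Track To" constraint targeting the car,
    # so only its position needs to be set here.
    camera.data.lens = random.uniform(20.0, 60.0)   # focal length in mm
    sun.rotation_euler = (random.uniform(0.2, 1.3), 0.0,
                          random.uniform(0.0, 2.0 * math.pi))

    # Render the frame to a path relative to the .blend file.
    scene.render.filepath = "//renders/car_%05d.png" % i
    bpy.ops.render.render(write_still=True)

    # Record the exact parameters used as the annotation for this image.
    annotations.append({
        "image": "car_%05d.png" % i,
        "camera_location": list(camera.location),
        "focal_length_mm": camera.data.lens,
        "sun_rotation": list(sun.rotation_euler),
    })

with open(bpy.path.abspath("//renders/annotations.json"), "w") as f:
    json.dump(annotations, f, indent=2)
```

Because each image is rendered from known parameters, the same loop that produces the image also produces its annotation, which is what keeps the cost of the synthesized ground truth negligible compared to manual labelling.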
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
20206 - Computer hardware and architecture
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public sources (with a link to CEP)
Others
Year of implementation
2014
Data confidentiality code
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the proceedings
Proceedings of Spring Conference on Computer Graphics
ISBN
978-80-223-3601-7
ISSN
—
e-ISSN
—
Number of pages
8
Pages from-to
105-112
Publisher name
Comenius University in Bratislava
Place of publication
Smolenice
Event location
Smolenice
Event date
27. 5. 2014
Event type by nationality
EUR - European event
UT WoS code of the article
—