Learning CNNs from Weakly Annotated Facial Images
Result identifiers
Result code in IS VaVaI
RIV/68407700:21230/18:00324147 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F18%3A00324147)
Result on the web
https://doi.org/10.1016/j.imavis.2018.06.011
DOI - Digital Object Identifier
10.1016/j.imavis.2018.06.011
Alternative languages
Result language
English
Title in original language
Learning CNNs from Weakly Annotated Facial Images
Description in original language
Learning convolutional neural networks (CNNs) to perform a face recognition task requires a large set of facial images, each annotated with the label to be predicted. In this paper we propose a method for learning CNNs from weakly annotated images. Weak annotation in our setting means that a pair of an attribute label and a person identity label is assigned to the set of faces automatically detected in an image; the challenge is to link the annotation with the correct face. Weakly annotated images of this type can be collected by an automated process requiring no human labor. We formulate learning from weakly annotated images as maximum likelihood (ML) estimation of a parametric distribution describing the weakly annotated images. The ML problem is solved by an instance of the EM algorithm which, in its inner loop, learns a CNN to predict the attribute label from facial images. Experiments on the age and gender estimation problem show that the proposed algorithm significantly outperforms the existing heuristic approach for dealing with this type of data. A practical outcome of our paper is a new annotation of the IMDB database [26] containing 300k faces, each annotated with biological age, gender, and identity labels.
Title in English
Learning CNNs from Weakly Annotated Facial Images
Description in English
Learning convolutional neural networks (CNNs) to perform a face recognition task requires a large set of facial images, each annotated with the label to be predicted. In this paper we propose a method for learning CNNs from weakly annotated images. Weak annotation in our setting means that a pair of an attribute label and a person identity label is assigned to the set of faces automatically detected in an image; the challenge is to link the annotation with the correct face. Weakly annotated images of this type can be collected by an automated process requiring no human labor. We formulate learning from weakly annotated images as maximum likelihood (ML) estimation of a parametric distribution describing the weakly annotated images. The ML problem is solved by an instance of the EM algorithm which, in its inner loop, learns a CNN to predict the attribute label from facial images. Experiments on the age and gender estimation problem show that the proposed algorithm significantly outperforms the existing heuristic approach for dealing with this type of data. A practical outcome of our paper is a new annotation of the IMDB database [26] containing 300k faces, each annotated with biological age, gender, and identity labels.
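The abstract describes the learning procedure only at a high level. The sketch below is a minimal, illustrative reading of such an EM loop, assuming a uniform prior over the detected faces in each image and treating "which face carries the annotation" as the latent variable; the identity label is omitted for brevity, and the names `predict_proba` and `retrain` are hypothetical placeholders, not the authors' API or exact objective.

```python
# Minimal EM sketch for learning from weakly annotated images.
# Assumptions (not from the paper): uniform prior over detected faces,
# and hypothetical callables `predict_proba` / `retrain` standing in for
# the CNN's forward pass and its (re)training procedure.
import numpy as np

def em_weak_annotation(faces_per_image, attr_labels, predict_proba, retrain,
                       n_iters=10):
    """EM loop linking each weak annotation to one of the detected faces.

    faces_per_image : list of lists; faces_per_image[i] holds the face crops
                      automatically detected in image i.
    attr_labels     : attr_labels[i] is the attribute label (e.g. age bin or
                      gender) annotated for image i.
    predict_proba   : callable, face -> vector of P(attribute | face) under
                      the current CNN.
    retrain         : callable that re-learns the CNN from faces weighted by
                      their responsibilities and returns a new predict_proba.
    """
    for _ in range(n_iters):
        # E-step: posterior probability that each detected face is the one
        # the annotation refers to (uniform prior over faces in the image).
        responsibilities = []
        for faces, y in zip(faces_per_image, attr_labels):
            lik = np.array([predict_proba(f)[y] for f in faces])
            responsibilities.append(lik / lik.sum())
        # M-step: update the CNN on faces weighted by the E-step posteriors;
        # this corresponds to the "inner loop" CNN learning in the abstract.
        predict_proba = retrain(faces_per_image, responsibilities, attr_labels)
    return predict_proba
```

In this reading, the E-step softly links the annotation to the candidate faces, and the M-step retrains the attribute predictor on the re-weighted data, which is how an EM instance of the stated ML formulation would typically alternate.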
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of implementation
2018
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
Image and Vision Computing
ISSN
0262-8856
e-ISSN
1872-8138
Periodical volume
77
Issue number within the volume
September
Publisher's country
GB - United Kingdom of Great Britain and Northern Ireland
Number of pages
11
Pages from-to
10-20
UT WoS code of the article
000446282900002
Scopus EID of the result
2-s2.0-85049924723