FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F62690094%3A18450%2F22%3A50019494" target="_blank" >RIV/62690094:18450/22:50019494 - isvavai.cz</a>
Result on the web
<a href="https://ieeexplore.ieee.org/document/9896934" target="_blank" >https://ieeexplore.ieee.org/document/9896934</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/TAFFC.2022.3208309" target="_blank" >10.1109/TAFFC.2022.3208309</a>
Alternative languages
Result language
English
Title in the original language
FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition
Description in the original language
With the advent of deep learning, facial expression recognition (FER) has attracted considerable research interest, and various deep convolutional neural network (DCNN) architectures have been developed for real-time, efficient FER. One challenge in FER is obtaining trustworthy features that are strongly associated with changes in facial expression. Moreover, traditional DCNNs for FER suffer from two significant issues: insufficient training data, which leads to overfitting, and intra-class variation in facial appearance. This study proposes FLEPNet, a texture-based feature-level ensemble parallel network for FER, and shows that it addresses these problems. FLEPNet uses multi-scale convolutional and multi-scale residual block-based DCNNs as building blocks. First, modified homomorphic filtering normalizes illumination, which minimizes intra-class differences. The deep networks are then protected against insufficient training data by applying texture analysis to facial expression images to identify multiple attributes: four texture features are extracted and combined with the image's original characteristics. Finally, the integrated features retrieved by the two networks are used to classify seven facial expressions. Experimental results show that the proposed technique achieves average accuracies of 0.9914, 0.9894, 0.9796, 0.8756, and 0.8072 on the Japanese Female Facial Expressions, Extended Cohn-Kanade, Karolinska Directed Emotional Faces, Real-world Affective Face Database, and Facial Expression Recognition 2013 databases, respectively. Moreover, the experimental outcomes demonstrate high reliability compared with competing approaches.
Title in English
FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition
Description in English
With the advent of deep learning, facial expression recognition (FER) has attracted considerable research interest, and various deep convolutional neural network (DCNN) architectures have been developed for real-time, efficient FER. One challenge in FER is obtaining trustworthy features that are strongly associated with changes in facial expression. Moreover, traditional DCNNs for FER suffer from two significant issues: insufficient training data, which leads to overfitting, and intra-class variation in facial appearance. This study proposes FLEPNet, a texture-based feature-level ensemble parallel network for FER, and shows that it addresses these problems. FLEPNet uses multi-scale convolutional and multi-scale residual block-based DCNNs as building blocks. First, modified homomorphic filtering normalizes illumination, which minimizes intra-class differences. The deep networks are then protected against insufficient training data by applying texture analysis to facial expression images to identify multiple attributes: four texture features are extracted and combined with the image's original characteristics. Finally, the integrated features retrieved by the two networks are used to classify seven facial expressions. Experimental results show that the proposed technique achieves average accuracies of 0.9914, 0.9894, 0.9796, 0.8756, and 0.8072 on the Japanese Female Facial Expressions, Extended Cohn-Kanade, Karolinska Directed Emotional Faces, Real-world Affective Face Database, and Facial Expression Recognition 2013 databases, respectively. Moreover, the experimental outcomes demonstrate high reliability compared with competing approaches.
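The illumination-normalization step mentioned in the abstract, modified homomorphic filtering, can be sketched in its classical form. The authors' specific modification is not detailed in the abstract, so the Gaussian high-emphasis transfer function and the parameter values below (gamma_l, gamma_h, c, d0) are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Classical homomorphic filtering for illumination normalization.

    img: 2-D float array with values in (0, 1]. Returns a filtered image
    of the same shape with the low-frequency illumination component
    attenuated and the high-frequency reflectance component emphasized.
    """
    img = np.asarray(img, dtype=np.float64)
    rows, cols = img.shape
    # Log transform turns the multiplicative illumination-reflectance
    # model into an additive one, so the two can be separated in frequency.
    log_img = np.log1p(img)
    F = np.fft.fftshift(np.fft.fft2(log_img))
    # Gaussian high-emphasis filter: gamma_l < 1 suppresses low
    # frequencies (illumination), gamma_h > 1 boosts high frequencies.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    # Invert the log transform to return to the intensity domain.
    return np.expm1(filtered)
```

Applied to a face image with a strong lighting gradient, this flattens the gradient while preserving texture detail, which is what reduces the intra-class differences the abstract refers to.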
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result linkages
Project
—
Linkages
S - Specific university research<br>I - Institutional support for the long-term conceptual development of a research organization
Others
Year of implementation
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Result-type-specific data
Periodical name
IEEE Transactions on Affective Computing
ISSN
1949-3045
e-ISSN
1949-3045
Periodical volume
13
Issue within the volume
4
Publisher country
US - United States of America
Number of pages
13
Pages from-to
2058-2070
UT WoS article code
000892948500028
Result EID in Scopus
2-s2.0-85139441989