FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F62690094%3A18450%2F22%3A50019494" target="_blank" >RIV/62690094:18450/22:50019494 - isvavai.cz</a>

  • Result on the web

    <a href="https://ieeexplore.ieee.org/document/9896934" target="_blank" >https://ieeexplore.ieee.org/document/9896934</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/TAFFC.2022.3208309" target="_blank" >10.1109/TAFFC.2022.3208309</a>

Alternative languages

  • Result language

    English

  • Original language name

    FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition

  • Original language description

    With the advent of deep learning, research on facial expression recognition (FER) has received a lot of interest. Different deep convolutional neural network (DCNN) architectures have been developed for real-time and efficient FER. One of the challenges in FER is obtaining trustworthy features that are strongly associated with changes in facial expression. Furthermore, traditional DCNNs for FER have two significant issues: insufficient training data, which leads to overfitting, and intra-class variations in facial appearance. FLEPNet, a texture-based feature-level ensemble parallel network for FER, is proposed in this study and shown to address these problems. The parallel network FLEPNet uses multi-scale convolutional and multi-scale residual block-based DCNNs as building blocks. First, modified homomorphic filtering is applied to normalize illumination effectively, which minimizes the intra-class difference. The deep networks are then protected against insufficient training data by using texture analysis on facial expression images to identify multiple attributes: four texture features are extracted and combined with the image's original characteristics. Finally, the integrated features retrieved by the two networks are used to classify seven facial expressions. Experimental results reveal that the proposed technique achieves an average accuracy of 0.9914, 0.9894, 0.9796, 0.8756, and 0.8072 on the Japanese Female Facial Expressions, Extended Cohn-Kanade, Karolinska Directed Emotional Faces, Real-world Affective Face Database, and Facial Expression Recognition 2013 databases, respectively. Moreover, experimental outcomes show significant reliability when compared to competing approaches. (A minimal code sketch of this two-branch design follows this list.)

  • Czech name

  • Czech description
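
To make the architecture in the description above concrete, here is a minimal, hypothetical PyTorch outline of a FLEPNet-style model: a simplified homomorphic_filter stand-in for illumination normalization, one multi-scale convolutional branch and one multi-scale residual branch running in parallel, and feature-level fusion by concatenation into a seven-way expression classifier. All names, layer sizes, and the filter itself are illustrative assumptions, not the authors' published implementation; the texture-feature extraction step is omitted.

    # Hypothetical sketch of a FLEPNet-style two-branch network (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def homomorphic_filter(img: torch.Tensor, cutoff: float = 0.1) -> torch.Tensor:
        """Simplified homomorphic filtering: suppress low-frequency illumination
        in the log domain with a Gaussian high-pass filter (illustrative only)."""
        log_img = torch.log1p(img)                    # multiplicative model -> additive
        spec = torch.fft.fft2(log_img)
        h, w = img.shape[-2:]
        fy = torch.fft.fftfreq(h).view(-1, 1)
        fx = torch.fft.fftfreq(w).view(1, -1)
        dist2 = fy ** 2 + fx ** 2
        highpass = 1.0 - torch.exp(-dist2 / (2 * cutoff ** 2))  # attenuate illumination
        filtered = torch.fft.ifft2(spec * highpass).real
        return torch.expm1(filtered).clamp(min=0.0)

    class MultiScaleConvBlock(nn.Module):
        """Parallel 3x3 / 5x5 / 7x7 convolutions, concatenated channel-wise."""
        def __init__(self, in_ch: int, branch_ch: int):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in (3, 5, 7)
            )
            self.bn = nn.BatchNorm2d(3 * branch_ch)

        def forward(self, x):
            return F.relu(self.bn(torch.cat([b(x) for b in self.branches], dim=1)))

    class MultiScaleResBlock(nn.Module):
        """Residual variant: multi-scale features added to a 1x1-projected input."""
        def __init__(self, in_ch: int, branch_ch: int):
            super().__init__()
            self.ms = MultiScaleConvBlock(in_ch, branch_ch)
            self.proj = nn.Conv2d(in_ch, 3 * branch_ch, 1)  # match channel count

        def forward(self, x):
            return F.relu(self.ms(x) + self.proj(x))

    class FLEPNetSketch(nn.Module):
        """Two parallel branches; feature-level ensemble by concatenation, then a
        seven-way classifier (angry, disgust, fear, happy, sad, surprise, neutral)."""
        def __init__(self, in_ch: int = 1, num_classes: int = 7):
            super().__init__()
            self.conv_branch = nn.Sequential(
                MultiScaleConvBlock(in_ch, 16), nn.MaxPool2d(2),
                MultiScaleConvBlock(48, 32), nn.AdaptiveAvgPool2d(1),
            )
            self.res_branch = nn.Sequential(
                MultiScaleResBlock(in_ch, 16), nn.MaxPool2d(2),
                MultiScaleResBlock(48, 32), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(96 + 96, num_classes)  # fused 192-d vector

        def forward(self, x):
            x = homomorphic_filter(x)               # illumination normalization
            f1 = self.conv_branch(x).flatten(1)     # branch-1 features, (N, 96)
            f2 = self.res_branch(x).flatten(1)      # branch-2 features, (N, 96)
            fused = torch.cat([f1, f2], dim=1)      # feature-level fusion
            return self.classifier(fused)

    if __name__ == "__main__":
        logits = FLEPNetSketch()(torch.rand(4, 1, 48, 48))  # FER2013-sized grayscale batch
        print(logits.shape)                                 # torch.Size([4, 7])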

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)

Result continuities

  • Project

  • Continuities

    S - Specific university research
    I - Institutional support for the long-term conceptual development of a research organisation

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    IEEE Transactions on Affective Computing

  • ISSN

    1949-3045

  • e-ISSN

    1949-3045

  • Volume of the periodical

    13

  • Issue of the periodical within the volume

    4

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    13

  • Pages from-to

    2058-2070

  • UT code for WoS article

    000892948500028

  • EID of the result in the Scopus database

    2-s2.0-85139441989