Lipreading with LipsID
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F20%3A43959761" target="_blank" >RIV/49777513:23520/20:43959761 - isvavai.cz</a>
Result on the web
<a href="https://link.springer.com/chapter/10.1007%2F978-3-030-60276-5_18" target="_blank" >https://link.springer.com/chapter/10.1007%2F978-3-030-60276-5_18</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/978-3-030-60276-5_18" target="_blank" >10.1007/978-3-030-60276-5_18</a>
Alternative languages
Result language
English
Original language name
Lipreading with LipsID
Original language description
This paper presents an approach for adapting current visual speech recognition systems. The adaptation technique is based on LipsID features, which represent a processed lip region of interest (ROI). The features are extracted in a classification task by a neural network pre-trained on the dataset specific to the lip-reading system used for visual speech recognition. The training procedure for LipsID implements ArcFace loss to separate different speakers in the dataset and to provide distinctive features for each of them. The network uses convolutional layers to extract features from input sequences of speaker images and is designed to take the same input as the lipreading system. Parallel processing of the input sequence by the LipsID network and the lipreading network is followed by a combination of both feature sets and final recognition by the Connectionist Temporal Classification (CTC) mechanism. This paper presents results from experiments with the LipNet network, re-implementing the system and comparing it with and without LipsID features. The results show a promising path for future experiments and other systems. The training and testing of the neural networks used in this work relies on TensorFlow/Keras implementations.
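The following is a minimal, hypothetical TensorFlow/Keras sketch of the fusion idea described above: a LipsID branch (pre-trained elsewhere as an ArcFace speaker classifier) and a LipNet-style lipreading branch process the same lip-ROI sequence in parallel, their per-frame features are concatenated, and the combined sequence is mapped to character probabilities for CTC decoding. All layer sizes, names, input dimensions, and the vocabulary size are assumptions for illustration; this is not the authors' implementation.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed dimensions: 75 frames of a 50x100 RGB lip ROI, 28-symbol vocabulary
# (characters + CTC blank). These numbers are illustrative, not from the paper.
T, H, W, C = 75, 50, 100, 3
VOCAB = 28

frames = layers.Input(shape=(T, H, W, C), name="lip_roi_sequence")

# LipsID branch: a per-frame CNN producing speaker-discriminative embeddings.
# In the paper this branch is pre-trained as a speaker classifier with ArcFace
# loss; here it is only sketched as an untrained feature extractor.
def lipsid_cnn():
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, name="lipsid_embedding"),
    ], name="lipsid_cnn")

lipsid_feats = layers.TimeDistributed(lipsid_cnn(), name="lipsid_branch")(frames)

# Lipreading branch: LipNet-style 3D convolutions over the same input sequence.
x = layers.Conv3D(32, (3, 5, 5), padding="same", activation="relu")(frames)
x = layers.MaxPooling3D((1, 2, 2))(x)
x = layers.Conv3D(64, (3, 5, 5), padding="same", activation="relu")(x)
x = layers.MaxPooling3D((1, 2, 2))(x)
lipread_feats = layers.TimeDistributed(layers.Flatten())(x)

# Fusion: concatenate both feature streams per time step, add recurrence,
# and emit per-frame character probabilities to be trained/decoded with CTC
# (e.g. tf.keras.backend.ctc_batch_cost during training).
fused = layers.Concatenate(name="feature_fusion")([lipread_feats, lipsid_feats])
fused = layers.Bidirectional(layers.GRU(256, return_sequences=True))(fused)
char_probs = layers.Dense(VOCAB, activation="softmax", name="ctc_output")(fused)

model = Model(frames, char_probs, name="lipsid_fusion_sketch")
model.summary()

In such a setup the LipsID branch would typically be kept frozen after its speaker-identification pre-training, so that the fusion stage only learns how to combine the speaker-specific embeddings with the lipreading features.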
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
20205 - Automation and control systems
Result continuities
Project
<a href="/en/project/LTARF18017" target="_blank" >LTARF18017: AMIR – Multi-modal interface based on gestures, speech and sign language for control of an assistive mobile information robot</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2020
Confidentiality
S - Complete and accurate data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
22nd International Conference, SPECOM 2020, St. Petersburg, Russia, October 7–9, 2020, Proceedings
ISBN
978-3-030-60275-8
ISSN
0302-9743
e-ISSN
1611-3349
Number of pages
8
Pages from-to
176-183
Publisher name
Springer
Place of publication
Cham
Event location
St. Petersburg, Russia
Event date
Oct 7, 2020 - Oct 9, 2020
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—