Deep Neural Networks Based Automatic Speech Recognition for Four Ethiopian Languages
Result identifiers
Result code in IS VaVaI
RIV/00216208:11320/20:10426988 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F20%3A10426988)
Result on the web
https://ieeexplore.ieee.org/document/9053883
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
Deep Neural Networks Based Automatic Speech Recognition for Four Ethiopian Languages
Result description in the original language
In this work, we present speech recognition systems for four Ethiopian languages: Amharic, Tigrigna, Oromo, and Wolaytta. We used comparable training corpora of about 20 to 29 hours of speech and about 1 hour of evaluation speech for each language. For Amharic and Tigrigna, lexical and language models of different vocabulary sizes were developed; for Oromo and Wolaytta, the training lexicons were used for decoding. Using acoustic models based on deep neural networks (DNNs), we achieved relative word error rate (WER) reductions ranging from 15.1% to 31.45% across all the languages. The relative improvement obtained for the Wolaytta speech recognition system (31.45%) is much higher than those achieved for the other languages, which we attribute to the weaker language model and the larger amount of training speech used for Wolaytta.
Title in English
Deep Neural Networks Based Automatic Speech Recognition for Four Ethiopian Languages
Result description in English
In this work, we present speech recognition systems for four Ethiopian languages: Amharic, Tigrigna, Oromo, and Wolaytta. We used comparable training corpora of about 20 to 29 hours of speech and about 1 hour of evaluation speech for each language. For Amharic and Tigrigna, lexical and language models of different vocabulary sizes were developed; for Oromo and Wolaytta, the training lexicons were used for decoding. Using acoustic models based on deep neural networks (DNNs), we achieved relative word error rate (WER) reductions ranging from 15.1% to 31.45% across all the languages. The relative improvement obtained for the Wolaytta speech recognition system (31.45%) is much higher than those achieved for the other languages, which we attribute to the weaker language model and the larger amount of training speech used for Wolaytta.
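For clarity, the relative WER reduction quoted in the abstract is conventionally computed as (baseline WER - DNN WER) / baseline WER. The sketch below is a minimal, hypothetical illustration of that arithmetic and of how WER itself is scored; the reference/hypothesis strings and the example figures are placeholders, not values from the paper.

# Illustrative sketch (not taken from the paper): WER and relative WER reduction.
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

def relative_wer_reduction(baseline_wer: float, dnn_wer: float) -> float:
    """Relative WER reduction (%) of a DNN system over a baseline system."""
    return (baseline_wer - dnn_wer) / baseline_wer * 100.0

if __name__ == "__main__":
    # Hypothetical numbers chosen only to show the calculation: a baseline WER of
    # 40.0% improved to 27.4% corresponds to a 31.5% relative reduction, i.e. the
    # order of magnitude of the improvements quoted in the abstract.
    print(f"{relative_wer_reduction(40.0, 27.4):.1f}% relative WER reduction")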
Classification
Type
O - Other results
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Year of implementation
2020
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations