Analysis of Multilingual Sequence-to-Sequence Speech Recognition Systems

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F19%3APU134178" target="_blank" >RIV/00216305:26230/19:PU134178 - isvavai.cz</a>

  • Result on the web

    <a href="https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2355.pdf" target="_blank" >https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2355.pdf</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.21437/Interspeech.2019-2355" target="_blank" >10.21437/Interspeech.2019-2355</a>

Alternative languages

  • Result language

    English

  • Original language name

    Analysis of Multilingual Sequence-to-Sequence Speech Recognition Systems

  • Original language description

    This paper investigates the application of various multilingual approaches developed for conventional deep neural network - hidden Markov model (DNN-HMM) systems to sequence-to-sequence (seq2seq) automatic speech recognition (ASR). We employ a joint connectionist temporal classification-attention network as our base model. Our main contribution is separated into two parts. First, we investigate the effectiveness of the seq2seq model with stacked multilingual bottle-neck features obtained from a conventional DNN-HMM system on the Babel multilingual speech corpus. Second, we investigate the effectiveness of transfer learning from a pre-trained multilingual seq2seq model with and without the target language included in the original multilingual training data. In this experiment, we also explore various architectures and training strategies of the multilingual seq2seq model, making use of knowledge obtained from DNN-HMM-based transfer learning. Although both approaches significantly improved performance over a monolingual seq2seq baseline, interestingly, we found the multilingual bottle-neck features to be superior to multilingual models with transfer learning. This finding suggests that we can efficiently combine the benefits of the DNN-HMM system with the seq2seq system through multilingual bottle-neck feature techniques.

  • Czech name

  • Czech description
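
The original language description above refers to a joint CTC-attention network as the base model. As a rough illustration only (this is not the authors' code), the sketch below shows how such an interpolated CTC/attention training objective is commonly formed; the function name, tensor shapes, padding convention, and the weight ctc_weight are assumptions made for the example.

    # Minimal sketch of a joint CTC-attention loss (illustrative, not from the paper).
    import torch
    import torch.nn.functional as F

    def joint_ctc_attention_loss(ctc_log_probs,   # (T, B, V) encoder log-probabilities
                                 dec_logits,      # (B, L, V) attention-decoder logits
                                 targets,         # (B, L) target token ids, padded with pad_id
                                 input_lengths,   # (B,) encoder frames per utterance
                                 target_lengths,  # (B,) target tokens per utterance
                                 ctc_weight=0.3,  # interpolation weight (assumed value)
                                 pad_id=-1):
        # CTC branch: marginalises over monotonic frame-to-label alignments.
        # Padded positions are never read here because target_lengths bounds each row.
        ctc = F.ctc_loss(ctc_log_probs, targets, input_lengths, target_lengths,
                         blank=0, zero_infinity=True)
        # Attention branch: per-token cross-entropy on the decoder output,
        # with padded positions masked out via ignore_index.
        att = F.cross_entropy(dec_logits.transpose(1, 2), targets, ignore_index=pad_id)
        # Interpolated multi-task objective.
        return ctc_weight * ctc + (1.0 - ctc_weight) * att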

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/LQ1602" target="_blank" >LQ1602: IT4Innovations excellence in science</a><br>

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

    S - Specific research at universities

Others

  • Publication year

    2019

  • Confidentiality

    S - Complete and truthful data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of Interspeech

  • ISBN

  • ISSN

    1990-9772

  • e-ISSN

  • Number of pages

    5

  • Pages from-to

    2220-2224

  • Publisher name

    International Speech Communication Association

  • Place of publication

    Graz

  • Event location

    INTERSPEECH 2019

  • Event date

    Sep 15, 2019

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article