Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F18%3APU136136" target="_blank" >RIV/00216305:26230/18:PU136136 - isvavai.cz</a>

  • Result on the web

    <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8639655" target="_blank" >https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8639655</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/SLT.2018.8639655" target="_blank" >10.1109/SLT.2018.8639655</a>

Alternative languages

  • Result language

    English

  • Original language name

    Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling

  • Original language description

    The sequence-to-sequence (seq2seq) approach for low-resource ASR is a relatively new direction in speech research. The approach benefits from performing model training without a lexicon or alignments. However, this poses a new problem: it requires more data than conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that transfer learning from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, achieving recognition performance comparable to models trained on twice as much training data. (A hedged sketch of the RNNLM integration follows this list.)

  • Czech name

  • Czech description
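
The description above mentions integrating a recurrent neural network language model (RNNLM) with the seq2seq model during decoding. One common way to do this, assumed here for illustration (the paper may differ in detail), is shallow fusion: during beam search, the external LM's log-probability for each candidate token is added to the decoder's log-probability with a tunable weight. The Python sketch below is a toy illustration of that idea only; seq2seq_logprobs, rnnlm_logprobs, and LM_WEIGHT are hypothetical stand-ins, not the authors' implementation.

    import math

    # Toy vocabulary; "<eos>" terminates a hypothesis.
    VOCAB = ["<eos>", "a", "b", "c"]
    LM_WEIGHT = 0.3   # hypothetical interpolation weight for the external RNNLM
    BEAM_SIZE = 2
    MAX_LEN = 4

    def _normalize(scores):
        # Turn arbitrary scores into log-probabilities (log-softmax).
        z = math.log(sum(math.exp(s) for s in scores))
        return [s - z for s in scores]

    def seq2seq_logprobs(prefix):
        # Hypothetical stand-in for the seq2seq decoder's next-token scores.
        return _normalize([0.1 * ((len(prefix) + ord(t[0])) % 5) for t in VOCAB])

    def rnnlm_logprobs(prefix):
        # Hypothetical stand-in for the external RNN language model.
        return _normalize([0.2 * ((2 * len(prefix) + ord(t[-1])) % 3) for t in VOCAB])

    def beam_search_with_shallow_fusion():
        beams = [([], 0.0)]   # (token prefix, cumulative fused log-probability)
        for _ in range(MAX_LEN):
            candidates = []
            for prefix, score in beams:
                if prefix and prefix[-1] == "<eos>":
                    candidates.append((prefix, score))   # keep finished hypotheses
                    continue
                am = seq2seq_logprobs(prefix)
                lm = rnnlm_logprobs(prefix)
                for i, token in enumerate(VOCAB):
                    # Shallow fusion: log-linear interpolation of the two models.
                    fused = am[i] + LM_WEIGHT * lm[i]
                    candidates.append((prefix + [token], score + fused))
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:BEAM_SIZE]
        return beams[0]

    print(beam_search_with_shallow_fusion())

In a real system the two scoring functions would come from the trained seq2seq decoder and RNNLM, and the fusion weight would typically be tuned on a development set.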

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/LQ1602" target="_blank" >LQ1602: IT4Innovations excellence in science</a><br>

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2018

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the 2018 IEEE Workshop on Spoken Language Technology (SLT 2018)

  • ISBN

    978-1-5386-4334-1

  • ISSN

  • e-ISSN

  • Number of pages

    7

  • Pages from-to

    521-527

  • Publisher name

    IEEE Signal Processing Society

  • Place of publication

    Athens

  • Event location

    Athens

  • Event date

    Dec 18, 2018

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    000463141800073