Deep LSTM Spoken Term Detection using Wav2Vec 2.0 Recognizer

The result's identifiers

  • Result code in IS VaVaI

    RIV/49777513:23520/22:43965704 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F22%3A43965704)

  • Result on the web

    https://www.isca-speech.org/archive/interspeech_2022/svec22_interspeech.html

  • DOI - Digital Object Identifier

    10.21437/Interspeech.2022-10409 (http://dx.doi.org/10.21437/Interspeech.2022-10409)

Alternative languages

  • Result language

    English

  • Original language name

    Deep LSTM Spoken Term Detection using Wav2Vec 2.0 Recognizer

  • Original language description

    In recent years, standard hybrid DNN-HMM speech recognizers have been outperformed by end-to-end speech recognition systems. One very promising approach is the grapheme-based Wav2Vec 2.0 model, which combines self-supervised pretraining with transfer learning of a fine-tuned speech recognizer. Since it requires neither a pronunciation vocabulary nor a language model, the approach is suitable for tasks where obtaining such models is difficult or almost impossible. In this paper, we use the Wav2Vec speech recognizer for spoken term detection over a large set of spoken documents. The method employs a deep LSTM network that maps the recognized hypothesis and the searched term into a shared pronunciation embedding space in which term occurrences and the assigned scores are easily computed. The paper describes a bootstrapping approach that transfers the knowledge contained in the traditional pronunciation vocabulary of a DNN-HMM hybrid ASR into the context of grapheme-based Wav2Vec. The proposed method outperforms the previously published system based on a combination of DNN-HMM hybrid ASR and a phoneme recognizer by a large margin on the MALACH data in both English and Czech. (An illustrative code sketch of the shared-embedding matching is given after this section.)

  • Czech name

  • Czech description
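
The description above explains the core mechanism: a deep LSTM maps both the recognized hypothesis and the searched term into a shared pronunciation embedding space, where term occurrences and their scores are computed. The following PyTorch sketch is only an assumption-laden illustration of that idea, not the authors' implementation: the encoder architecture, dimensions, toy vocabulary, and the whole-segment cosine scoring are hypothetical placeholders (the actual system scores term occurrences at positions within long spoken documents).

    # Hypothetical illustration only -- class, variable, and parameter names are
    # placeholders, not the authors' code or API.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraphemeEncoder(nn.Module):
        """Character-level bidirectional LSTM that embeds a grapheme string."""
        def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                                batch_first=True, bidirectional=True)
            self.proj = nn.Linear(2 * hidden, hidden)

        def forward(self, ids: torch.Tensor) -> torch.Tensor:
            # ids: (batch, seq_len) grapheme indices
            out, _ = self.lstm(self.embed(ids))
            # Mean-pool over time and L2-normalize into the shared space.
            return F.normalize(self.proj(out.mean(dim=1)), dim=-1)

    # Toy grapheme vocabulary; one encoder for terms, one for hypothesis segments.
    vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
    term_encoder = GraphemeEncoder(len(vocab))
    hyp_encoder = GraphemeEncoder(len(vocab))

    def embed(text: str, encoder: GraphemeEncoder) -> torch.Tensor:
        ids = torch.tensor([[vocab[c] for c in text.lower() if c in vocab]])
        return encoder(ids)

    # Detection score for a candidate hypothesis segment: cosine similarity
    # (dot product of the normalized embeddings) in the shared space.
    term_vec = embed("speech recognizer", term_encoder)
    segment_vec = embed("the speech recognize was retrained", hyp_encoder)
    score = (term_vec * segment_vec).sum(dim=-1).item()
    print(f"detection score: {score:.3f}")

In this simplified form, a higher cosine score means the segment's graphemes map close to the searched term in the shared space, which is how near-matches such as recognition errors can still be detected.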

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    20205 - Automation and control systems

Result continuities

  • Project

    VJ01010108: Robust processing of recordings for operations and security

  • Continuities

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and true data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH

  • ISBN

  • ISSN

    2308-457X

  • e-ISSN

  • Number of pages

    5

  • Pages from-to

    1886-1890

  • Publisher name

    International Speech Communication Association

  • Place of publication

    New York

  • Event location

    Incheon, Korea

  • Event date

    Sep 18, 2022

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article