
How Does Pre-Trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications

The result's identifiers

  • Result code in IS VaVaI

    RIV/00216305:26230/23:PU149421 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F23%3APU149421)

  • Result on the web

    https://ieeexplore.ieee.org/document/10022724

  • DOI - Digital Object Identifier

    10.1109/SLT54892.2023.10022724 (http://dx.doi.org/10.1109/SLT54892.2023.10022724)

Alternative languages

  • Result language

    English

  • Original language name

    How Does Pre-Trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications

  • Original language description

    Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions of 20% to 40% are obtained over hybrid-based ASR baselines by fine-tuning the E2E acoustic models on only a small fraction of labeled data. We also analyze WERs in the low-resource scenario and the gender bias carried by one ATC dataset. (An illustrative fine-tuning sketch follows this list.)

  • Czech name

  • Czech description
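
The description above outlines fine-tuning a pre-trained Wav2Vec 2.0 acoustic model with a CTC head on a small amount of labeled ATC speech. The sketch below illustrates that setup using the Hugging Face transformers API; the checkpoint name, the placeholder audio and transcript, and the choice to freeze the feature encoder are illustrative assumptions, not the exact configuration reported in the paper.

    import torch
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    # Load a publicly available pre-trained checkpoint (assumed name, for illustration only).
    ckpt = "facebook/wav2vec2-large-960h-lv60-self"
    processor = Wav2Vec2Processor.from_pretrained(ckpt)
    model = Wav2Vec2ForCTC.from_pretrained(ckpt)
    model.freeze_feature_encoder()  # keep the convolutional feature encoder fixed during fine-tuning

    # One labeled utterance: placeholder 5 s of 16 kHz audio and an uppercase ATC-style transcript.
    audio = torch.randn(16000 * 5).numpy()
    transcript = "CLEARED TO LAND RUNWAY TWO SEVEN"

    inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids

    # Forward pass returns the CTC loss; an optimizer would step over many such batches.
    outputs = model(input_values=inputs.input_values, labels=labels)
    outputs.loss.backward()

In the domain-shift setting studied here, only a small labeled fraction of each ATC corpus would be used for this fine-tuning step, while the self-supervised pre-training itself is left unchanged.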

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

    R - EC Framework Programme project

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings

  • ISBN

    978-1-6654-7189-3

  • ISSN

  • e-ISSN

  • Number of pages

    8

  • Pages from-to

    205-212

  • Publisher name

    IEEE Signal Processing Society

  • Place of publication

    Doha

  • Event location

    Doha

  • Event date

    Jan 9, 2023

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    000968851900028