Self-supervised speaker embeddings

The result's identifiers

  • Result code in IS VaVaI

RIV/00216305:26230/19:PU134182 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F19%3APU134182)

  • Result on the web

https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2842.pdf

  • DOI - Digital Object Identifier

10.21437/Interspeech.2019-2842 (http://dx.doi.org/10.21437/Interspeech.2019-2842)

Alternative languages

  • Result language

English

  • Original language name

    Self-supervised speaker embeddings

  • Original language description

    Unlike i-vectors, speaker embeddings such as x-vectors are incapable of leveraging unlabelled utterances, due to the classification loss over training speakers. In this paper, we explore an alternative training strategy to enable the use of unlabelled utterances in training. We propose to train speaker embedding extractors via reconstructing the frames of a target speech segment, given the inferred embedding of another speech segment of the same utterance. We do this by attaching to the standard speaker embedding extractor a decoder network, which we feed not merely with the speaker embedding, but also with the estimated phone sequence of the target frame sequence. The reconstruction loss can be used either as a single objective, or be combined with the standard speaker classification loss. In the latter case, it acts as a regularizer, encouraging generalizability to speakers unseen during training. In all cases, the proposed architectures are trained from scratch and in an end-to-end fashion. We demonstrate the benefits of the proposed approach on the VoxCeleb and Speakers in the Wild databases, and we report notable improvements over the baseline. (A minimal code sketch of this reconstruction objective follows this list.)

  • Czech name

  • Czech description
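
The abstract above describes a reconstruction-based training objective: the embedding inferred from one segment of an utterance, together with the estimated phone sequence of a second segment of the same utterance, is used to reconstruct the frames of that second segment. Conditioning the decoder on the phone sequence plausibly frees the embedding to carry speaker rather than phonetic information. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' code; all layer sizes, names, and the use of MSE as the reconstruction loss are assumptions made purely for illustration.

    # Hypothetical sketch of the reconstruction objective from the abstract;
    # not the authors' implementation. All shapes and layers are illustrative.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Stand-in for an x-vector-style segment encoder
        (frame-level layers followed by statistics pooling)."""
        def __init__(self, feat_dim=40, emb_dim=128):
            super().__init__()
            self.frames = nn.Sequential(
                nn.Conv1d(feat_dim, 256, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.proj = nn.Linear(2 * 256, emb_dim)  # mean + std pooling -> embedding

        def forward(self, x):                  # x: (batch, feat_dim, n_frames)
            h = self.frames(x)
            stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
            return self.proj(stats)            # (batch, emb_dim)

    class Decoder(nn.Module):
        """Reconstructs target frames from the speaker embedding plus the
        per-frame phone posteriors of the target segment (assumed to come
        from an external phone recognizer, as in the abstract)."""
        def __init__(self, emb_dim=128, n_phones=40, feat_dim=40):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(emb_dim + n_phones, 256), nn.ReLU(),
                nn.Linear(256, feat_dim),
            )

        def forward(self, emb, phones):        # phones: (batch, n_frames, n_phones)
            emb = emb.unsqueeze(1).expand(-1, phones.size(1), -1)
            return self.net(torch.cat([emb, phones], dim=2))

    encoder, decoder = Encoder(), Decoder()
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    # Toy batch: two segments cut from the same utterance (random tensors
    # here, purely to show shapes), plus phone posteriors for the target.
    seg_a = torch.randn(8, 40, 200)            # segment fed to the encoder
    seg_b = torch.randn(8, 150, 40)            # target frames to reconstruct
    phones_b = torch.softmax(torch.randn(8, 150, 40), dim=2)

    emb = encoder(seg_a)                       # speaker embedding of segment A
    recon = decoder(emb, phones_b)             # reconstruction of segment B
    loss = nn.functional.mse_loss(recon, seg_b)
    # Optionally add the standard speaker-classification loss here, in which
    # case the reconstruction term acts as a regularizer (per the abstract).
    loss.backward()
    opt.step()

Because both segments come from the same utterance, no speaker labels are needed for the reconstruction term, which is what lets unlabelled data enter training.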

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    The result was created during the realization of more than one project. More information is available in the Projects tab.

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2019

  • Confidentiality

    S - Complete and truthful data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the proceedings

    Proceedings of Interspeech

  • ISBN

  • ISSN

    1990-9772

  • e-ISSN

  • Number of pages

    5

  • Pages from-to

    2863-2867

  • Publisher name

    International Speech Communication Association

  • Place of publication

    Graz

  • Event location

    INTERSPEECH 2019

  • Event date

    Sep 15, 2019

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    000831796403001