Speaker activity driven neural speech extraction
Result identifiers
Result code in IS VaVaI
RIV/00216305:26230/21:PU140966 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F21%3APU140966)
Result on the web
https://www.fit.vut.cz/research/publication/12479/
DOI - Digital Object Identifier
10.1109/ICASSP39728.2021.9414998 (http://dx.doi.org/10.1109/ICASSP39728.2021.9414998)
Alternative languages
Result language
English
Title in the original language
Speaker activity driven neural speech extraction
Description of the result in the original language
Target speech extraction, which extracts the speech of a target speaker in a mixture given auxiliary speaker clues, has recently received increased interest. Various clues have been investigated, such as pre-recorded enrollment utterances, direction information, or video of the target speaker. In this paper, we explore the use of speaker activity information as an auxiliary clue for single-channel neural network-based speech extraction. We propose a speaker activity driven speech extraction neural network (ADEnet) and show that it can achieve performance levels competitive with enrollment-based approaches, without the need for pre-recordings. We further demonstrate the potential of the proposed approach for processing meeting-like recordings, where speaker activity obtained from a diarization system is used as a speaker clue for ADEnet. We show that this simple yet practical approach can successfully extract speakers after diarization, which leads to improved ASR performance when using a single microphone, especially in highly overlapping conditions, with a relative word error rate reduction of up to 25%.
Title in English
Speaker activity driven neural speech extraction
Description of the result in English
Target speech extraction, which extracts the speech of a target speaker in a mixture given auxiliary speaker clues, has recently received increased interest. Various clues have been investigated, such as pre-recorded enrollment utterances, direction information, or video of the target speaker. In this paper, we explore the use of speaker activity information as an auxiliary clue for single-channel neural network-based speech extraction. We propose a speaker activity driven speech extraction neural network (ADEnet) and show that it can achieve performance levels competitive with enrollment-based approaches, without the need for pre-recordings. We further demonstrate the potential of the proposed approach for processing meeting-like recordings, where speaker activity obtained from a diarization system is used as a speaker clue for ADEnet. We show that this simple yet practical approach can successfully extract speakers after diarization, which leads to improved ASR performance when using a single microphone, especially in highly overlapping conditions, with a relative word error rate reduction of up to 25%.
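The abstract only sketches the idea at a high level. The following is a minimal, hypothetical PyTorch sketch of how a single-channel mask-estimation network might be conditioned on a frame-level speaker-activity clue (e.g. diarization output) instead of an enrollment embedding. It is not the authors' ADEnet implementation; the class name, layer sizes, and the multiplicative conditioning are illustrative assumptions.

import torch
import torch.nn as nn


class ActivityDrivenExtractor(nn.Module):
    """Toy mask estimator conditioned on a per-frame speaker-activity clue."""

    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.mix_encoder = nn.Linear(n_freq, hidden)   # encodes the mixture spectrogram
        self.act_encoder = nn.Linear(1, hidden)        # encodes the activity clue
        self.blstm = nn.LSTM(hidden, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, n_freq), nn.Sigmoid())

    def forward(self, mix_mag, activity):
        # mix_mag:  (batch, frames, n_freq) magnitude spectrogram of the mixture
        # activity: (batch, frames)         target-speaker activity per frame,
        #                                   e.g. obtained from a diarization system
        h = self.mix_encoder(mix_mag) * self.act_encoder(activity.unsqueeze(-1))
        h, _ = self.blstm(h)
        mask = self.mask_head(h)     # time-frequency mask for the target speaker
        return mask * mix_mag        # estimated target magnitude spectrogram


# Toy usage: the "target" speaker is active only in the first half of the frames.
model = ActivityDrivenExtractor()
mix = torch.rand(1, 100, 257)
act = torch.zeros(1, 100)
act[:, :50] = 1.0
est = model(mix, act)
print(est.shape)  # torch.Size([1, 100, 257])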
Classification
Type
D - Article in conference proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result links
Project
LQ1602: IT4Innovations excellence in science
Links
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of publication
2021
Data confidentiality code
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISBN
978-1-7281-7605-5
ISSN
—
e-ISSN
—
Number of pages
5
Pages from-to
6099-6103
Publisher name
IEEE Signal Processing Society
Place of publication
Toronto
Event venue
Toronto, Canada
Event date
June 6, 2021
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
000704288406074