Optimization of Speaker-aware Multichannel Speech Extraction with ASR Criterion
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F18%3APU130736" target="_blank" >RIV/00216305:26230/18:PU130736 - isvavai.cz</a>
Result on the web
<a href="http://www.fit.vutbr.cz/research/pubs/all.php?id=11722" target="_blank" >http://www.fit.vutbr.cz/research/pubs/all.php?id=11722</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/ICASSP.2018.8461533" target="_blank" >10.1109/ICASSP.2018.8461533</a>
Alternative languages
Result language
English
Name in the original language
Optimization of Speaker-aware Multichannel Speech Extraction with ASR Criterion
Result description in the original language
This paper addresses the problem of recognizing speech corrupted by overlapping speakers in a multichannel setting. To extract a target speaker from the mixture, we use a neural beamformer that computes statistically optimal spatial filters from masks estimated by a neural network. Following our previous work, we inform the network about the target speaker using information extracted from an adaptation utterance, enabling it to track that speaker. Whereas our previous work used this method to extract the speaker separately and then passed the preprocessed speech to a speech recognition system, here we explore training both systems jointly with a common speech recognition criterion. We show that integrating the two systems and training for the final objective improves performance. In addition, the integration enables further sharing of information between the acoustic model and the speaker extraction system, by using the predicted HMM-state posteriors to refine the masks used for beamforming.
Name in English
Optimization of Speaker-aware Multichannel Speech Extraction with ASR Criterion
Result description in English
This paper addresses the problem of recognizing speech corrupted by overlapping speakers in a multichannel setting. To extract a target speaker from the mixture, we use a neural beamformer that computes statistically optimal spatial filters from masks estimated by a neural network. Following our previous work, we inform the network about the target speaker using information extracted from an adaptation utterance, enabling it to track that speaker. Whereas our previous work used this method to extract the speaker separately and then passed the preprocessed speech to a speech recognition system, here we explore training both systems jointly with a common speech recognition criterion. We show that integrating the two systems and training for the final objective improves performance. In addition, the integration enables further sharing of information between the acoustic model and the speaker extraction system, by using the predicted HMM-state posteriors to refine the masks used for beamforming.
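For illustration only, the following is a minimal, hypothetical PyTorch-style sketch of the pipeline the abstract describes: a speaker-aware mask-estimation network conditioned on an adaptation utterance, a mask-based MVDR beamformer, and an acoustic model producing HMM-state posteriors, all trained jointly with a cross-entropy ASR criterion. Every class name, layer size, and the toy acoustic model below are assumptions, not the authors' implementation; the sketch only shows that each step is differentiable, so the ASR loss can refine the extraction masks.

import torch
import torch.nn as nn

class SpeakerAwareMaskNet(nn.Module):
    """Estimates target-speech and interference masks from reference-channel
    log magnitudes, conditioned on an adaptation-utterance embedding
    (hypothetical architecture, not the paper's exact network)."""
    def __init__(self, n_freq=257, emb_dim=128, hidden=256):
        super().__init__()
        self.spk_enc = nn.GRU(n_freq, emb_dim, batch_first=True)   # adaptation-utterance encoder
        self.blstm = nn.LSTM(n_freq + emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2 * n_freq)               # speech mask + interference mask

    def forward(self, logmag, adapt_logmag):
        # logmag: (B, T, F); adapt_logmag: (B, T', F)
        _, spk = self.spk_enc(adapt_logmag)                        # final hidden state: (1, B, emb)
        spk = spk[-1].unsqueeze(1).expand(-1, logmag.size(1), -1)  # broadcast over frames
        h, _ = self.blstm(torch.cat([logmag, spk], dim=-1))
        m_s, m_n = torch.sigmoid(self.out(h)).chunk(2, dim=-1)     # target / interference masks
        return m_s, m_n

def spatial_cov(stft, mask, eps=1e-6):
    # Mask-weighted spatial covariance per frequency: Phi(f) = sum_t m y y^H / sum_t m
    y = stft.permute(0, 3, 2, 1)                                   # (B, F, T, C)
    m = mask.permute(0, 2, 1).unsqueeze(-1)                        # (B, F, T, 1)
    phi = torch.einsum('bftc,bftd->bfcd', m * y, y.conj())
    return phi / (m.sum(dim=2, keepdim=True) + eps)                # (B, F, C, C)

def mvdr(stft, m_s, m_n, ref=0):
    """MVDR filter w = (Phi_nn^{-1} Phi_ss / trace(Phi_nn^{-1} Phi_ss)) u_ref,
    a common mask-based formulation; differentiable end to end."""
    phi_s, phi_n = spatial_cov(stft, m_s), spatial_cov(stft, m_n)
    C = phi_n.size(-1)
    phi_n = phi_n + 1e-4 * torch.eye(C, dtype=phi_n.dtype, device=phi_n.device)  # diagonal loading
    num = torch.linalg.solve(phi_n, phi_s)                         # Phi_nn^{-1} Phi_ss
    w = num[..., ref] / (torch.einsum('bfcc->bf', num).unsqueeze(-1) + 1e-8)     # (B, F, C)
    return torch.einsum('bfc,bctf->btf', w.conj(), stft)           # enhanced STFT, (B, T, F)

class JointModel(nn.Module):
    def __init__(self, n_freq=257, n_states=2000):
        super().__init__()
        self.masks = SpeakerAwareMaskNet(n_freq)
        self.am = nn.Sequential(                                   # toy acoustic model (assumption)
            nn.Linear(n_freq, 512), nn.ReLU(),
            nn.Linear(512, n_states))

    def forward(self, stft, adapt_logmag, ref=0):
        logmag = torch.log1p(stft[:, ref].abs())                   # reference-channel features
        m_s, m_n = self.masks(logmag, adapt_logmag)
        enhanced = mvdr(stft, m_s, m_n, ref)
        return self.am(torch.log1p(enhanced.abs()))                # HMM-state logits

# Joint training: the ASR criterion back-propagates through the beamformer
# into the mask network, optimizing extraction for recognition.
model = JointModel()
stft = torch.randn(2, 4, 100, 257, dtype=torch.complex64)          # (batch, channels, frames, bins)
adapt = torch.randn(2, 50, 257)                                    # adaptation-utterance features
targets = torch.randint(0, 2000, (2, 100))                         # framewise HMM-state labels
logits = model(stft, adapt)
loss = nn.functional.cross_entropy(logits.reshape(-1, 2000), targets.reshape(-1))
loss.backward()

The abstract's final point corresponds to feeding the acoustic model's HMM-state posteriors back to refine the masks; that feedback loop is omitted here for brevity.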
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Year of implementation
2018
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article name in the proceedings
Proceedings of ICASSP 2018
ISBN
978-1-5386-4658-8
ISSN
—
e-ISSN
—
Number of pages of the result
5
Pages from-to
6702-6706
Publisher name
IEEE Signal Processing Society
Place of publication
Calgary
Event location
Calgary
Event date
April 15, 2018
Event type by nationality
WRD - Worldwide event
UT WoS article code
000446384606172