Analysis of impact of emotions on target speech extraction and speech separation
The result's identifiers
Result code in IS VaVaI
RIV/00216305:26230/22:PU146144 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F22%3APU146144)
Result on the web
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9914718
DOI - Digital Object Identifier
10.1109/IWAENC53105.2022.9914718 (http://dx.doi.org/10.1109/IWAENC53105.2022.9914718)
Alternative languages
Result language
English
Original language name
Analysis of impact of emotions on target speech extraction and speech separation
Original language description
Recently, the performance of blind speech separation (BSS) and target speech extraction (TSE) has greatly progressed. Most works, however, focus on relatively well-controlled conditions using, e.g., read speech. The performance may degrade in more realistic situations. One of the factors causing such degradation may be intrinsic speaker variability, such as emotions, which occur commonly in realistic speech. In this paper, we investigate the influence of emotions on TSE and BSS. We create a new test dataset of emotional mixtures for the evaluation of TSE and BSS. This dataset combines LibriSpeech and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Through controlled experiments, we can analyze the impact of different emotions on the performance of BSS and TSE. We observe that BSS is relatively robust to emotions, while TSE, which requires identifying and extracting the speech of a target speaker, is much more sensitive to emotions. In comparative speaker verification experiments, we show that identifying the target speaker may be particularly challenging when dealing with emotional speech. Using our findings, we outline potential future directions that could improve the robustness of BSS and TSE systems toward emotional speech.
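The abstract describes building evaluation mixtures by combining LibriSpeech and RAVDESS utterances. The following is a minimal, hypothetical sketch of how such a two-speaker mixture could be created at a chosen signal-to-interference ratio (SIR); the file paths, the shared sampling rate, and the 0 dB SIR are illustrative assumptions, not the paper's exact recipe.

```python
# Hypothetical sketch: mix one LibriSpeech utterance (target) with one
# RAVDESS utterance (emotional interferer) into a two-speaker mixture.
# Assumes both files have already been resampled to the same rate.
import numpy as np
import soundfile as sf

def mix_pair(target_path, interferer_path, sir_db=0.0, out_path="mixture.wav"):
    target, sr_t = sf.read(target_path)
    interf, sr_i = sf.read(interferer_path)
    assert sr_t == sr_i, "resample both utterances to a common rate first"

    # Trim or zero-pad the interferer to the target length.
    n = len(target)
    if len(interf) < n:
        interf = np.pad(interf, (0, n - len(interf)))
    else:
        interf = interf[:n]

    # Scale the interferer so the mixture has the requested SIR.
    p_t = np.mean(target ** 2)
    p_i = np.mean(interf ** 2) + 1e-12
    gain = np.sqrt(p_t / (p_i * 10 ** (sir_db / 10)))
    mixture = target + gain * interf

    # Normalize only if the mixture would clip on write-out.
    peak = np.max(np.abs(mixture))
    if peak > 1.0:
        mixture = mixture / peak
    sf.write(out_path, mixture, sr_t)
    return out_path

# Example usage (placeholder paths):
# mix_pair("librispeech_utt.wav", "ravdess_angry_utt.wav", sir_db=0.0)
```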
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
LTAIN19087: Multi-linguality in speech technologies
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Publication year
2022
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of The 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022)
ISBN
978-1-6654-6867-1
ISSN
—
e-ISSN
—
Number of pages
5
Pages from-to
1-5
Publisher name
IEEE Signal Processing Society
Place of publication
Bamberg
Event location
Bamberg
Event date
Sep 5, 2022
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—