A Speaker Independent Approach to the Classification of Emotional Vocal Expressions
The result's identifiers
Result code in IS VaVaI
RIV/00216305:26220/08:PU75604 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26220%2F08%3APU75604)
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
Czech
Original language name
A Speaker Independent Approach to the Classification of Emotional Vocal Expressions
Original language description
The paper proposes a speaker-independent procedure for classifying vocal expressions of emotion. The procedure splits the emotion recognition process into two steps. In the first step, a combination of selected acoustic features is used to classify six emotions with a Bayesian Gaussian Mixture Model (GMM) classifier. The two emotions that obtain the highest likelihood scores are then selected for further processing in order to discriminate between them. For this purpose, a unique set of high-level acoustic features was identified using the Sequential Floating Forward Selection (SFFS) algorithm, and a GMM was used to separate each pair of emotions. The mean classification rate is 81%, an improvement of 5% with respect to the most recent results obtained on the same database (75%).
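The following is a minimal sketch, in Python with scikit-learn, of the two-step scheme described above: one GMM per emotion ranks all six classes by log-likelihood, and the two best candidates are then re-scored by a pair-specific GMM trained on a reduced feature subset (selected, in the paper, by SFFS). The per-pair feature subsets, the emotion labels, the model sizes, and all names such as `train_data`, `pair_gmms`, and `pair_feature_idx` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder emotion labels; the paper classifies six emotions.
EMOTIONS = ["anger", "boredom", "disgust", "fear", "happiness", "sadness"]

def train_gmms(train_data, n_components=8):
    """Fit one diagonal-covariance GMM per emotion.
    train_data: dict mapping emotion -> array of shape (n_samples, n_features)."""
    gmms = {}
    for emotion, X in train_data.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(X)
        gmms[emotion] = gmm
    return gmms

def classify(x, gmms_step1, pair_gmms, pair_feature_idx):
    """Two-step decision for one utterance-level feature vector x."""
    x = np.atleast_2d(x)
    # Step 1: score the utterance under every emotion model and keep the
    # two emotions with the highest log-likelihood.
    scores = {e: g.score_samples(x).sum() for e, g in gmms_step1.items()}
    best_two = sorted(scores, key=scores.get, reverse=True)[:2]
    pair = tuple(sorted(best_two))
    # Step 2: re-score the two candidates with GMMs trained on the
    # feature subset selected for this particular pair (e.g. via SFFS).
    idx = pair_feature_idx[pair]
    x_pair = x[:, idx]
    pair_scores = {e: pair_gmms[pair][e].score_samples(x_pair).sum()
                   for e in pair}
    return max(pair_scores, key=pair_scores.get)
```

A complete system would also need the acoustic feature extraction front end and the SFFS search itself; scikit-learn's SequentialFeatureSelector offers plain sequential forward selection rather than the floating variant, so that step is only hinted at here.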
Czech name
A Speaker Independent Approach to the Classification of Emotional Vocal Expressions
Czech description
The paper proposes a speaker-independent procedure for classifying vocal expressions of emotion. The procedure splits the emotion recognition process into two steps. In the first step, a combination of selected acoustic features is used to classify six emotions with a Bayesian Gaussian Mixture Model (GMM) classifier. The two emotions that obtain the highest likelihood scores are then selected for further processing in order to discriminate between them. For this purpose, a unique set of high-level acoustic features was identified using the Sequential Floating Forward Selection (SFFS) algorithm, and a GMM was used to separate each pair of emotions. The mean classification rate is 81%, an improvement of 5% with respect to the most recent results obtained on the same database (75%).
Classification
Type
D - Article in proceedings
CEP classification
JA - Electronics and optoelectronics
OECD FORD branch
—
Result continuities
Project
GA102/07/1303: Non-linear methods of speech enhancement
Continuities
Z - Research plan (with a link to CEZ)
Others
Publication year
2008
Confidentiality
S - Complete and truthful data about the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of the Twentieth International Conference on Tools with Artificial Intelligence, ICTAI 2008
ISBN
978-0-7695-3440-4
ISSN
—
e-ISSN
—
Number of pages
5
Pages from-to
—
Publisher name
IEEE Computer Society
Place of publication
Dayton, Ohio, USA
Event location
Dayton, Ohio
Event date
Nov 3, 2008
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—