Efficient Combination of Classifiers for 3D Action Recognition
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F21%3A00118857" target="_blank" >RIV/00216224:14330/21:00118857 - isvavai.cz</a>
Result on the web
<a href="https://link.springer.com/article/10.1007/s00530-021-00767-9" target="_blank" >https://link.springer.com/article/10.1007/s00530-021-00767-9</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/s00530-021-00767-9" target="_blank" >10.1007/s00530-021-00767-9</a>
Alternative languages
Result language
English
Original language name
Efficient Combination of Classifiers for 3D Action Recognition
Original language description
The popular task of 3D human action recognition is almost exclusively solved by training deep-learning classifiers. To achieve high recognition accuracy, input 3D actions are often pre-processed by various normalization or augmentation techniques. However, it is not computationally feasible to train a classifier for each possible variant of the training data in order to select the best-performing combination of pre-processing techniques for a given dataset. In this paper, we propose an evaluation procedure that determines the best combination in a very efficient way. In particular, we train only one independent classifier for each available pre-processing technique and estimate the accuracy of a specific combination by efficiently fusing the corresponding classification results based on a strict majority vote rule. In addition, for the best-ranked combination, we can retrospectively apply the normalized/augmented variants of the input data to train a single classifier. This makes it possible to decide whether it is generally better to train a single model, or rather a set of independent classifiers whose results are fused within the classification phase. We evaluate the approach on single-subject as well as person-interaction datasets of 3D skeleton sequences and on all combinations of up to 16 normalization and augmentation techniques, some of which are also proposed in this paper.
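The fusion idea described above — caching one set of predictions per pre-processing technique and then scoring every combination by strict majority vote, instead of retraining a classifier per combination — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation; the function names, the toy predictions, and the action labels are hypothetical.

```python
from collections import Counter
from itertools import combinations

def strict_majority_vote(labels):
    """Return the label predicted by a strict majority of classifiers, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count > len(labels) / 2 else None

def estimate_accuracy(predictions, truth, subset):
    """Estimate a combination's accuracy by fusing cached per-classifier predictions."""
    correct = sum(
        1 for i, t in enumerate(truth)
        if strict_majority_vote([predictions[c][i] for c in subset]) == t
    )
    return correct / len(truth)

# Cached predictions: one classifier per pre-processing technique (toy data)
predictions = {
    "normalize": ["walk", "run", "walk", "jump"],
    "rotate":    ["walk", "walk", "walk", "jump"],
    "scale":     ["run",  "run",  "walk", "jump"],
}
truth = ["walk", "run", "walk", "jump"]

# Rank every non-empty combination of classifiers without any retraining
best = max(
    (subset for r in range(1, len(predictions) + 1)
            for subset in combinations(predictions, r)),
    key=lambda s: estimate_accuracy(predictions, truth, s),
)
```

The key saving is that each classifier is trained once; evaluating all 2^n − 1 combinations then reduces to cheap vote counting over already-computed predictions.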
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10200 - Computer and information sciences
Result continuities
Project
<a href="/en/project/GA19-02033S" target="_blank" >GA19-02033S: Searching, Mining, and Annotating Human Motion Streams</a><br>
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Publication year
2021
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
Multimedia Systems
ISSN
0942-4962
e-ISSN
1432-1882
Volume of the periodical
27
Issue of the periodical within the volume
5
Country of publishing house
US - UNITED STATES
Number of pages
12
Pages from-to
941-952
UT code for WoS article
000628724200001
EID of the result in the Scopus database
2-s2.0-85102599920