Benchmark of Data Preprocessing Methods for Imbalanced Classification
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00363255" target="_blank" >RIV/68407700:21230/22:00363255 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1109/BigData55660.2022.10021118" target="_blank" >https://doi.org/10.1109/BigData55660.2022.10021118</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/BigData55660.2022.10021118" target="_blank" >10.1109/BigData55660.2022.10021118</a>
Alternative languages
Result language
English
Original language name
Benchmark of Data Preprocessing Methods for Imbalanced Classification
Original language description
Severe class imbalance is one of the main conditions that make machine learning in cybersecurity difficult. A variety of dataset preprocessing methods have been introduced over the years. These methods modify the training dataset by oversampling, undersampling, or a combination of both to improve the predictive performance of classifiers trained on this dataset. Although these methods are occasionally used in cybersecurity, a comprehensive, unbiased benchmark comparing their performance over a variety of cybersecurity problems is missing. This paper presents a benchmark of 16 preprocessing methods on six cybersecurity datasets together with 17 public imbalanced datasets from other domains. We test the methods under multiple hyperparameter configurations and use an AutoML system to train classifiers on the preprocessed datasets, which reduces potential bias from specific hyperparameter or classifier choices. Special consideration is also given to evaluating the methods using appropriate performance measures that are good proxies for practical performance in real-world cybersecurity systems. The main findings of our study are: 1) Most of the time, a data preprocessing method that improves classification performance exists. 2) The baseline approach of doing nothing outperformed a large portion of methods in the benchmark. 3) Oversampling methods generally outperform undersampling methods. 4) The most significant performance gains are brought by the standard SMOTE algorithm, while more complicated methods provide mainly incremental improvements, often at the cost of worse computational performance.
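For illustration only, the following is a minimal sketch of the kind of preprocessing being benchmarked (oversampling with SMOTE versus undersampling), not the paper's actual pipeline, which evaluates 16 methods under multiple hyperparameter configurations with an AutoML system. It assumes the scikit-learn and imbalanced-learn packages and a synthetic stand-in dataset:

# Sketch: oversampling vs. undersampling an imbalanced dataset.
# Assumes scikit-learn and imbalanced-learn; the dataset is synthetic,
# not one of the cybersecurity datasets used in the paper.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic stand-in for a severely imbalanced dataset (~99:1 class ratio).
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
print("original class counts:", Counter(y))

# Oversampling: SMOTE synthesizes new minority-class samples.
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
print("after SMOTE:", Counter(y_over))

# Undersampling: randomly drop majority-class samples until classes balance.
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("after undersampling:", Counter(y_under))

A downstream classifier would then be trained on the resampled data; in the benchmark this step is handled by an AutoML system to avoid bias from a specific classifier choice.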
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Publication year
2022
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of the 2022 IEEE International Conference on Big Data
ISBN
978-1-6654-8045-1
ISSN
—
e-ISSN
—
Number of pages
10
Pages from-to
2970-2979
Publisher name
IEEE Xplore
Place of publication
—
Event location
Osaka
Event date
Dec 17, 2022
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—