Stealing and evading malware classifiers and antivirus at low false positive conditions
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F23%3A00365936" target="_blank" >RIV/68407700:21230/23:00365936 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1016/j.cose.2023.103192" target="_blank" >https://doi.org/10.1016/j.cose.2023.103192</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1016/j.cose.2023.103192" target="_blank" >10.1016/j.cose.2023.103192</a>
Alternative languages
Result language
English
Original language name
Stealing and evading malware classifiers and antivirus at low false positive conditions
Original language description
Model stealing attacks have been used successfully in many machine learning domains, but little is understood about how these attacks work against models that perform malware detection. Malware detection and, in general, security domains have unique conditions. In particular, there are very strong requirements for low false positive rates (FPR). Antivirus products (AVs) that use machine learning are very complex systems to steal, malware binaries continually change, and the whole environment is adversarial by nature. This study evaluates active learning model stealing attacks against publicly available stand-alone machine learning malware classifiers and also against antivirus products. The study proposes a new neural network architecture for surrogate models (dualFFNN) and a new model stealing attack that combines transfer and active learning for surrogate creation (FFNN-TL). We achieved good surrogates of the stand-alone classifiers with up to 99% agreement with the target models, using less than 4% of the original training dataset. Good surrogates of AV systems were also trained with up to 99% agreement and fewer than 4,000 queries. The study uses the best surrogates to generate adversarial malware to evade the target models, both stand-alone and AVs (with and without an internet connection). Results show that surrogate models can generate adversarial malware that evades the targets, but with a lower success rate than using the target models directly. Using surrogates, however, is still a good option, since using the AVs for malware generation is highly time-consuming and easily detected when the AVs are connected to the internet.
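For illustration, the sketch below shows a generic active-learning model-stealing loop of the kind the abstract describes: a black-box target is queried for labels only within a small budget, a surrogate network is fitted to the collected labels, and label agreement with the target is tracked on a held-out set. This is a minimal sketch under stated assumptions, not the paper's dualFFNN or FFNN-TL implementation; the synthetic data, the logistic-regression stand-in for the target, and helper names such as query_target and agreement are hypothetical.

```python
# Hypothetical sketch: uncertainty-based active learning against a label-only
# black-box target, with agreement as the stealing metric. Not the paper's code.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_target(n_features=64):
    # Stand-in for the black-box malware classifier (assumption: linear-ish oracle).
    X = rng.normal(size=(5000, n_features))
    w = rng.normal(size=n_features)
    y = (X @ w + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

def query_target(target, X):
    """Black-box query: returns labels only, as an AV or classifier API would."""
    return target.predict(X)

def agreement(surrogate, target, X):
    """Fraction of samples where surrogate and target labels match."""
    return float(np.mean(surrogate.predict(X) == query_target(target, X)))

target = make_target()
pool = rng.normal(size=(20000, 64))    # unlabeled pool available to the attacker
holdout = rng.normal(size=(2000, 64))  # held-out set for measuring agreement

# Seed round: random queries within a small per-round budget.
budget_per_round, rounds = 200, 10
idx = rng.choice(len(pool), size=budget_per_round, replace=False)
X_lab, y_lab = pool[idx], query_target(target, pool[idx])

surrogate = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
surrogate.fit(X_lab, y_lab)

for r in range(rounds):
    # Uncertainty sampling: query the pool points the surrogate is least sure about.
    probs = surrogate.predict_proba(pool)[:, 1]
    uncertain = np.argsort(np.abs(probs - 0.5))[:budget_per_round]
    X_new, y_new = pool[uncertain], query_target(target, pool[uncertain])
    X_lab = np.vstack([X_lab, X_new])
    y_lab = np.concatenate([y_lab, y_new])
    surrogate.fit(X_lab, y_lab)  # refit on all labels gathered so far
    print(f"round {r}: queries={len(y_lab)}, "
          f"agreement={agreement(surrogate, target, holdout):.3f}")
```

The design choice to query only hard labels within a fixed budget mirrors the constraint described in the abstract, where AV targets are expensive to query and easily detect high-volume probing.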
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/en/project/EF16_019%2F0000765" target="_blank" >EF16_019/0000765: Research Center for Informatics</a>
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Publication year
2023
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
Computers & Security
ISSN
0167-4048
e-ISSN
1872-6208
Volume of the periodical
129
Issue of the periodical within the volume
—
Country of publishing house
GB - UNITED KINGDOM
Number of pages
17
Pages from-to
1-17
UT code for WoS article
001042634700001
EID of the result in the Scopus database
2-s2.0-85150897082