ADAM & RAL: Adaptive Memory Learning and Reinforcement Active Learning for Network Monitoring
The result's identifiers
Result code in IS VaVaI
RIV/68407700:21230/19:00338616
Result on the web
https://doi.org/10.23919/CNSM46954.2019.9012675
DOI - Digital Object Identifier
10.23919/CNSM46954.2019.9012675
Alternative languages
Result language
English
Original language name
ADAM & RAL: Adaptive Memory Learning and Reinforcement Active Learning for Network Monitoring
Original language description
Network-traffic data commonly arrives in the form of fast data streams; online network-monitoring systems continuously analyze these kinds of streams, sequentially collecting measurements over time. Continuous and dynamic learning is an effective strategy when operating in these fast and dynamic environments, where concept drifts constantly occur. In this paper, we propose different approaches to stream-based machine learning, able to analyze network-traffic streams on the fly using supervised learning techniques. We address two major challenges associated with stream-based machine learning and online network monitoring: (i) how to dynamically learn from and adapt to non-stationary data and patterns changing over time, and (ii) how to deal with the limited availability of ground truth or labeled data to continuously tune a supervised learning model. We introduce ADAM & RAL, two stream-based machine-learning approaches to tackle these challenges. ADAM implements multiple stream-based machine-learning models and relies on an adaptive memory strategy to dynamically adapt the size of the system's learning memory to the most recent data distribution, triggering new learning steps when concept drifts are detected. RAL implements a stream-based active-learning strategy to reduce the amount of labeled data needed for stream-based learning, dynamically deciding which are the most informative samples to integrate into the continuous learning scheme. Using a reinforcement learning loop, RAL improves prediction performance by additionally learning from the goodness of its previous sample-selection decisions. We focus on a particularly challenging problem in network monitoring: continuously tuning detection models able to recognize network attacks over time.
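The description above is concrete enough to sketch the two ideas at a very high level. The Python fragment below is a minimal, hypothetical illustration only and is not the authors' implementation: names such as AdaptiveMemoryLearner, drift_drop, and ral_should_query are invented for the example, the drift detector is a simple rolling-accuracy drop rather than the detector used in the paper, and scikit-learn's SGDClassifier stands in for whichever stream-based models ADAM actually manages.

# Illustrative sketch only; all class, function, and parameter names are assumptions,
# not taken from the ADAM & RAL paper or its code.
from collections import deque
import numpy as np
from sklearn.linear_model import SGDClassifier


class AdaptiveMemoryLearner:
    """ADAM-like idea: sliding learning memory plus drift-triggered retraining."""

    def __init__(self, max_memory=2000, drift_drop=0.15, window=200):
        self.memory_X = deque(maxlen=max_memory)   # adaptive memory of recent samples
        self.memory_y = deque(maxlen=max_memory)
        self.recent_hits = deque(maxlen=window)    # rolling prediction accuracy
        self.baseline_acc = None
        self.drift_drop = drift_drop
        self.classes = np.array([0, 1])            # assumed binary benign/attack labels
        self.model = SGDClassifier(loss="log_loss")

    def observe(self, x, y):
        """Consume one labeled sample from the stream and update the model."""
        x = np.asarray(x).reshape(1, -1)
        if hasattr(self.model, "coef_"):
            self.recent_hits.append(int(self.model.predict(x)[0] == y))
        self.memory_X.append(x.ravel())
        self.memory_y.append(y)
        self.model.partial_fit(x, [y], classes=self.classes)
        self._check_drift()

    def _check_drift(self):
        if len(self.recent_hits) < self.recent_hits.maxlen:
            return
        acc = float(np.mean(self.recent_hits))
        if self.baseline_acc is None:
            self.baseline_acc = acc
        elif acc < self.baseline_acc - self.drift_drop:
            # Drift detected: shrink the memory to recent data and retrain from scratch.
            keep = max(1, len(self.memory_X) // 4)
            X = np.array(list(self.memory_X)[-keep:])
            y = np.array(list(self.memory_y)[-keep:])
            self.model = SGDClassifier(loss="log_loss")
            self.model.partial_fit(X, y, classes=self.classes)
            self.baseline_acc = acc
            self.recent_hits = deque(maxlen=self.recent_hits.maxlen)


def ral_should_query(model, x, threshold):
    """RAL-like idea: request a label only when the model is uncertain about x."""
    proba = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    return (1.0 - proba.max()) > threshold

In this sketch, the reinforcement component of RAL would additionally raise or lower threshold depending on whether previously queried labels improved the rolling accuracy; that feedback loop is only hinted at here.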
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
20202 - Communication engineering and systems
Result continuities
Project
—
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Publication year
2019
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
15th International Conference on Network and Service Management (CNSM)
ISBN
978-3-903176-24-9
ISSN
—
e-ISSN
—
Number of pages
9
Pages from-to
—
Publisher name
IEEE
Place of publication
St. Paul, Minnesota
Event location
Halifax
Event date
Oct 21, 2019
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
000552229800017