Planning of distributed data production for High Energy and Nuclear Physics
Result identifiers
Result code in IS VaVaI
RIV/00216224:14330/18:00100898 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F18%3A00100898)
Alternative codes found
RIV/61389005:_____/18:00500205
Result on the web
http://dx.doi.org/10.1007/s10586-018-2834-3
DOI - Digital Object Identifier
10.1007/s10586-018-2834-3
Alternative languages
Result language
English
Title in the original language
Planning of distributed data production for High Energy and Nuclear Physics
Description in the original language
Modern experiments in High Energy and Nuclear Physics rely heavily on distributed computations using multiple computational facilities across the world. One of the essential types of these computations is distributed data production, where petabytes of raw files from a single source have to be processed once (per production campaign) using thousands of CPUs at distant locations, and the output has to be transferred back to that source. The data distribution over a large system does not necessarily match the distribution of storage, network and CPU capacity. Therefore, bottlenecks may appear and lead to increased latency and degraded performance. In this paper we propose a new scheduling approach for distributed data production that is based on a network flow maximization model. In our approach, a central planner defines how much input and output data should be transferred over each network link in order to maximize the computational throughput. Such plans are created periodically for a fixed planning time interval using up-to-date information on network, storage and CPU resources. The centrally created plans are executed in a distributed manner by dedicated services running at the participating sites. Our simulations, based on the log records from the data production framework of the STAR experiment (Solenoid Tracker at RHIC), have shown that the proposed model systematically provides better performance than the simulated traditional techniques.
Title in English
Planning of distributed data production for High Energy and Nuclear Physics
Description in English
Modern experiments in High Energy and Nuclear Physics rely heavily on distributed computations using multiple computational facilities across the world. One of the essential types of these computations is distributed data production, where petabytes of raw files from a single source have to be processed once (per production campaign) using thousands of CPUs at distant locations, and the output has to be transferred back to that source. The data distribution over a large system does not necessarily match the distribution of storage, network and CPU capacity. Therefore, bottlenecks may appear and lead to increased latency and degraded performance. In this paper we propose a new scheduling approach for distributed data production that is based on a network flow maximization model. In our approach, a central planner defines how much input and output data should be transferred over each network link in order to maximize the computational throughput. Such plans are created periodically for a fixed planning time interval using up-to-date information on network, storage and CPU resources. The centrally created plans are executed in a distributed manner by dedicated services running at the participating sites. Our simulations, based on the log records from the data production framework of the STAR experiment (Solenoid Tracker at RHIC), have shown that the proposed model systematically provides better performance than the simulated traditional techniques.
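As a rough illustration of the flow-maximization idea summarized in the abstract (a minimal sketch only, not the planner described in the paper), the Python snippet below models one planning interval as a maximum-flow problem: link capacities bound how much input data can be shipped to each remote site, CPU capacities bound how much each site can process within the interval, and the resulting flow gives a per-link transfer plan. All site names, bandwidths and processing rates are illustrative assumptions, and the return of output data to the central storage is omitted for brevity.

# Minimal sketch (assumed setup, not the authors' implementation): plan one
# interval of distributed data production via maximum flow with networkx.
import networkx as nx

PLAN_INTERVAL_S = 3600  # length of one planning interval, in seconds

# Hypothetical resources: a central storage and two remote sites.
link_bw = {"site_A": 500, "site_B": 200}    # network bandwidth to each site, MB/s
cpu_rate = {"site_A": 300, "site_B": 250}   # aggregate processing rate, MB/s of input

G = nx.DiGraph()
for site, bw in link_bw.items():
    # Capacity = input data (MB) the link can move during one interval.
    G.add_edge("central_storage", site, capacity=bw * PLAN_INTERVAL_S)
    # Capacity = input data the site's CPUs can consume during one interval.
    G.add_edge(site, "processed", capacity=cpu_rate[site] * PLAN_INTERVAL_S)

# Maximize the amount of data processed in the interval; the per-edge flow is
# the transfer plan that the site services would execute.
max_mb, flow = nx.maximum_flow(G, "central_storage", "processed")
print(f"Plan processes {max_mb / 1e6:.2f} TB this interval")
for site in link_bw:
    print(f"  send {flow['central_storage'][site] / 1e6:.2f} TB to {site}")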
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Other
Year of implementation
2018
Data confidentiality code
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal name
Cluster Computing
ISSN
1386-7857
e-ISSN
1573-7543
Journal volume
21
Issue within the volume
4
Country of the journal publisher
US - United States of America
Number of pages
17
Pages from-to
1949-1965
UT WoS code of the article
000457276800012
Result EID in the Scopus database
—