Experience of the WLCG data management system from the first two years of the LHC data taking
Result identifiers
Result code in IS VaVaI
RIV/61389005:_____/12:00388537 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61389005%3A_____%2F12%3A00388537)
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
Experience of the WLCG data management system from the first two years of the LHC data taking
Result description in the original language
The start-up of the Large Hadron Collider (LHC) at CERN, the world's most powerful particle accelerator, in November 2009 opened a new era in high energy physics. Ever since the first collisions, the LHC has been performing unexpectedly well for a new machine, gathering data at an astonishing rate. The four large LHC experiments, ALICE, ATLAS, CMS and LHCb, have been facing the challenge of recording, processing and giving access to tens of petabytes (1 PB = 1 million GB) of data produced during the proton-proton and heavy-ion collisions in the LHC. The production and analysis environments for the LHC experiments are provided by the distributed computing infrastructure managed and operated by a worldwide collaboration/project, the Worldwide LHC Computing Grid (WLCG). The computational Grid is the only way that the masses of data produced by the collider can be processed. The WLCG is a technological leap like the collider itself, and without it the project would quickly drown in its own data.
Title in English
Experience of the WLCG data management system from the first two years of the LHC data taking
Result description in English
The start-up of the Large Hadron Collider (LHC) at CERN, the world's most powerful particle accelerator, in November 2009 opened a new era in high energy physics. Ever since the first collisions, the LHC has been performing unexpectedly well for a new machine, gathering data at an astonishing rate. The four large LHC experiments, ALICE, ATLAS, CMS and LHCb, have been facing the challenge of recording, processing and giving access to tens of petabytes (1 PB = 1 million GB) of data produced during the proton-proton and heavy-ion collisions in the LHC. The production and analysis environments for the LHC experiments are provided by the distributed computing infrastructure managed and operated by a worldwide collaboration/project, the Worldwide LHC Computing Grid (WLCG). The computational Grid is the only way that the masses of data produced by the collider can be processed. The WLCG is a technological leap like the collider itself, and without it the project would quickly drown in its own data.
Classification
Type
Jx - Unclassified - Article in a professional periodical (Jimp, Jsc and Jost)
CEP field
BG - Nuclear, atomic and molecular physics, accelerators
OECD FORD field
—
Result linkages
Project
The result was created during the implementation of multiple projects. More information in the Projects tab.
Linkages
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of implementation
2012
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
Proceedings of Science
ISSN
1824-8039
e-ISSN
—
Volume of the periodical
5
Issue of the periodical within the volume
160
Country of the publisher of the periodical
IT - Italian Republic
Number of pages of the result
10
Pages from-to
1-10
UT WoS code of the article
—
EID of the result in the Scopus database
—