Application use cases and traces
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27740%2F21%3A10252004" target="_blank" >RIV/61989100:27740/21:10252004 - isvavai.cz</a>
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
Application use cases and traces
Result description in original language
In this second deliverable of Work Package 1, we describe the effort to build a comprehensive benchmarking suite for the IO-SEA project. In Part I, we describe the Jülich Benchmarking Environment (JUBE), which we use to design an automated benchmarking workflow, including organising and archiving results in a remote GitLab repository. In Part II, we describe a series of synthetic diagnostic benchmarks. We include these in order to maintain a picture of the health and performance of the underlying compute platform. This allows us to more accurately attribute any changes in use-case performance to system health, to code changes, or to new IO-SEA technology developments. Finally, in Part III, we detail the use-case benchmarks. In the previous deliverable, D1.1, we identified how the workflow of each IO-SEA scientific use case will take advantage of technology developed as part of the IO-SEA solution. Here we describe benchmarks specifically targeting use-case steps where we expect these solutions to provide performance enhancements.
Title in English
Application use cases and traces
Result description in English
In this second deliverable of Work Package 1, we describe the effort to build a comprehensive benchmarking suite for the IO-SEA project. In Part I, we describe the Jülich Benchmarking Environment (JUBE), which we use to design an automated benchmarking workflow, including organising and archiving results in a remote GitLab repository. In Part II, we describe a series of synthetic diagnostic benchmarks. We include these in order to maintain a picture of the health and performance of the underlying compute platform. This allows us to more accurately attribute any changes in use-case performance to system health, to code changes, or to new IO-SEA technology developments. Finally, in Part III, we detail the use-case benchmarks. In the previous deliverable, D1.1, we identified how the workflow of each IO-SEA scientific use case will take advantage of technology developed as part of the IO-SEA solution. Here we describe benchmarks specifically targeting use-case steps where we expect these solutions to provide performance enhancements.
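The automated workflow described above is driven by JUBE configuration files. The following is a minimal illustrative sketch of a JUBE XML benchmark definition, not taken from the deliverable itself: the benchmark name, node counts, launch command, and the output regex are all assumptions chosen for illustration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <!-- Hypothetical I/O benchmark definition; names and commands are illustrative -->
  <benchmark name="io_example" outpath="bench_run">
    <!-- Parameter space: JUBE expands one run per value -->
    <parameterset name="runparams">
      <parameter name="nodes">1,2,4</parameter>
    </parameterset>
    <!-- Execution step: assumed Slurm launch of an assumed binary -->
    <step name="execute">
      <use>runparams</use>
      <do>srun -N $nodes ./io_bench</do>
    </step>
    <!-- Pattern to harvest a bandwidth figure from stdout (regex is an assumption) -->
    <patternset name="pats">
      <pattern name="bw_mib" type="float">Bandwidth:\s+$jube_pat_fp MiB/s</pattern>
    </patternset>
    <!-- Analysis: apply the patterns to each step's output -->
    <analyser name="analyse">
      <use>pats</use>
      <analyse step="execute">
        <file>stdout</file>
      </analyse>
    </analyser>
    <!-- Result table, suitable for archiving (e.g. committing to a GitLab repository) -->
    <result>
      <use>analyse</use>
      <table name="perf" style="csv">
        <column>nodes</column>
        <column>bw_mib</column>
      </table>
    </result>
  </benchmark>
</jube>
```

Under this sketch, `jube run config.xml` would expand the parameter space, run each configuration, and `jube result` would emit the CSV table that can then be archived remotely, as the deliverable describes.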
Classification
Type
V<sub>utaj</sub> - Research report containing classified information
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
V - Research activity supported from other public sources
Others
Year of application
2021
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Number of pages
54
Place of publication
not specified
Name of publisher or commissioning body
EuroHPC
Version
—