Highly scalable FETI methods in ESPRESO
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27740%2F15%3A86099018" target="_blank" >RIV/61989100:27740/15:86099018 - isvavai.cz</a>
Alternative codes found
RIV/61989100:27230/15:86099018
Result on the web
<a href="http://dx.doi.org/10.4203/ccp.107.17" target="_blank" >http://dx.doi.org/10.4203/ccp.107.17</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.4203/ccp.107.17" target="_blank" >10.4203/ccp.107.17</a>
Alternative languages
Result language
English
Title in original language
Highly scalable FETI methods in ESPRESO
Result description in original language
This paper describes our new hybrid parallelization of the Finite Element Tearing and Interconnecting (FETI) method for multi-socket, multi-core computer clusters. This is an essential step in the development of our Hybrid FETI solver, where a small number of neighbouring subdomains is aggregated into clusters and each cluster is processed by a single compute node. In our previous work the FETI solver was implemented in our ESPRESO solver using MPI parallelization. The proposed hybrid implementation provides better utilization of the resources of modern HPC machines by using advanced shared-memory runtime systems such as the Cilk++ runtime. Cilk++ is an alternative to OpenMP, which ESPRESO uses for shared-memory parallelization. We have compared the performance of the hybrid parallelization to the MPI-only parallelization. The results show that both solver runtime and memory utilization are reduced. This allows the solver to use a larger number of smaller subdomains and to solve larger problems on a limited number of compute nodes, a feature that is essential for users with smaller computer clusters.
Title in English
Highly scalable FETI methods in ESPRESO
Result description in English
This paper describes our new hybrid parallelization of the Finite Element Tearing and Interconnecting (FETI) method for multi-socket, multi-core computer clusters. This is an essential step in the development of our Hybrid FETI solver, where a small number of neighbouring subdomains is aggregated into clusters and each cluster is processed by a single compute node. In our previous work the FETI solver was implemented in our ESPRESO solver using MPI parallelization. The proposed hybrid implementation provides better utilization of the resources of modern HPC machines by using advanced shared-memory runtime systems such as the Cilk++ runtime. Cilk++ is an alternative to OpenMP, which ESPRESO uses for shared-memory parallelization. We have compared the performance of the hybrid parallelization to the MPI-only parallelization. The results show that both solver runtime and memory utilization are reduced. This allows the solver to use a larger number of smaller subdomains and to solve larger problems on a limited number of compute nodes, a feature that is essential for users with smaller computer clusters.
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of implementation
2015
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Civil-Comp proceedings. Volume 107
ISBN
978-1-905088-62-1
ISSN
1759-3433
e-ISSN
not stated
Number of pages
13
Pages from-to
"unpaginated"
Publisher name
Civil-Comp Press
Place of publication
Kippen, Stirlingshire
Event location
Dubrovnik
Event date
24 March 2015
Event type by nationality
WRD - Worldwide event
UT WoS article code
—