Data Parallelism: How to Train Deep Learning Models on Multiple GPUs
Result identifiers
Result code in IS VaVaI
RIV/61989100:27740/23:10253673 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27740%2F23%3A10253673)
Result on the web
https://events.it4i.cz/event/195/
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
Data Parallelism: How to Train Deep Learning Models on Multiple GPUs
Result description in the original language
Modern deep learning challenges involve increasingly large datasets and more complex models, so significant computational power is required to train models effectively and efficiently. Learning to distribute data across multiple GPUs during training opens up a wide range of new deep learning applications. Using multi-GPU systems effectively also reduces training time, allowing for faster application development and much shorter iteration cycles. Teams that can train with multiple GPUs have an edge, building models trained on more data in less time and with greater engineering productivity. This workshop taught techniques for data-parallel deep learning training on multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows for neural network training, participants learned how to decrease model training time by distributing data across multiple GPUs while retaining the accuracy of training on a single GPU.
Title in English
Data Parallelism: How to Train Deep Learning Models on Multiple GPUs
Result description in English
Modern deep learning challenges involve increasingly large datasets and more complex models, so significant computational power is required to train models effectively and efficiently. Learning to distribute data across multiple GPUs during training opens up a wide range of new deep learning applications. Using multi-GPU systems effectively also reduces training time, allowing for faster application development and much shorter iteration cycles. Teams that can train with multiple GPUs have an edge, building models trained on more data in less time and with greater engineering productivity. This workshop taught techniques for data-parallel deep learning training on multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows for neural network training, participants learned how to decrease model training time by distributing data across multiple GPUs while retaining the accuracy of training on a single GPU.
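As a rough illustration of the data-parallel approach described above, the sketch below shows single-node multi-GPU training with PyTorch DistributedDataParallel. The framework choice, toy model, dataset, and hyperparameters are illustrative assumptions and are not taken from the workshop materials.

# Minimal data-parallel training sketch (assumes PyTorch with NCCL and a multi-GPU node;
# the model and data are placeholders, not workshop code).
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # One process per GPU; torchrun provides RANK, LOCAL_RANK, and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy dataset; DistributedSampler gives each replica a different shard.
    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    # Wrapping the model makes gradients get all-reduced across GPUs on backward().
    model = DDP(nn.Linear(32, 1).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradient all-reduce happens here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched, for example, with torchrun --nproc_per_node=4 train_ddp.py (the file name is hypothetical): torchrun starts one process per GPU and sets the environment variables the script reads. Each process trains on its own shard of the data and gradients are averaged across GPUs during the backward pass, so training time drops while the model replicas stay synchronized.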
Classification
Type
O - Other results
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
MC2301: National Competence Centres in the framework of EuroHPC Phase 2 - EUROCC 2
Linkages
P - Research and development project financed from public sources (with a link to CEP)
Other
Year of implementation
2023
Data confidentiality code
S - Complete and accurate data on the project are not subject to protection under special legal regulations