A review of in-memory computing for machine learning: architectures, options
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27240%2F24%3A10253776" target="_blank" >RIV/61989100:27240/24:10253776 - isvavai.cz</a>
Result on the web
<a href="https://www.webofscience.com/wos/woscc/full-record/WOS:001127942300001" target="_blank" >https://www.webofscience.com/wos/woscc/full-record/WOS:001127942300001</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1108/IJWIS-08-2023-0131" target="_blank" >10.1108/IJWIS-08-2023-0131</a>
Alternative languages
Result language
English
Title in original language
A review of in-memory computing for machine learning: architectures, options
Description in original language
Purpose: This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the perspectives of history, architectures and options. In this review, the authors investigate different architectural aspects and collect and provide their comparative evaluations. Design/methodology/approach: The authors collected over 40 recent IMC papers related to hardware design and optimization techniques and classified them into three optimization categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data set it is applied to, how it is designed and what the design contributes. Findings: ML algorithms are potent tools that can be accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the excessive flexibility it must support. On the other hand, hardware accelerators (field-programmable gate arrays and application-specific integrated circuits) win on energy efficiency, but an individual accelerator is often adapted exclusively to a single ML approach (family). From a long-term hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers. Originality/value: IMC optimization enables high-speed processing, increases performance and allows massive volumes of data to be analyzed in real time. This work reviews IMC and its evolution. The authors then categorize three optimization paths for IMC architectures to improve performance metrics. (C) 2023, Emerald Publishing Limited.
Title in English
A review of in-memory computing for machine learning: architectures, options
Description in English
Purpose: This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the perspectives of history, architectures and options. In this review, the authors investigate different architectural aspects and collect and provide their comparative evaluations. Design/methodology/approach: The authors collected over 40 recent IMC papers related to hardware design and optimization techniques and classified them into three optimization categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data set it is applied to, how it is designed and what the design contributes. Findings: ML algorithms are potent tools that can be accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the excessive flexibility it must support. On the other hand, hardware accelerators (field-programmable gate arrays and application-specific integrated circuits) win on energy efficiency, but an individual accelerator is often adapted exclusively to a single ML approach (family). From a long-term hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers. Originality/value: IMC optimization enables high-speed processing, increases performance and allows massive volumes of data to be analyzed in real time. This work reviews IMC and its evolution. The authors then categorize three optimization paths for IMC architectures to improve performance metrics. (C) 2023, Emerald Publishing Limited.
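Note: the abstract names reduced precision as one of the three optimization paths. As a purely illustrative sketch (not taken from the reviewed paper), the snippet below shows what reduced-precision computation means in practice: storing weights and activations in a narrower format (float16 instead of float32) to cut memory traffic, at the cost of a small numerical error.
<pre>
# Illustrative sketch only (not from the reviewed paper): reduced-precision
# computation stores operands in a narrower format (float16 vs float32),
# roughly halving weight memory in exchange for a small accuracy loss.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 512)).astype(np.float32)    # input activations
w = rng.standard_normal((512, 256)).astype(np.float32)  # layer weights

y_fp32 = x @ w                                           # full-precision reference
y_fp16 = x.astype(np.float16) @ w.astype(np.float16)     # reduced-precision result

print("weight memory: fp32 =", w.nbytes, "bytes, fp16 =", w.astype(np.float16).nbytes, "bytes")
print("max |error| vs fp32:", np.max(np.abs(y_fp32 - y_fp16.astype(np.float32))))
</pre>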
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10200 - Computer and information sciences
Result linkages
Project
<a href="/cs/project/LTAIN19176" target="_blank" >LTAIN19176: Metaheuristic framework for multi-objective combinatorial optimization problems (META MO-COP)</a>
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
International Journal of Web Information Systems
ISSN
1744-0084
e-ISSN
1744-0092
Volume of the periodical
Not stated
Issue of the periodical within the volume
December 2023
Country of the publisher of the periodical
GB - United Kingdom of Great Britain and Northern Ireland
Number of pages of the result
24
Pages from-to
—
UT WoS code of the article
001127942300001
EID of the result in the Scopus database
2-s2.0-85180242489