A review of in-memory computing for machine learning: architectures, options
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27240%2F24%3A10253776" target="_blank" >RIV/61989100:27240/24:10253776 - isvavai.cz</a>
Result on the web
<a href="https://www.webofscience.com/wos/woscc/full-record/WOS:001127942300001" target="_blank" >https://www.webofscience.com/wos/woscc/full-record/WOS:001127942300001</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1108/IJWIS-08-2023-0131" target="_blank" >10.1108/IJWIS-08-2023-0131</a>
Alternative languages
Result language
English
Original language name
A review of in-memory computing for machine learning: architectures, options
Original language description
Purpose: This paper aims to review in-memory computing (IMC) for machine learning (ML) applications in terms of its history, architectures and options. In this review, the authors investigate different architectural aspects and collect and provide their comparative evaluations. Design/methodology/approach: The authors collected over 40 recent IMC papers related to hardware design and optimization techniques and classified them into three optimization option categories: optimization through graphic processing unit (GPU), optimization through reduced precision and optimization through hardware accelerator. The authors then summarize those techniques in terms of the data sets they were applied to, how each design works and what the design contributes. Findings: ML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the overhead of supporting excessive flexibility. Hardware accelerators (field programmable gate arrays and application-specific integrated circuits), on the other hand, win on energy efficiency, but an individual accelerator is often tailored exclusively to a single ML approach (family). From a long-term hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers. Originality/value: IMC optimization enables high-speed processing, increases performance and allows massive volumes of data to be analyzed in real time. This work reviews IMC and its evolution, and the authors categorize three optimization paths for the IMC architecture to improve performance metrics. (C) 2023, Emerald Publishing Limited.
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10200 - Computer and information sciences
Result continuities
Project
<a href="/en/project/LTAIN19176" target="_blank" >LTAIN19176: Metaheuristics Framework for Multi-objective Combinatorial Optimization Problems (META MO-COP)</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2024
Confidentiality
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
International Journal of Web Information Systems
ISSN
1744-0084
e-ISSN
1744-0092
Volume of the periodical
Not specified
Issue of the periodical within the volume
December 2023
Country of publishing house
GB - UNITED KINGDOM
Number of pages
24
Pages from-to
—
UT code for WoS article
001127942300001
EID of the result in the Scopus database
2-s2.0-85180242489