
Prediction of Inference Energy on CNN Accelerators Supporting Approximate Circuits

Result identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F23%3APU148180" target="_blank" >RIV/00216305:26230/23:PU148180 - isvavai.cz</a>

  • Result on the web

    <a href="https://ieeexplore.ieee.org/document/10139724" target="_blank" >https://ieeexplore.ieee.org/document/10139724</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/DDECS57882.2023.10139724" target="_blank" >10.1109/DDECS57882.2023.10139724</a>

Alternative languages

  • Result language

    English

  • Original language name

    Prediction of Inference Energy on CNN Accelerators Supporting Approximate Circuits

  • Original language description

    Design methodologies developed for optimizing hardware implementations of convolutional neural networks (CNNs), or for searching for new hardware-aware neural architectures, rely on fast and reliable estimation of key hardware parameters, such as the energy needed for one inference. Utilizing approximate circuits in hardware accelerators of CNNs confronts designers with new problems during simulation: commonly used tools (TimeLoop, Accelergy, Maestro) do not support approximate arithmetic operations. This work addresses the fast and efficient prediction of the energy consumed by hardware accelerators of CNNs that utilize approximate circuits such as approximate multipliers. First, we extend the state-of-the-art software frameworks TimeLoop and Accelergy to predict the inference energy when exact multipliers are replaced with various approximate implementations. The energies obtained using the modified tools are then considered the ground-truth (reference) values. Then, using two accelerators (Eyeriss and Simba) and two types of networks (CNNs generated by EvoApproxNAS and standard ResNet CNNs), we propose and evaluate two predictors of inference energy. We conclude that a simple predictor based on summing the energies needed for all multiplications correlates highly with the reference values if the CNN's architecture is fixed. For complex CNNs with variable architectures, typically generated by neural architecture search algorithms, a more sophisticated predictor based on a machine learning model has to be employed. The proposed predictors are 420-533× faster than the reference solutions.
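The simple predictor described in the abstract sums the energies of all multiplications in the network. A minimal sketch of that idea is shown below; the layer shapes, the per-multiplication energy value, and the function names are illustrative assumptions, not taken from the paper or its tools:

```python
# Sketch of the "sum of multiplication energies" predictor described above.
# Assumption: total inference energy is approximated as
#   (number of multiplications across all conv layers) x (energy per multiply).
# All numeric values here are hypothetical.

def conv_mult_count(out_h, out_w, out_c, in_c, k_h, k_w):
    """Multiplications in one convolutional layer: one per MAC operation."""
    return out_h * out_w * out_c * in_c * k_h * k_w

def predict_inference_energy(layers, energy_per_mult_pj):
    """Sum multiplication energies over all layers (result in picojoules)."""
    return sum(conv_mult_count(**layer) for layer in layers) * energy_per_mult_pj

# Two hypothetical conv layers and a hypothetical approximate 8-bit
# multiplier consuming 0.2 pJ per multiplication.
layers = [
    dict(out_h=32, out_w=32, out_c=16, in_c=3, k_h=3, k_w=3),
    dict(out_h=16, out_w=16, out_c=32, in_c=16, k_h=3, k_w=3),
]
print(predict_inference_energy(layers, energy_per_mult_pj=0.2))
```

Such a predictor is cheap because it needs only the layer shapes and one energy number per multiplier variant, which is why, per the abstract, it works well only when the CNN architecture is fixed.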

  • Czech name

  • Czech description

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/GA22-02067S" target="_blank" >GA22-02067S: AppNeCo: Approximate Neurocomputing</a>

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and truthful data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    2023 26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems

  • ISBN

    979-8-3503-3277-3

  • ISSN

  • e-ISSN

  • Number of pages

    6

  • Pages from-to

    45-50

  • Publisher name

    Institute of Electrical and Electronics Engineers

  • Place of publication

    Tallinn

  • Event location

    Tallinn

  • Event date

    May 3, 2023

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article

    001012062000008