DESCNet: Developing Efficient Scratchpad Memories for Capsule Network Hardware

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F21%3APU138926" target="_blank" >RIV/00216305:26230/21:PU138926 - isvavai.cz</a>

  • Result on the web

    <a href="https://ieeexplore.ieee.org/document/9222370" target="_blank" >https://ieeexplore.ieee.org/document/9222370</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/TCAD.2020.3030610" target="_blank" >10.1109/TCAD.2020.3030610</a>

Alternative languages

  • Result language

    English

  • Original language name

    DESCNet: Developing Efficient Scratchpad Memories for Capsule Network Hardware

  • Original language description

    Deep Neural Networks (DNNs) have been established as the state-of-the-art method for advanced machine learning applications. Capsule Networks (CapsNets), recently proposed by the Google Brain team, improve on the generalization ability of DNNs thanks to their multi-dimensional capsules and their preservation of the spatial relationships between different objects. However, they pose significantly high computational and memory requirements, making their energy-efficient inference a challenging task. This paper provides, for the first time, an in-depth analysis to highlight the design- and run-time challenges for the (on-chip scratchpad) memories deployed in hardware accelerators executing fast CapsNets inference. To enable an efficient design, we propose an application-specific memory architecture, called DESCNet, which minimizes the off-chip memory accesses while efficiently feeding the data to the hardware accelerator executing CapsNets inference. We analyze the corresponding on-chip memory requirement and leverage it to propose a methodology for exploring different scratchpad memory designs and their energy/area trade-offs. Afterwards, an application-specific power-gating technique for the on-chip scratchpad memory is employed to further reduce its energy consumption, depending upon the mapped dataflow of the CapsNet and the utilization across different operations of its processing. We integrated our DESCNet memory design, as well as another state-of-the-art memory design for comparison studies, with an open-source DNN accelerator executing Google's CapsNet model for the MNIST dataset. We also enhanced the design to execute the recent deep CapsNet model for the CIFAR10 dataset. Note: we use the same benchmarks and test conditions for which these CapsNets have been proposed and evaluated by their respective teams. The complete hardware is synthesized for a 32nm CMOS technology using the ASIC-design flow with Synopsys tools […]

    (An illustrative sketch of this design-space exploration is given below, after this list.)

  • Czech name

  • Czech description
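
The core of the abstract is a methodology for exploring scratchpad memory designs by their energy/area trade-offs, combined with power gating driven by the per-operation utilization of the mapped CapsNet dataflow. The Python sketch below illustrates only the shape of such an exploration loop: the configuration space, the utilization profile, and the energy/area cost models are invented placeholders for this example, not the paper's models, which come from detailed ASIC synthesis.

```python
# Illustrative sketch only: a toy design-space exploration over scratchpad
# configurations, in the spirit of the methodology described in the abstract.
# All sizes, utilization figures, and cost constants below are invented.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class ScratchpadConfig:
    size_kb: int   # total on-chip scratchpad capacity (hypothetical)
    banks: int     # independently power-gateable banks (hypothetical)

# Hypothetical fraction of the scratchpad that must stay powered during
# each CapsNet processing stage (stands in for the dataflow's utilization).
UTILIZATION = {"conv": 0.90, "primary_caps": 0.65, "dynamic_routing": 0.35}

def area_mm2(cfg: ScratchpadConfig) -> float:
    # Toy area model: linear in capacity, plus a small per-bank overhead.
    return 0.010 * cfg.size_kb + 0.002 * cfg.banks

def energy_mj(cfg: ScratchpadConfig, power_gated: bool = True) -> float:
    # Toy energy model: per-stage leakage scales with the powered capacity.
    # With power gating, only enough banks to hold the stage's working set
    # stay on; without it, the whole scratchpad leaks in every stage.
    leak_per_kb = 0.05  # invented constant
    total = 0.0
    for util in UTILIZATION.values():
        if power_gated:
            active_banks = math.ceil(util * cfg.banks)
            powered_kb = cfg.size_kb * active_banks / cfg.banks
        else:
            powered_kb = cfg.size_kb
        total += leak_per_kb * powered_kb
    return total

# Enumerate a small design space and report each energy/area trade-off.
configs = [ScratchpadConfig(s, b) for s in (64, 128, 256) for b in (1, 2, 4, 8)]
for cfg in configs:
    print(f"{cfg.size_kb:3d} KB x {cfg.banks} banks: "
          f"gated={energy_mj(cfg):5.2f} mJ, "
          f"ungated={energy_mj(cfg, power_gated=False):5.2f} mJ, "
          f"area={area_mm2(cfg):5.3f} mm^2")

# Example selection rule: minimum-energy design under an area budget.
budget_mm2 = 2.0  # hypothetical constraint
best = min((c for c in configs if area_mm2(c) <= budget_mm2), key=energy_mj)
print("chosen:", best)
```

More banks allow the gated energy to track utilization more closely at the cost of extra area, which is exactly the kind of trade-off such an exploration surfaces; the paper's actual evaluation replaces these toy formulas with synthesized-hardware estimates.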

Classification

  • Type

    J_imp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/GA19-10137S" target="_blank" >GA19-10137S: Designing and exploiting libraries of approximate circuits</a><br>

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2021

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS

  • ISSN

    0278-0070

  • e-ISSN

    1937-4151

  • Volume of the periodical

    40

  • Issue of the periodical within the volume

    9

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    14

  • Pages from-to

    1768-1781

  • UT code for WoS article

    000686757700007

  • EID of the result in the Scopus database

    2-s2.0-85092935204