Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network
Result identifiers
Result code in IS VaVaI
RIV/67985556:_____/20:00534541 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985556%3A_____%2F20%3A00534541)
Result on the web
http://dx.doi.org/10.1007/978-3-030-58526-6_31
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/978-3-030-58526-6_31" target="_blank" >10.1007/978-3-030-58526-6_31</a>
Alternative languages
Result language
English
Title in the original language
Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network
Description in the original language
Most state-of-the-art deep neural networks are overparameterized and exhibit a high computational cost. A straightforward approach to this problem is to replace convolutional kernels with their low-rank tensor approximations, where the Canonical Polyadic tensor decomposition is one of the most suitable models. However, fitting convolutional tensors with numerical optimization algorithms often encounters diverging components, i.e., extremely large rank-one tensors that cancel each other. Such degeneracy often causes non-interpretable results and numerical instability during neural network fine-tuning. This paper is the first study of degeneracy in the tensor decomposition of convolutional kernels. We present a novel method that stabilizes the low-rank approximation of convolutional kernels and ensures efficient compression while preserving the high performance of the neural networks. We evaluate our approach on popular CNN architectures for image classification and show that our method results in much lower accuracy degradation and provides consistent performance.
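For readability, here is the Canonical Polyadic model the description refers to, written out for an order-4 convolutional kernel; the notation is chosen for illustration and is not taken from the paper:

```latex
% CP approximation of an order-4 convolutional kernel (illustrative notation):
% \mathcal{K} has modes (output channels, input channels, kernel height, kernel width).
\mathcal{K} \;\approx\; \sum_{r=1}^{R} \lambda_r\, \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r \circ \mathbf{d}_r
% Degeneracy: several rank-one terms grow without bound,
% \|\lambda_r\, \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r \circ \mathbf{d}_r\| \to \infty,
% while nearly canceling each other, so the overall approximation error stays small
% but the factors become numerically unstable for fine-tuning.
```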
Title in English
Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network
Description in English
Most state-of-the-art deep neural networks are overparameterized and exhibit a high computational cost. A straightforward approach to this problem is to replace convolutional kernels with their low-rank tensor approximations, where the Canonical Polyadic tensor decomposition is one of the most suitable models. However, fitting convolutional tensors with numerical optimization algorithms often encounters diverging components, i.e., extremely large rank-one tensors that cancel each other. Such degeneracy often causes non-interpretable results and numerical instability during neural network fine-tuning. This paper is the first study of degeneracy in the tensor decomposition of convolutional kernels. We present a novel method that stabilizes the low-rank approximation of convolutional kernels and ensures efficient compression while preserving the high performance of the neural networks. We evaluate our approach on popular CNN architectures for image classification and show that our method results in much lower accuracy degradation and provides consistent performance.
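To illustrate the kernel-compression idea the description outlines, here is a minimal sketch of a CP decomposition applied to a toy 4-D convolutional kernel. It assumes the TensorLy library (version 0.5 or later); the rank and kernel shape are arbitrary, and the paper's stabilization method itself is not reproduced here.

```python
# Minimal sketch (not the paper's method): CP decomposition of a toy 4-D
# convolutional kernel with TensorLy, plus the relative approximation error.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Toy kernel: (out_channels, in_channels, kernel_height, kernel_width)
kernel = np.random.randn(64, 32, 3, 3)

rank = 16  # assumed compression rank; real choices trade accuracy vs. speed
cp = parafac(tl.tensor(kernel), rank=rank, n_iter_max=500, init="svd")

# Reconstruct the kernel from its rank-one terms and measure the fit
approx = tl.cp_to_tensor(cp)
rel_err = np.linalg.norm(kernel - approx) / np.linalg.norm(kernel)
print(f"relative reconstruction error: {rel_err:.4f}")

# Each factor matrix has `rank` columns; in CP-based compression schemes they
# are mapped to a chain of small convolutions, which is where the parameter
# and compute savings come from.
weights, factors = cp
for name, f in zip(("out-channel", "in-channel", "height", "width"), factors):
    print(name, f.shape)
```

In a compressed network, the degeneracy mentioned in the description shows up as a few rank-one terms with very large norms that nearly cancel each other, which destabilizes subsequent fine-tuning; that is the failure mode the paper's stabilization targets.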
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
20201 - Electrical and electronic engineering
Result continuities
Project
—
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of implementation
2020
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings name
ECCV 2020
ISBN
978-3-030-58525-9
ISSN
0302-9743
e-ISSN
1611-3349
Number of pages
18
Pages from-to
522-539
Publisher name
Springer Nature Switzerland AG
Place of publication
Cham
Event location
Glasgow
Event date
23. 8. 2020
Event type by nationality
WRD - Worldwide event
UT WoS article code
—