Tensor Networks for Latent Variable Analysis: Novel Algorithms for Tensor Train Approximation
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985556%3A_____%2F20%3A00518308" target="_blank" >RIV/67985556:_____/20:00518308 - isvavai.cz</a>
Result on the web
<a href="https://ieeexplore.ieee.org/document/8984730" target="_blank" >https://ieeexplore.ieee.org/document/8984730</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/TNNLS.2019.2956926" target="_blank" >10.1109/TNNLS.2019.2956926</a>
Alternative languages
Result language
English
Original language name
Tensor Networks for Latent Variable Analysis: Novel Algorithms for Tensor Train Approximation
Original language description
Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model that represents data as an ordered network of subtensors of order 2 or order 3 has, so far, not been widely considered in these fields, although this so-called tensor network (TN) decomposition has long been studied in quantum physics and scientific computing. In this article, we present novel algorithms and applications of TN decompositions, with a particular focus on the tensor train (TT) decomposition and its variants. The novel algorithms developed for the TT decomposition update, in an alternating fashion, one or several core tensors at each iteration and exhibit enhanced mathematical tractability and scalability for large-scale data tensors. For rigor, the cases of given ranks, a given approximation error, and a given error bound are all considered. The proposed algorithms provide well-balanced TT decompositions and are tested in the classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, achieving superior performance over the widely used truncated algorithms for TT decomposition.
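The "widely used truncated algorithms" the abstract benchmarks against are typically sequential-SVD schemes (TT-SVD), which sweep through the modes and compress each unfolding by a truncated SVD. A minimal NumPy sketch of that baseline, assuming a given-ranks setting (the function names and the `ranks` interface are illustrative, not taken from the paper), could look like:

```python
import numpy as np

def tt_svd(tensor, ranks):
    """Decompose `tensor` into TT cores via sequential truncated SVDs.

    `ranks` lists the internal TT ranks r_1..r_{d-1}; the boundary
    ranks r_0 = r_d = 1 are added automatically.
    """
    dims = tensor.shape
    d = len(dims)
    ranks = [1] + list(ranks) + [1]
    cores = []
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        # Truncated SVD of the current mode-k unfolding.
        u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(ranks[k + 1], len(s))
        ranks[k + 1] = r
        # Left singular vectors become the k-th order-3 TT core.
        cores.append(u[:, :r].reshape(ranks[k], dims[k], r))
        # Carry the remainder S @ V^T into the next unfolding.
        unfolding = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
    cores.append(unfolding.reshape(ranks[-2], dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=(-1, 0))
    return full.squeeze(axis=(0, -1))
```

With the internal ranks set to the true ranks of the unfoldings, the reconstruction is exact up to floating-point error; the paper's alternating core-update algorithms aim to improve on this one-pass truncation for fixed-rank and fixed-error settings.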
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
20201 - Electrical and electronic engineering
Result continuities
Project
<a href="/en/project/GA17-00902S" target="_blank" >GA17-00902S: Advanced Joint Blind Source Separation Methods</a><br>
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Publication year
2020
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
IEEE Transactions on Neural Networks and Learning Systems
ISSN
2162-237X
e-ISSN
—
Volume of the periodical
31
Issue of the periodical within the volume
11
Country of publishing house
US - UNITED STATES
Number of pages
17
Pages from-to
4622-4636
UT code for WoS article
000587699700017
EID of the result in the Scopus database
2-s2.0-85093097685