Transfer Learning of Transformers for Spoken Language Understanding
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F22%3A43965708" target="_blank" >RIV/49777513:23520/22:43965708 - isvavai.cz</a>
Result on the web
<a href="http://hdl.handle.net/11025/47772" target="_blank" >http://hdl.handle.net/11025/47772</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Transfer Learning of Transformers for Spoken Language Understanding
Original language description
Pre-trained models used in transfer-learning scenarios have recently become very popular. Such models benefit from the availability of large collections of unlabeled text data. In this paper, we propose two kinds of transformer models for dialog systems. Specifically, the Wav2Vec 2.0 and T5 text-to-text transformer models are used as the speech recognizer and the spoken language understanding module, respectively. The aim of this work is to outperform a baseline model based on a DNN-HMM speech recognizer and a CNN understanding module.
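The description above outlines a cascade architecture: audio is first transcribed by a Wav2Vec 2.0 speech recognizer, and the transcript is then mapped to a semantic frame by a T5 text-to-text understanding module. A minimal sketch of that cascade is shown below; the two placeholder functions stand in for the actual fine-tuned models, and the example utterance and frame format are illustrative assumptions, not the authors' code or data.

```python
# Cascade SLU pipeline sketched from the abstract:
# audio -> ASR (Wav2Vec 2.0) -> transcript -> SLU (T5 text-to-text) -> frame.
# Both model calls are placeholders for the real fine-tuned transformers.

def recognize_speech(audio: bytes) -> str:
    """Placeholder for a Wav2Vec 2.0 ASR model (audio -> transcript)."""
    # A real system would run a fine-tuned Wav2Vec 2.0 CTC model here.
    return "set an alarm for seven"

def understand(transcript: str) -> dict:
    """Placeholder for a T5 SLU model (transcript -> semantic frame)."""
    # T5 casts understanding as text-to-text generation; here we mock
    # the decoded output as an intent plus slot/value pairs.
    if "alarm" in transcript:
        return {"intent": "set_alarm", "slots": {"time": "seven"}}
    return {"intent": "unknown", "slots": {}}

def spoken_language_understanding(audio: bytes) -> dict:
    """Run the two-stage cascade on raw audio."""
    transcript = recognize_speech(audio)
    return understand(transcript)

frame = spoken_language_understanding(b"\x00\x01")  # dummy audio bytes
print(frame["intent"])
```

The point of the sketch is only the interface between the two stages: the ASR output is plain text, so the understanding module can be any text-to-text model swapped in without changing the pipeline.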
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
—
OECD FORD branch
20205 - Automation and control systems
Result continuities
Project
—
Continuities
S - Specific university research (Specifický výzkum na vysokých školách)
Others
Publication year
2022
Confidentiality
S - Complete and accurate data on the project are not subject to protection under special legal regulations