Probability-Aware Word-Confusion-Network-to-Text Alignment Approach for Intent Classification
Result identifiers
Result code in IS VaVaI
RIV/00216305:26230/24:PU155584 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F24%3APU155584)
Result on the web
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10445934
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
Probability-Aware Word-Confusion-Network-to-Text Alignment Approach for Intent Classification
Result description in original language
Spoken Language Understanding (SLU) technologies have greatly improved due to the effective pretraining of speech representations. A common requirement of industry-based solutions is the portability to deploy SLU models in voice-assistant devices. Thus, distilling knowledge from large text-based language models has become an attractive solution for achieving good performance and guaranteeing portability. In this paper, we introduce a novel architecture that uses a cross-modal attention mechanism to extract bin-level contextual embeddings from a word-confusion network (WCN) encoding such that these can be directly compared and aligned with traditional text-based contextual embeddings. This alignment is achieved using a recently proposed tokenwise contrastive loss function. We validate our architecture's effectiveness by fine-tuning our WCN-based pretrained model to do intent classification (IC) on the well-known SLURP dataset. Obtained accuracy on the IC task (81%) depicts a 9.4% r
Title in English
Probability-Aware Word-Confusion-Network-to-Text Alignment Approach for Intent Classification
Result description in English
Spoken Language Understanding (SLU) technologies have greatly improved due to the effective pretraining of speech representations. A common requirement of industry-based solutions is the portability to deploy SLU models in voice-assistant devices. Thus, distilling knowledge from large text-based language models has become an attractive solution for achieving good performance and guaranteeing portability. In this paper, we introduce a novel architecture that uses a cross-modal attention mechanism to extract bin-level contextual embeddings from a word-confusion network (WCN) encoding such that these can be directly compared and aligned with traditional text-based contextual embeddings. This alignment is achieved using a recently proposed tokenwise contrastive loss function. We validate our architecture's effectiveness by fine-tuning our WCN-based pretrained model to do intent classification (IC) on the well-known SLURP dataset. Obtained accuracy on the IC task (81%) depicts a 9.4% r
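The description names two components: a cross-modal attention layer that pools WCN bin encodings into token-aligned embeddings, and a tokenwise contrastive loss that pulls each aligned embedding toward its text-based counterpart. The following is a minimal PyTorch sketch of that pairing only, not the authors' implementation; the class name WCNToTextAligner, the embedding dimension, the temperature, and the use of in-batch negatives are illustrative assumptions.

# Minimal sketch (not the paper's code): cross-modal attention from text tokens
# over WCN bin embeddings, followed by a tokenwise contrastive (InfoNCE-style)
# alignment loss. All names, shapes, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WCNToTextAligner(nn.Module):
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        # Text token embeddings act as queries; WCN bins are keys and values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, wcn_bins, text_tokens):
        # wcn_bins:    (B, N_bins, dim)  probability-weighted WCN bin encodings
        # text_tokens: (B, N_tok,  dim)  contextual embeddings from a text LM
        aligned, _ = self.cross_attn(query=text_tokens, key=wcn_bins, value=wcn_bins)
        return aligned  # (B, N_tok, dim): one WCN-derived embedding per text token

def tokenwise_contrastive_loss(aligned, text_tokens, temperature=0.07):
    # Each token position's text embedding is the positive; every other token
    # in the batch serves as a negative (one common in-batch formulation).
    B, T, D = aligned.shape
    a = F.normalize(aligned.reshape(B * T, D), dim=-1)
    t = F.normalize(text_tokens.reshape(B * T, D), dim=-1)
    logits = (a @ t.T) / temperature                 # (B*T, B*T) similarities
    targets = torch.arange(B * T, device=logits.device)
    return F.cross_entropy(logits, targets)

# Usage with random tensors standing in for real WCN and text encoder outputs.
model = WCNToTextAligner()
wcn = torch.randn(2, 20, 768)   # e.g. 20 WCN bins per utterance
txt = torch.randn(2, 12, 768)   # e.g. 12 subword tokens from the teacher LM
loss = tokenwise_contrastive_loss(model(wcn, txt), txt)

The in-batch-negative objective above is only one instantiation of a tokenwise contrastive loss; the loss function the paper actually adopts may draw negatives differently.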
Classification
Type
O - Other results
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
R - EC Framework Programme project
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations