Multi-Task Learning in Natural Language Processing: An Overview
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AD7U9WHXB" target="_blank" >RIV/00216208:11320/25:D7U9WHXB - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203247218&doi=10.1145%2f3663363&partnerID=40&md5=56e5b2119a708636335fa8acb175ff47" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203247218&doi=10.1145%2f3663363&partnerID=40&md5=56e5b2119a708636335fa8acb175ff47</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1145/3663363" target="_blank" >10.1145/3663363</a>
Alternative languages
Result language
English
Title in original language
Multi-Task Learning in Natural Language Processing: An Overview
Result description in original language
Deep learning approaches have achieved great success in the field of Natural Language Processing (NLP). However, directly training deep neural models often suffers from the overfitting and data scarcity problems that are pervasive in NLP tasks. In recent years, Multi-Task Learning (MTL), which leverages useful information from related tasks to improve performance on all of them simultaneously, has been used to address these problems. In this article, we give an overview of the use of MTL in NLP tasks. We first review MTL architectures used in NLP tasks and categorize them into four classes: parallel, hierarchical, modular, and generative adversarial architectures. We then present optimization techniques covering loss construction, gradient regularization, data sampling, and task scheduling for properly training a multi-task model. After presenting applications of MTL in a variety of NLP tasks, we introduce several benchmark datasets. Finally, we conclude and discuss possible research directions in this field. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Title in English
Multi-Task Learning in Natural Language Processing: An Overview
Result description in English
Deep learning approaches have achieved great success in the field of Natural Language Processing (NLP). However, directly training deep neural models often suffers from the overfitting and data scarcity problems that are pervasive in NLP tasks. In recent years, Multi-Task Learning (MTL), which leverages useful information from related tasks to improve performance on all of them simultaneously, has been used to address these problems. In this article, we give an overview of the use of MTL in NLP tasks. We first review MTL architectures used in NLP tasks and categorize them into four classes: parallel, hierarchical, modular, and generative adversarial architectures. We then present optimization techniques covering loss construction, gradient regularization, data sampling, and task scheduling for properly training a multi-task model. After presenting applications of MTL in a variety of NLP tasks, we introduce several benchmark datasets. Finally, we conclude and discuss possible research directions in this field. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
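The abstract groups MTL architectures into parallel, hierarchical, modular, and generative adversarial classes and lists loss construction among the optimization techniques. As an orientation aid only, the sketch below illustrates the simplest parallel case: a hard-parameter-sharing encoder with two task-specific heads trained on a fixed weighted-sum loss. The model, dimensions, and example tasks are hypothetical assumptions for illustration and are not taken from the surveyed paper.

```python
# Minimal sketch of a "parallel" (hard-parameter-sharing) multi-task model.
# All names, sizes, and the two example tasks are hypothetical.
import torch
import torch.nn as nn

class ParallelMTLModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256,
                 num_classes_task_a=2, num_classes_task_b=5):
        super().__init__()
        # Shared layers: every task reuses the same encoder parameters.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Task-specific heads branch off the shared representation.
        self.head_a = nn.Linear(hidden_dim, num_classes_task_a)  # e.g. sentiment
        self.head_b = nn.Linear(hidden_dim, num_classes_task_b)  # e.g. topic

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.encoder(embedded)
        shared = hidden[-1]                      # last layer's final hidden state
        return self.head_a(shared), self.head_b(shared)

model = ParallelMTLModel()
criterion = nn.CrossEntropyLoss()
tokens = torch.randint(0, 10000, (4, 20))        # dummy batch of 4 sequences
labels_a = torch.randint(0, 2, (4,))
labels_b = torch.randint(0, 5, (4,))
logits_a, logits_b = model(tokens)
# Loss construction: a fixed 0.5/0.5 weighted sum; the survey reviews adaptive
# weighting, gradient regularization, sampling, and scheduling alternatives.
loss = 0.5 * criterion(logits_a, labels_a) + 0.5 * criterion(logits_b, labels_b)
loss.backward()
```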
Classification
Type
JSC - Article in a periodical indexed in the SCOPUS database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
—
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
ACM Computing Surveys
ISSN
0360-0300
e-ISSN
—
Volume of the periodical
56
Issue of the periodical within the volume
12
Country of the periodical publisher
US - United States of America
Number of pages of the result
31
Pages from-to
1-31
UT WoS code of the article
—
EID of the result in the Scopus database
2-s2.0-85203247218