Adapter 与 Prompt Tuning 微调方法研究综述.
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AVSFJJTCL" target="_blank" >RIV/00216208:11320/23:VSFJJTCL - isvavai.cz</a>
Result on the web
<a href="http://cea.ceaj.org/CN/Y2023/V59/I2/12" target="_blank" >http://cea.ceaj.org/CN/Y2023/V59/I2/12</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.3778/j.issn.1002-8331.2209-0025" target="_blank" >10.3778/j.issn.1002-8331.2209-0025</a>
Alternative languages
Result language
Croatian
Title in the original language
Adapter 与 Prompt Tuning 微调方法研究综述.
Description of the result in the original language
Text mining is a branch of data mining that covers a variety of technologies; natural language processing is one of its core tools, aiming to help users obtain useful information from massive data. In recent years, pre-trained models have played an important role in advancing natural language processing research, and fine-tuning methods for pre-trained models have become an important research field. Based on the literature on pre-trained model fine-tuning published in recent years, this paper reviews the current mainstream Adapter and Prompt methods. First, the development of natural language processing is briefly reviewed, and the problems and difficulties in fine-tuning pre-trained models are analyzed. Second, the two kinds of fine-tuning methods, Adapter and Prompt, are introduced along with the classic methods in these two research directions, and their advantages, disadvantages, and performance are analyzed and summarized. Finally, the paper summarizes the limitations of current fine-tuning methods for pre-trained models and discusses future research directions.
Title in English
Adapter 与 Prompt Tuning 微调方法研究综述 (Survey of Adapter and Prompt Tuning Fine-tuning Methods)
Description of the result in English
Text mining is a branch of data mining that covers a variety of technologies; natural language processing is one of its core tools, aiming to help users obtain useful information from massive data. In recent years, pre-trained models have played an important role in advancing natural language processing research, and fine-tuning methods for pre-trained models have become an important research field. Based on the literature on pre-trained model fine-tuning published in recent years, this paper reviews the current mainstream Adapter and Prompt methods. First, the development of natural language processing is briefly reviewed, and the problems and difficulties in fine-tuning pre-trained models are analyzed. Second, the two kinds of fine-tuning methods, Adapter and Prompt, are introduced along with the classic methods in these two research directions, and their advantages, disadvantages, and performance are analyzed and summarized. Finally, the paper summarizes the limitations of current fine-tuning methods for pre-trained models and discusses future research directions.
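The abstract surveys two parameter-efficient fine-tuning families: bottleneck Adapter modules inserted into a frozen pre-trained model, and Prompt tuning, which prepends trainable vectors to the input. As a hedged illustration only (the shapes, names, and zero-initialization below are assumptions made for this sketch, not code from any of the surveyed papers), the core ideas can be written in a few lines of NumPy:

```python
import numpy as np

def adapter_forward(h, w_down, w_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    z = np.maximum(h @ w_down, 0.0)   # nonlinearity in the low-rank bottleneck
    return h + z @ w_up               # residual connection around the adapter

def prepend_soft_prompt(token_embeds, prompt_embeds):
    """Prompt tuning: prepend trainable prompt vectors to the token embeddings."""
    return np.concatenate([prompt_embeds, token_embeds], axis=0)

d, r, p = 8, 2, 3                      # hidden size, bottleneck size (r << d), prompt length
rng = np.random.default_rng(0)
h = rng.standard_normal((4, d))        # 4 hidden states of width d
w_down = 0.1 * rng.standard_normal((d, r))
w_up = np.zeros((r, d))                # zero-init: adapter starts as the identity map
prompt = rng.standard_normal((p, d))   # p trainable prompt vectors

out = adapter_forward(h, w_down, w_up)   # same shape as h; equals h at initialization
seq = prepend_soft_prompt(h, prompt)     # sequence length grows from 4 to 4 + p
```

In both families only the small new tensors (`w_down`, `w_up`, `prompt`) would be trained while the pre-trained weights stay frozen, which is what makes these methods parameter-efficient.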
Classification
Type
J<sub>ost</sub> - Other articles in peer-reviewed periodicals
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
Journal of Computer Engineering & Applications
ISSN
1002-8331
e-ISSN
—
Volume of the periodical
59
Issue of the periodical within the volume
2
Country of the publisher of the periodical
HR - Republic of Croatia
Number of pages of the result
10
Pages from-to
12-21
UT WoS code of the article
—
EID of the result in the Scopus database
—