Decomposed Meta-Learning for Few-Shot Sequence Labeling
The result's identifiers
Result code in IS VaVaI
RIV/00216208:11320/25:EIUTYMTK - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AEIUTYMTK)
Result on the web
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85187378886&doi=10.1109%2fTASLP.2024.3372879&partnerID=40&md5=5e65c0a82d01180d43b50077e344f24f
DOI - Digital Object Identifier
10.1109/TASLP.2024.3372879 (http://dx.doi.org/10.1109/TASLP.2024.3372879)
Alternative languages
Result language
English
Original language name
Decomposed Meta-Learning for Few-Shot Sequence Labeling
Original language description
Few-shot sequence labeling is a general problem formulation for many natural language understanding tasks in data-scarcity scenarios, which requires models to generalize to new types from only a few labeled examples. Recent advances mostly adopt metric-based meta-learning and thus face the challenges of modeling the miscellaneous Other prototype and the inability to generalize to classes with large domain gaps. To overcome these challenges, we propose a decomposed meta-learning framework for few-shot sequence labeling that breaks down the task into few-shot mention detection and few-shot type classification, and sequentially tackles them via meta-learning. Specifically, we employ model-agnostic meta-learning (MAML) to prompt the mention detection model to learn boundary knowledge shared across types. With the detected mention spans, we further leverage the MAML-enhanced span-level prototypical network for few-shot type classification. In this way, the decomposition framework bypasses the requirement of modeling the miscellaneous Other prototype. Meanwhile, the adoption of the MAML algorithm enables us to exploit the knowledge contained in support examples more efficiently, so that our model can quickly adapt to new types using only a few labeled examples. Under our framework, we explore a basic implementation that uses two separate models for the two subtasks. We further propose a joint model to reduce model size and inference time, making our framework more applicable to scenarios with limited resources. Extensive experiments on nine benchmark datasets, including named entity recognition, slot tagging, event detection, and part-of-speech tagging, show that the proposed approach achieves state-of-the-art performance across various few-shot sequence labeling tasks.
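The following is a minimal sketch, not the authors' implementation, of the two ingredients the abstract combines: a MAML-style inner/outer loop and span-level prototypical classification. It assumes span representations are already extracted by an encoder, and the names (SpanTypeClassifier, inner_adapt, prototypical_loss) are illustrative only; the full framework additionally meta-learns a separate mention detector, which is omitted here.

```python
# Hypothetical sketch of MAML-adapted span-level prototypical classification.
# Not the paper's code; the span encoder is reduced to a single linear head.
import torch
import torch.nn.functional as F

class SpanTypeClassifier(torch.nn.Module):
    """Toy span projection head standing in for the paper's full encoder."""
    def __init__(self, dim=32):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, spans, params=None):
        # Allow forward passes with externally supplied (adapted) parameters.
        w, b = (self.proj.weight, self.proj.bias) if params is None else params
        return F.linear(spans, w, b)

def prototypical_loss(support_emb, support_labels, query_emb, query_labels):
    """Nearest-prototype classification over span embeddings."""
    classes = support_labels.unique()
    protos = torch.stack([support_emb[support_labels == c].mean(0) for c in classes])
    logits = -torch.cdist(query_emb, protos) ** 2   # closer prototype -> higher score
    targets = torch.stack([(classes == y).nonzero().squeeze() for y in query_labels])
    return F.cross_entropy(logits, targets)

def inner_adapt(model, support_spans, support_labels, lr=0.1, steps=1):
    """MAML-style inner loop: adapt the head on the support set only.
    Simplification: the support set serves as both prototypes and targets."""
    params = (model.proj.weight, model.proj.bias)
    for _ in range(steps):
        emb = model(support_spans, params)
        loss = prototypical_loss(emb, support_labels, emb, support_labels)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = tuple(p - lr * g for p, g in zip(params, grads))
    return params

# Tiny episode: 2 entity types, 4 support spans and 4 query spans of dim 32.
torch.manual_seed(0)
model = SpanTypeClassifier()
support_x, support_y = torch.randn(4, 32), torch.tensor([0, 0, 1, 1])
query_x, query_y = torch.randn(4, 32), torch.tensor([0, 1, 0, 1])

adapted = inner_adapt(model, support_x, support_y)          # task-level update
outer_loss = prototypical_loss(model(support_x, adapted), support_y,
                               model(query_x, adapted), query_y)
outer_loss.backward()                                       # meta-gradient w.r.t. the initial parameters
print(f"episode meta-loss: {outer_loss.item():.4f}")
```

In an actual meta-training run, the outer gradients from many such episodes would be accumulated and applied to the shared initialization, which is what lets the adapted model generalize to unseen types from a handful of support spans.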
Czech name
—
Czech description
—
Classification
Type
Jsc - Article in a specialist periodical, which is included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2024
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
IEEE/ACM Transactions on Audio Speech and Language Processing
ISSN
2329-9290
e-ISSN
—
Volume of the periodical
32
Issue of the periodical within the volume
2024
Country of publishing house
US - UNITED STATES
Number of pages
14
Pages from-to
1980-1993
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85187378886