Specializing Multilingual Language Models: An Empirical Study
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F21%3A10442245" target="_blank" >RIV/00216208:11320/21:10442245 - isvavai.cz</a>
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Specializing Multilingual Language Models: An Empirical Study
Original language description
Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, universal dependency parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches while raising new questions around how to optimally adapt multilingual models to low-resource settings.
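Of the two adaptations the description names, script transliteration is the easier to illustrate. A minimal sketch, assuming a simple character-level mapping (a hypothetical table, not the authors' implementation): converting a low-resource language's script into the script the pretrained model saw most often, so the tokenizer produces fewer unknown pieces.

```python
# Illustrative character-level transliteration sketch.
# CYRILLIC_TO_LATIN is a hypothetical, deliberately tiny mapping for
# demonstration only; a real system would cover the full alphabet and
# handle digraphs and context-dependent rules.
CYRILLIC_TO_LATIN = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d",
    "е": "e", "к": "k", "м": "m", "о": "o", "т": "t",
}

def transliterate(text: str) -> str:
    """Replace each mapped character; pass everything else through unchanged."""
    return "".join(CYRILLIC_TO_LATIN.get(ch.lower(), ch) for ch in text)

print(transliterate("томат"))  # tomat
```

Vocabulary augmentation takes the opposite route: instead of reshaping the input text to fit the model, it extends the model's tokenizer (and embedding matrix) with frequent target-language subwords.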
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2021
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of the 1st Workshop on Multilingual Representation Learning
ISBN
978-1-954085-96-1
ISSN
—
e-ISSN
—
Number of pages
11
Pages from-to
51-61
Publisher name
Association for Computational Linguistics
Place of publication
Stroudsburg
Event location
Punta Cana
Event date
Nov 11, 2021
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—