Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AR8ZJDATM" target="_blank" >RIV/00216208:11320/23:R8ZJDATM - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85162062605&partnerID=40&md5=920079626a35aa8b4aba8d7cb2b59403" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85162062605&partnerID=40&md5=920079626a35aa8b4aba8d7cb2b59403</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages
Original language description
"The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages. We instead scale LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM that covers 511 predominantly low-resource languages. An important part of this effort is to collect and clean Glot500-c, a corpus that covers these 511 languages and allows us to train Glot500-m. We evaluate Glot500-m on five diverse tasks across these languages. We observe large improvements for both high-resource and low-resource languages compared to an XLM-R baseline. Our analysis shows that no single factor explains the quality of multilingual LLM representations. Rather, a combination of factors determines quality including corpus size, script, “help” from related languages and the total capacity of the model. Our work addresses an important goal of NLP research: we should not limit NLP to a small fraction of the world's languages and instead strive to support as many languages as possible to bring the benefits of NLP technology to all languages and cultures. Code, data and models are available at https://github.com/cisnlp/Glot500. © 2023 Association for Computational Linguistics."
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2023
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
"Proc. Annu. Meet. Assoc. Comput. Linguist."
ISBN
978-1-959429-72-2
ISSN
0736-587X
e-ISSN
—
Number of pages
36
Pages from-to
1082-1117
Publisher name
Association for Computational Linguistics (ACL)
Place of publication
—
Event location
Toronto, Canada
Event date
Jul 9, 2023
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—