A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3AY9U39UU2" target="_blank" >RIV/00216208:11320/22:Y9U39UU2 - isvavai.cz</a>
Result on the web
<a href="https://arxiv.org/pdf/2201.05601v2.pdf" target="_blank" >https://arxiv.org/pdf/2201.05601v2.pdf</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.48550/arXiv.2201.05601" target="_blank" >10.48550/arXiv.2201.05601</a>
Alternative languages
Result language
English
Title in original language
A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models
Description in original language
We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection, and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high-quality texts found online by targeting the Icelandic top-level domain .is. Several other public data sources are also collected for a total of 16 GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low- to medium-resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.
Title in English
A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models
Description in English
We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection, and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high-quality texts found online by targeting the Icelandic top-level domain .is. Several other public data sources are also collected for a total of 16 GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low- to medium-resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
—
Others
Year of implementation
2022
Data confidentiality code
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the proceedings
Proceedings of the Thirteenth Language Resources and Evaluation Conference
ISBN
979-10-95546-72-6
ISSN
—
e-ISSN
—
Number of pages
11
Pages from-to
4356-4366
Publisher name
European Language Resources Association
Place of publication
—
Event location
Marseille, France
Event date
1. 1. 2022
Event type by nationality
WRD - Worldwide event
UT WoS article code
—