A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models

Result identifiers

  • Result code in IS VaVaI

    RIV/00216208:11320/22:Y9U39UU2 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3AY9U39UU2)

  • Result on the web

    https://arxiv.org/pdf/2201.05601v2.pdf

  • DOI - Digital Object Identifier

    10.48550/arXiv.2201.05601 (http://dx.doi.org/10.48550/arXiv.2201.05601)

Alternative languages

  • Result language

    English

  • Original language name

    A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models

  • Original language description

    We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.
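The abstract above notes that the IC3 corpus was collected by targeting the Icelandic top-level domain .is. As a rough illustration of that idea (the function name and URL list below are illustrative, not taken from the paper), a TLD filter over candidate URLs might look like this:

```python
from urllib.parse import urlparse

def has_icelandic_tld(url: str) -> bool:
    """Return True if the URL's hostname falls under the .is top-level domain."""
    host = urlparse(url).hostname or ""
    return host == "is" or host.endswith(".is")

# Illustrative candidate URLs, e.g. from a Common Crawl URL index
urls = [
    "https://www.althingi.is/some/page",
    "https://example.com/is/page",   # ".is" appears only in the path, not the TLD
    "http://frettir.visir.is/grein",
]
icelandic = [u for u in urls if has_icelandic_tld(u)]
# icelandic now contains only the two URLs whose hosts end in .is
```

Checking the parsed hostname rather than doing a substring search on the raw URL avoids false positives such as ".is" occurring inside a path or query string.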

  • Czech name

  • Czech description

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the Thirteenth Language Resources and Evaluation Conference

  • ISBN

    979-10-95546-72-6

  • ISSN

  • e-ISSN

  • Number of pages

    11

  • Pages from-to

    4356-4366

  • Publisher name

    European Language Resources Association

  • Place of publication

  • Event location

    Marseille, France

  • Event date

    Jan 1, 2022

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article