One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages

Result identifiers

  • Result code in IS VaVaI

    RIV/00216224:14330/21:00122017 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F21%3A00122017)

  • Result on the web

    https://doi.org/10.26615/978-954-452-072-4_120

  • DOI - Digital Object Identifier

    10.26615/978-954-452-072-4_120

Alternative languages

  • Result language

    English

  • Original language name

    One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages

  • Original language description

    Unsupervised representation learning of words from large multilingual corpora is useful for downstream tasks such as word sense disambiguation, semantic text similarity, and information retrieval. The representation precision of log-bilinear fastText models is mostly due to their use of subword information. In previous work, the optimization of fastText's subword sizes has not been fully explored, and non-English fastText models were trained using subword sizes optimized for English and German word analogy tasks. In our work, we find the optimal subword sizes on the English, German, Czech, Italian, Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We then propose a simple n-gram coverage model and show that it predicts better-than-default subword sizes on the Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We show that the optimization of fastText's subword sizes matters and results in a 14% improvement on the Czech word analogy task. We also show that expensive parameter optimization can be replaced by a simple n-gram coverage model that consistently improves the accuracy of fastText models on the word analogy tasks by up to 3% compared to the default subword sizes, and that it is within 1% accuracy of the optimal subword sizes. (See the sketch after this list for how these subword sizes are set when training fastText.)

  • Czech name

  • Czech description
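
The description above centers on fastText's subword sizes, i.e. the minimum and maximum character n-gram lengths that the model hashes into its embedding table. As a minimal sketch of what tuning these sizes looks like in practice, the snippet below trains a fastText model with non-default n-gram bounds using Gensim's FastText implementation; the corpus path "corpus.txt" and the (2, 5) bounds are placeholder assumptions for illustration, not the optimal values reported in the paper.

    # Minimal sketch: training fastText with custom subword sizes (Gensim).
    # The defaults (min_n=3, max_n=6) were tuned on English and German word
    # analogy tasks; the paper above searches for better per-language values.
    # "corpus.txt" and the (2, 5) bounds are illustrative placeholders.
    from gensim.models import FastText
    from gensim.models.word2vec import LineSentence

    sentences = LineSentence("corpus.txt")  # one tokenized sentence per line

    model = FastText(
        sentences,
        vector_size=300,  # embedding dimensionality
        window=5,         # context window size
        min_count=5,      # drop very rare words
        sg=1,             # skip-gram objective
        min_n=2,          # shortest character n-gram used as a subword
        max_n=5,          # longest character n-gram used as a subword
    )

    # Subword information lets the model build vectors for unseen words:
    print(model.wv["unseenword"][:5])

Because subwords are hashed character n-grams, changing min_n and max_n changes both how well rare morphology is covered and how expensive training is, which is why a cheap coverage-based predictor of good bounds, as the paper proposes, is attractive compared to grid-searching them per language.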

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    60203 - Linguistics

Result continuities

  • Project

  • Continuities

    S - Specific research at universities

Others

  • Publication year

    2021

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

  • ISBN

    978-954-452-072-4

  • ISSN

    1313-8502

  • e-ISSN

  • Number of pages

    7

  • Pages from-to

    1068-1074

  • Publisher name

    INCOMA Ltd.

  • Place of publication

    Varna, Bulgaria

  • Event location

    online

  • Event date

    Sep 1, 2021

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article