One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages
Result identifiers
Result code in IS VaVaI
RIV/00216224:14330/21:00122017 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F21%3A00122017)
Result on the web
https://doi.org/10.26615/978-954-452-072-4_120
DOI - Digital Object Identifier
10.26615/978-954-452-072-4_120 (http://dx.doi.org/10.26615/978-954-452-072-4_120)
Alternative languages
Language of the result
English
Title in original language
One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages
Description in original language
Unsupervised representation learning of words from large multilingual corpora is useful for downstream tasks such as word sense disambiguation, semantic text similarity, and information retrieval. The representation precision of log-bilinear fastText models is mostly due to their use of subword information. In previous work, the optimization of fastText's subword sizes has not been fully explored, and non-English fastText models were trained using subword sizes optimized for English and German word analogy tasks. In our work, we find the optimal subword sizes on the English, German, Czech, Italian, Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We then propose a simple n-gram coverage model and show that it predicts better-than-default subword sizes on the Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We show that the optimization of fastText's subword sizes matters and results in a 14% improvement on the Czech word analogy task. We also show that expensive parameter optimization can be replaced by a simple n-gram coverage model that consistently improves the accuracy of fastText models on the word analogy tasks by up to 3% compared to the default subword sizes, while staying within 1% of the accuracy achieved with the optimal subword sizes.
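The quantity being tuned is fastText's subword size range, i.e., the minimum and maximum character n-gram lengths from which word vectors are composed. Below is a minimal sketch of training a fastText model with non-default subword sizes and scoring it on a word analogy benchmark, using gensim's FastText implementation; the corpus and analogy file paths are placeholders, and the min_n/max_n values are illustrative rather than the paper's reported optima.

```python
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

# Stream a whitespace-tokenized corpus, one sentence per line
# ("corpus.txt" is a placeholder path, not from the paper).
sentences = LineSentence("corpus.txt")

# Train a skip-gram fastText model with non-default subword sizes.
# min_n/max_n bound the character n-gram lengths; fastText's defaults
# (3 and 6) were tuned on English and German, which is exactly what
# the paper revisits for other languages.
model = FastText(
    sentences=sentences,
    vector_size=300,
    sg=1,        # skip-gram
    min_n=2,     # illustrative value, not a reported optimum
    max_n=5,     # illustrative value, not a reported optimum
    epochs=5,
    workers=4,
)

# Evaluate on a word analogy benchmark in the standard
# questions-words format ("analogies.txt" is a placeholder path).
accuracy, sections = model.wv.evaluate_word_analogies("analogies.txt")
print(f"word analogy accuracy: {accuracy:.3f}")
```

The paper's n-gram coverage model then predicts a good (min_n, max_n) pair directly, avoiding this per-language grid search; its exact formulation is given in the paper and is not reproduced here.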
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
60203 - Linguistics
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Year of implementation
2021
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Title of the proceedings
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
ISBN
978-954-452-072-4
ISSN
1313-8502
e-ISSN
—
Number of pages
7
Pages from-to
1068-1074
Publisher name
INCOMA Ltd.
Place of publication
Varna, Bulgaria
Event venue
online
Event date
1 September 2021
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
—