Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3AIFI2Y9FY" target="_blank" >RIV/00216208:11320/22:IFI2Y9FY - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1162/tacl_a_00467" target="_blank" >https://doi.org/10.1162/tacl_a_00467</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1162/tacl_a_00467" target="_blank" >10.1162/tacl_a_00467</a>
Alternative languages
Result language
English
Original language title
Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation
Original language description
Common designs of model evaluation typically focus on monolingual settings, where different models are compared according to their performance on a single data set that is assumed to be representative of all possible data for the task at hand. While this may be reasonable for a large data set, this assumption is difficult to maintain in low-resource scenarios, where artifacts of the data collection process can yield data sets that are outliers, potentially making conclusions about model performance coincidental. To address these concerns, we investigate model generalizability in crosslinguistic low-resource scenarios. Using morphological segmentation as the test case, we compare three broad classes of models with different parameterizations, taking data from 11 languages across 6 language families. In each experimental setting, we evaluate all models on an initial data set, then examine their performance consistency when introducing new randomly sampled data sets of the same size and when applying the trained models to unseen test sets of varying sizes. The results demonstrate that the extent of model generalization depends on the characteristics of the data set and does not necessarily rely heavily on the data set size. Among the characteristics that we studied, the ratio of morpheme overlap and that of the average number of morphemes per word between the training and test sets are the two most prominent factors. Our findings suggest that future work should adopt random sampling to construct data sets of different sizes in order to make more responsible claims about model evaluation.
English title
Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation
English description
Common designs of model evaluation typically focus on monolingual settings, where different models are compared according to their performance on a single data set that is assumed to be representative of all possible data for the task at hand. While this may be reasonable for a large data set, this assumption is difficult to maintain in low-resource scenarios, where artifacts of the data collection process can yield data sets that are outliers, potentially making conclusions about model performance coincidental. To address these concerns, we investigate model generalizability in crosslinguistic low-resource scenarios. Using morphological segmentation as the test case, we compare three broad classes of models with different parameterizations, taking data from 11 languages across 6 language families. In each experimental setting, we evaluate all models on an initial data set, then examine their performance consistency when introducing new randomly sampled data sets of the same size and when applying the trained models to unseen test sets of varying sizes. The results demonstrate that the extent of model generalization depends on the characteristics of the data set and does not necessarily rely heavily on the data set size. Among the characteristics that we studied, the ratio of morpheme overlap and that of the average number of morphemes per word between the training and test sets are the two most prominent factors. Our findings suggest that future work should adopt random sampling to construct data sets of different sizes in order to make more responsible claims about model evaluation.
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
Transactions of the Association for Computational Linguistics [online]
ISSN
2307-387X
e-ISSN
1988-2971
Volume of the periodical
10
Issue of the periodical within the volume
2022-4-6
Country of the publisher of the periodical
US - United States of America
Number of pages of the result
21
Pages from-to
393-413
UT WoS code of the article
000923414000004
EID of the result in the Scopus database
2-s2.0-85128885946