Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank
The result's identifiers
Result code in IS VaVaI
RIV/00216208:11320/20:10426917 - isvavai.cz
Result on the web
https://www.aclweb.org/anthology/2020.findings-emnlp.118
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank
Original language description
Pretrained multilingual contextual representations have shown great success, but due to the limits of their pretraining data, their benefits do not apply equally to all language varieties. This presents a challenge for language varieties unfamiliar to these models, whose labeled and unlabeled data is too limited to train a monolingual model effectively. We propose the use of additional language-specific pretraining and vocabulary augmentation to adapt multilingual models to low-resource settings. Using dependency parsing of four diverse low-resource language varieties as a case study, we show that these methods significantly improve performance over baselines, especially in the lowest-resource cases, and demonstrate the importance of the relationship between such models' pretraining data and target language varieties.
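To make the described approach concrete, the following is a minimal sketch (not the authors' released code) of the two adaptation steps named in the abstract: vocabulary augmentation of multilingual BERT and additional language-specific masked-LM pretraining on a small unlabeled corpus, here using the Hugging Face transformers and datasets libraries. The added tokens, file path, and hyperparameters are hypothetical placeholders.

from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Start from pretrained multilingual BERT.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Vocabulary augmentation: add frequent target-variety subwords that the
# multilingual vocabulary segments poorly (placeholder tokens below).
new_subwords = ["##ɣa", "##tlan", "qach"]  # hypothetical example tokens
tokenizer.add_tokens(new_subwords)
model.resize_token_embeddings(len(tokenizer))  # new embeddings start randomly initialised

# Language-specific pretraining: continue masked-LM training on the small
# unlabeled target-language corpus ("target_corpus.txt" is a placeholder path).
raw = load_dataset("text", data_files={"train": "target_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="mbert-adapted",
    num_train_epochs=20,              # small corpora typically need many epochs
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()

# The adapted encoder can then replace off-the-shelf multilingual BERT as the
# representation layer of a dependency parser.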
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2020
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations