How to Parse Low-Resource Languages: Cross-Lingual Parsing, Target Language Annotation, or Both?
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F19%3A10427139" target="_blank" >RIV/00216208:11320/19:10427139 - isvavai.cz</a>
Result on the web
<a href="https://www.aclweb.org/anthology/W19-7713" target="_blank" >https://www.aclweb.org/anthology/W19-7713</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
How to Parse Low-Resource Languages: Cross-Lingual Parsing, Target Language Annotation, or Both?
Original language description
To develop a parser for a language with no syntactically annotated data, we either have to develop a (small) treebank for the target language or rely on cross-lingual learning or projection, or possibly use some combination of these methods. In this paper, we compare the usefulness of cross-lingual model transfer and target language annotation for three different languages, with varying support from closely related high-resource languages. The results show that annotating even a very small amount of data in the target language is superior to any cross-lingual setup and that accuracy can be further improved by adding training data from related languages in a multilingual model.
English name
How to Parse Low-Resource Languages: Cross-Lingual Parsing, Target Language Annotation, or Both?
English description
To develop a parser for a language with no syntactically annotated data, we either have to develop a (small) treebank for the target language or rely on cross-lingual learning or projection, or possibly use some combination of these methods. In this paper, we compare the usefulness of cross-lingual model transfer and target language annotation for three different languages, with varying support from closely related high-resource languages. The results show that annotating even a very small amount of data in the target language is superior to any cross-lingual setup and that accuracy can be further improved by adding training data from related languages in a multilingual model.
Classification
Type
O - Miscellaneous results
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2019
Confidentiality
S - Úplné a pravdivé údaje o projektu nepodléhají ochraně podle zvláštních právních předpisů (Complete and accurate data on the project are not subject to protection under special legal regulations)