Zero-shot Dependency Parsing with Pre-trained Multilingual Sentence Representations
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F19%3A10427164" target="_blank" >RIV/00216208:11320/19:10427164 - isvavai.cz</a>
Result on the web
<a href="https://www.aclweb.org/anthology/D19-6132" target="_blank" >https://www.aclweb.org/anthology/D19-6132</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Zero-shot Dependency Parsing with Pre-trained Multilingual Sentence Representations
Original language description
We investigate whether off-the-shelf deep bidirectional sentence representations (Devlin et al., 2019) trained on a massively multilingual corpus (multilingual BERT) enable the development of an unsupervised universal dependency parser. This approach leverages only a mix of monolingual corpora in many languages and does not require any translation data, making it applicable to low-resource languages. In our experiments we outperform the best CoNLL 2018 language-specific systems on all six of the shared task's truly low-resource languages while using a single system. However, we also find that (i) parsing accuracy still varies dramatically when changing the training languages and (ii) in some target languages zero-shot transfer fails under all tested conditions, raising concerns about the 'universality' of the whole approach.
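The parsing side of the approach described above can be sketched as graph-based head selection over frozen contextual token embeddings. Everything in this sketch is illustrative rather than the paper's actual system: random vectors stand in for multilingual BERT token representations, and the bilinear weight `W` stands in for a trained biaffine arc scorer.

```python
import numpy as np

def predict_heads(embeddings: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Greedy head selection: for each token, pick the highest-scoring
    head (including an artificial ROOT node at index 0) under a
    bilinear arc score, as in biaffine graph-based parsing."""
    n, d = embeddings.shape
    # Prepend a ROOT vector so index 0 can serve as the syntactic root.
    root = np.zeros((1, d))
    nodes = np.vstack([root, embeddings])      # (n+1, d)
    scores = nodes @ W @ nodes.T               # (n+1, n+1) arc scores
    np.fill_diagonal(scores, -np.inf)          # a token cannot head itself
    # Row i+1 scores all candidate heads for token i; argmax picks one.
    # heads[i] is the 1-indexed head of token i, with 0 meaning ROOT.
    return scores[1:, :].argmax(axis=1)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))   # stand-in for frozen mBERT token vectors
W = rng.normal(size=(16, 16))       # stand-in for a trained biaffine weight
heads = predict_heads(tokens, W)
```

In the zero-shot setting, only the scorer would be trained on source-language treebanks; the multilingual encoder stays fixed, which is what lets the same system be applied to target languages with no annotated data.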
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2019
Confidentiality
S - Complete and truthful data about the project are not subject to protection under special legal regulations