Polyglot Contextual Representations Improve Crosslingual Transfer
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F19%3A10427115" target="_blank" >RIV/00216208:11320/19:10427115 - isvavai.cz</a>
Result on the web
<a href="https://www.aclweb.org/anthology/N19-1392" target="_blank" >https://www.aclweb.org/anthology/N19-1392</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Polyglot Contextual Representations Improve Crosslingual Transfer
Original language description
We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual word representations with those of multilingual representation learning. We produce language models from dissimilar language pairs (English/Arabic and English/Chinese) and use them in dependency parsing, semantic role labeling, and named entity recognition, with comparisons to monolingual and non-contextual variants. Our results provide further evidence for the benefits of polyglot learning, in which representations are shared across multiple languages.
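To make the idea of polyglot training concrete, the sketch below trains one language model on text pooled from two languages, so that its hidden states serve as shared multilingual contextual representations. This is only an illustration under simplifying assumptions: the toy corpora, model size, and LSTM training loop are hypothetical stand-ins, not the Rosita setup described in the paper (which builds on ELMo-style bidirectional language models trained on large corpora).

```python
# Minimal sketch of polyglot language-model training: a single model is fit on
# text pooled from two languages, so its hidden states become shared,
# multilingual contextual representations. Illustration only, not the paper's
# actual Rosita implementation.
import torch
import torch.nn as nn

# Hypothetical toy corpora; in practice these would be large monolingual
# corpora for each language in the pair (e.g. English/Arabic or English/Chinese).
english_text = "the cat sat on the mat . the dog ran ."
arabic_text = "جلس القط على الحصيرة . ركض الكلب ."

# Pool the two corpora and build one shared vocabulary over both languages.
tokens = (english_text + " " + arabic_text).split()
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = torch.tensor([vocab[t] for t in tokens])

class PolyglotLM(nn.Module):
    """A single LSTM language model trained on the mixed-language stream."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, x):
        hidden, _ = self.lstm(self.embed(x))
        return self.out(hidden), hidden  # logits + contextual representations

model = PolyglotLM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Next-token prediction over the pooled corpus: the same parameters see both
# languages, which is what makes the learned representations "polyglot".
inputs, targets = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
for _ in range(100):
    logits, _ = model(inputs)
    loss = loss_fn(logits.squeeze(0), targets.squeeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The hidden states can now be fed as features to downstream models
# (dependency parsing, semantic role labeling, NER) in either language.
_, contextual_reprs = model(inputs)
print(contextual_reprs.shape)  # (1, sequence_length, 64)
```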
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2019
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations