
75 Languages, 1 Model: Parsing Universal Dependencies Universally

The result's identifiers

  • Result code in IS VaVaI

    RIV/00216208:11320/19:10405574 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F19%3A10405574)

  • Result on the web

    https://www.aclweb.org/anthology/D19-1279/

  • DOI - Digital Object Identifier

    10.18653/v1/D19-1279 (http://dx.doi.org/10.18653/v1/D19-1279)

Alternative languages

  • Result language

    English

  • Original language name

    75 Languages, 1 Model: Parsing Universal Dependencies Universally

  • Original language description

    We present UDify, a multilingual multi-task model capable of accurately predicting universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for all 124 Universal Dependencies treebanks across 75 languages. By leveraging a multilingual BERT self-attention model pretrained on 104 languages, we found that fine-tuning it on all datasets concatenated together with simple softmax classifiers for each UD task can result in state-of-the-art UPOS, UFeats, Lemmas, UAS, and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages that neither UDify nor BERT have ever been trained on. Code for UDify is available at https://github.com/hyperparticle/udify.

  • Czech name

  • Czech description
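
The original language description above outlines the UDify architecture: a single multilingual BERT encoder shared across all 75 languages, with a simple softmax classifier on top for each UD task and no recurrent or language-specific components. The sketch below illustrates that idea only; it is not the authors' implementation (UDify itself is available at https://github.com/hyperparticle/udify), and the encoder name and label-set sizes are placeholder assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskUDTagger(nn.Module):
    """Shared multilingual encoder with one softmax head per UD task (illustrative only)."""

    def __init__(self, encoder_name="bert-base-multilingual-cased",
                 n_upos=17, n_ufeats=500, n_deprels=37):
        super().__init__()
        # Shared self-attention encoder pretrained on many languages.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One linear classifier per task; softmax is applied via cross-entropy at training time.
        self.upos_head = nn.Linear(hidden, n_upos)
        self.ufeats_head = nn.Linear(hidden, n_ufeats)
        self.deprel_head = nn.Linear(hidden, n_deprels)
        # Note: full dependency parsing also requires predicting head indices
        # (e.g., with a biaffine attention layer), which this sketch omits.

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return {
            "upos": self.upos_head(states),      # (batch, seq_len, n_upos)
            "ufeats": self.ufeats_head(states),  # (batch, seq_len, n_ufeats)
            "deprel": self.deprel_head(states),  # (batch, seq_len, n_deprels)
        }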

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    The result was created during the realization of more than one project. More information is available in the Projects tab.

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2019

  • Confidentiality

    S - Complete and true data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the proceedings

    Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

  • ISBN

    978-1-950737-90-1

  • ISSN

  • e-ISSN

  • Number of pages

    17

  • Pages from-to

    2779-2795

  • Publisher name

    Association for Computational Linguistics

  • Place of publication

    Stroudsburg, PA, USA

  • Event location

    Hong Kong, China

  • Event date

    Nov 3, 2019

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article