Parameter space factorization for zero-shot learning across tasks and languages
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F21%3A10439969" target="_blank" >RIV/00216208:11320/21:10439969 - isvavai.cz</a>
Result on the web
<a href="https://verso.is.cuni.cz/pub/verso.fpl?fname=obd_publikace_handle&handle=TsbpRe7Ziu" target="_blank" >https://verso.is.cuni.cz/pub/verso.fpl?fname=obd_publikace_handle&handle=TsbpRe7Ziu</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1162/tacl_a_00374" target="_blank" >10.1162/tacl_a_00374</a>
Alternative languages
Result language
English
Original language name
Parameter space factorization for zero-shot learning across tasks and languages
Original language description
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods. Our code is available at github.com/cambridgeltl/parameter-factorization.
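The following is a minimal, illustrative sketch (not the authors' implementation at github.com/cambridgeltl/parameter-factorization) of the idea described above: classifier parameters for a (task, language) pair are generated from a task latent variable and a language latent variable, each with a mean-field Gaussian variational posterior fit on seen pairs, so an unseen pair can be handled by composing the two learned latents. All dimensions, the standard-normal prior, and the generator network are assumptions made for the sketch.

```python
# Hedged sketch of parameter-space factorization for zero-shot task-language
# transfer. Everything here (latent size, prior, generator, toy data) is an
# illustrative assumption, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, FEAT, CLASSES = 16, 32, 5  # assumed toy dimensions

class FactorizedClassifier(nn.Module):
    def __init__(self, n_tasks, n_langs):
        super().__init__()
        # variational parameters (mean, log-variance) per task and per language
        self.t_mu = nn.Parameter(torch.zeros(n_tasks, LATENT))
        self.t_logvar = nn.Parameter(torch.zeros(n_tasks, LATENT))
        self.l_mu = nn.Parameter(torch.zeros(n_langs, LATENT))
        self.l_logvar = nn.Parameter(torch.zeros(n_langs, LATENT))
        # generator mapping [task latent; language latent] -> classifier weights
        self.gen = nn.Linear(2 * LATENT, FEAT * CLASSES + CLASSES)

    def sample(self, mu, logvar):
        # reparameterized sample from the Gaussian posterior
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x, task, lang):
        zt = self.sample(self.t_mu[task], self.t_logvar[task])
        zl = self.sample(self.l_mu[lang], self.l_logvar[lang])
        theta = self.gen(torch.cat([zt, zl]))
        W = theta[:FEAT * CLASSES].view(CLASSES, FEAT)
        b = theta[-CLASSES:]
        return x @ W.t() + b

    def kl(self):
        # KL of all Gaussian posteriors against a standard-normal prior
        def term(mu, logvar):
            return 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum()
        return term(self.t_mu, self.t_logvar) + term(self.l_mu, self.l_logvar)

# Toy training loop on seen (task, language) pairs; pair (1, 1) is held out.
model = FactorizedClassifier(n_tasks=2, n_langs=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
seen_pairs = [(0, 0), (0, 1), (1, 0)]
for step in range(200):
    task, lang = seen_pairs[step % len(seen_pairs)]
    x = torch.randn(8, FEAT)             # stand-in features (e.g. encoder output)
    y = torch.randint(0, CLASSES, (8,))  # stand-in labels
    loss = F.cross_entropy(model(x, task, lang), y) + 1e-3 * model.kl()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Zero-shot prediction: compose the task-1 and language-1 posteriors,
# a combination never observed during training.
with torch.no_grad():
    preds = model(torch.randn(8, FEAT), task=1, lang=1).argmax(-1)
```

The key design point the sketch tries to convey is that the only pair-specific object at prediction time is the composition of two independently inferred latent variables, which is what makes classification on an unseen task-language combination possible.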
Czech name
—
Czech description
—
Classification
Type
J<sub>SC</sub> - Article in a specialist periodical, which is included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2021
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
Transactions of the Association for Computational Linguistics
ISSN
2307-387X
e-ISSN
—
Volume of the periodical
9
Issue of the periodical within the volume
01.02.2021
Country of publishing house
US - UNITED STATES
Number of pages
19
Pages from-to
410-428
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85110409623