Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models
Result identifiers
Result code in IS VaVaI
RIV/00216208:11320/22:R54RBYBP - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3AR54RBYBP)
Result on the web
https://aclanthology.org/2022.naacl-main.114
DOI - Digital Object Identifier
10.18653/v1/2022.naacl-main.114 (http://dx.doi.org/10.18653/v1/2022.naacl-main.114)
Alternative languages
Result language
English
Original-language title
Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models
Original-language description
The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.
English title
Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models
English description
The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.
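For illustration only: the description above mentions measuring the cross-lingual overlap between neurons identified by a probe. The sketch below shows one simple way such an overlap could be quantified (Jaccard similarity of the top-k neurons selected per language for a given morphosyntactic category); the function names, the toy rankings, and the choice of k are assumptions for this sketch, not the probe or metric used in the paper.

from itertools import combinations

def top_k_neurons(ranking, k):
    """Return the set of the k highest-ranked neuron indices."""
    return set(ranking[:k])

def pairwise_overlap(rankings, k=50):
    """Jaccard overlap of top-k neuron sets for every language pair.

    `rankings` maps a language code to a list of neuron indices sorted by
    how strongly a probe associates each neuron with the category.
    """
    overlaps = {}
    for (lang_a, rank_a), (lang_b, rank_b) in combinations(rankings.items(), 2):
        set_a, set_b = top_k_neurons(rank_a, k), top_k_neurons(rank_b, k)
        overlaps[(lang_a, lang_b)] = len(set_a & set_b) / len(set_a | set_b)
    return overlaps

if __name__ == "__main__":
    # Toy, hypothetical rankings over a handful of neuron indices.
    toy = {
        "en": [3, 17, 42, 256, 511],
        "cs": [17, 3, 99, 256, 700],
        "fi": [600, 601, 602, 603, 604],
    }
    print(pairwise_overlap(toy, k=5))

An overlap of 1.0 would mean two languages rely on exactly the same top-k neurons for the category, while 0.0 would mean disjoint sets; the paper reports that observed overlaps are significant but vary with category, language proximity, and pre-training data size.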
Classification
Type
D - Conference proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
—
Other
Year of implementation
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
ISBN
978-1-955917-71-1
ISSN
—
e-ISSN
—
Number of pages
10
Pages from-to
1589-1598
Publisher name
Association for Computational Linguistics
Place of publication
—
Event location
Seattle, United States
Event date
1. 1. 2022
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
—