Structural Contrastive Pretraining for Cross-Lingual Comprehension
The result's identifiers
Result code in IS VaVaI
RIV/00216208:11320/23:HMJ9ZHA9 - https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AHMJ9ZHA9
Result on the web
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85175492961&partnerID=40&md5=11f69d8b05d3c4ce244f768fa8b01ab7
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Structural Contrastive Pretraining for Cross-Lingual Comprehension
Original language description
"Multilingual language models trained using various pre-training tasks like mask language modeling (MLM) have yielded encouraging results on a wide range of downstream tasks. Despite the promising performances, structural knowledge in cross-lingual corpus is less explored in current works, leading to the semantic misalignment. In this paper, we propose a new pre-training task named Structural Contrast Pretraining (SCP) to align the structural words in a parallel sentence, improving the models' linguistic versatility and their capacity to understand representations in multilingual languages. Concretely, SCP treats each structural word in source and target languages as a positive pair. We further propose Cross-lingual Momentum Contrast (CL-MoCo) to optimize negative pairs by maintaining a large size of the queue. CL-MoCo extends the original MoCo approach into cross-lingual training and jointly optimizes the source-to-target language and target-to-source language representations in SCP, resulting in a more suitable encoder for cross-lingual transfer learning. We conduct extensive experiments and prove the effectiveness of our resulting model, named XLM-SCP, on three cross-lingual tasks across five datasets such as MLQA, WikiAnn. Our codes are available at https://github.com/nuochenpku/SCP. © 2023 Association for Computational Linguistics."
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2023
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
"Proc. Annu. Meet. Assoc. Comput Linguist."
ISBN
978-1-959429-62-3
ISSN
0736-587X
e-ISSN
—
Number of pages
16
Pages from-to
2042-2057
Publisher name
Association for Computational Linguistics (ACL)
Place of publication
—
Event location
Melaka, Malaysia
Event date
Jan 1, 2023
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—