Cross-lingual transfer learning for relation extraction using Universal Dependencies
Result identifiers
Result code in IS VaVaI
RIV/00216208:11320/22:6TSMFV6D (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3A6TSMFV6D)
Result on the web
https://www.sciencedirect.com/science/article/pii/S0885230821000711
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1016/j.csl.2021.101265" target="_blank" >10.1016/j.csl.2021.101265</a>
Alternativní jazyky
Jazyk výsledku
angličtina
Název v původním jazyce
Cross-lingual transfer learning for relation extraction using Universal Dependencies
Description in the original language
This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in text. The goal is to train classifiers for low-resource languages using annotated data from high-resource languages. Related methods usually employ parallel data or machine translation (MT) to project annotated data from a source to a target language. However, the availability and quality of parallel data and MT are major challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to use a tree-based representation of the data that is highly informative for classifying semantic relations and is also shared across languages. All training and test data are represented in this form. We propose to use Universal Dependencies (UD) parsing, a language-agnostic formalism for representing syntactic structures. Equipping UD parse trees with multilingual word embeddings yields an ideal representation for the cross-language relation extraction task. We propose two deep networks that use this representation: the first utilizes the Shortest Dependency Path of UD trees, while the second employs UD-based positional embeddings. Experiments are performed using the SemEval 2010 Task 8 training data, with French and Farsi as the test languages. The results show F1 scores of 63.9% and 56.2% for the French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation of cross-language tasks.
Title in English
Cross-lingual transfer learning for relation extraction using Universal Dependencies
Description in English
This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in text. The goal is to train classifiers for low-resource languages using annotated data from high-resource languages. Related methods usually employ parallel data or machine translation (MT) to project annotated data from a source to a target language. However, the availability and quality of parallel data and MT are major challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to use a tree-based representation of the data that is highly informative for classifying semantic relations and is also shared across languages. All training and test data are represented in this form. We propose to use Universal Dependencies (UD) parsing, a language-agnostic formalism for representing syntactic structures. Equipping UD parse trees with multilingual word embeddings yields an ideal representation for the cross-language relation extraction task. We propose two deep networks that use this representation: the first utilizes the Shortest Dependency Path of UD trees, while the second employs UD-based positional embeddings. Experiments are performed using the SemEval 2010 Task 8 training data, with French and Farsi as the test languages. The results show F1 scores of 63.9% and 56.2% for the French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation of cross-language tasks.
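The description above mentions two network inputs: the Shortest Dependency Path (SDP) through the UD tree and UD-based positional embeddings. Below is a minimal sketch, not taken from the paper, of how an SDP between two entity tokens can be read off a UD parse; the simplified whitespace-separated parse table, the example sentence, and all function names are illustrative assumptions.

# Minimal SDP-extraction sketch (illustrative only, not the authors' implementation).
# The parse is a simplified CoNLL-U-style table: ID FORM HEAD DEPREL.
from collections import deque

PARSE = """
1 The           2 det
2 fire          4 nsubj
3 was           4 aux
4 caused        0 root
5 by            7 case
6 a             7 det
7 short-circuit 4 obl
"""

def read_parse(text):
    """Return {token_id: (form, head_id)} for one parsed sentence."""
    tokens = {}
    for line in text.strip().splitlines():
        tid, form, head, _deprel = line.split()
        tokens[int(tid)] = (form, int(head))
    return tokens

def shortest_dependency_path(tokens, src, dst):
    """BFS over the undirected dependency tree; returns the SDP as word forms."""
    adjacency = {tid: set() for tid in tokens}
    for tid, (_, head) in tokens.items():
        if head != 0:                      # 0 is the artificial root, not a token
            adjacency[tid].add(head)
            adjacency[head].add(tid)
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                    # reconstruct the path back to src
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return [tokens[tid][0] for tid in reversed(path)]
        for neighbour in adjacency[node]:
            if neighbour not in parents:
                parents[neighbour] = node
                queue.append(neighbour)
    return []

tokens = read_parse(PARSE)
# Entities "fire" (id 2) and "short-circuit" (id 7):
print(shortest_dependency_path(tokens, 2, 7))   # ['fire', 'caused', 'short-circuit']

The printed path is the kind of input the first network would consume; a plausible (assumed) reading of the UD-based positional embeddings used by the second network is to attach to each token its tree distance to the two entities instead.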
Classification
Type
J_imp - Article in a journal indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Linkages of the result
Project
—
Linkages
—
Others
Year of implementation
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
Computer Speech and Language
ISSN
0885-2308
e-ISSN
1095-8363
Journal volume
71
Issue of the journal within the volume
2022-1-1
Country of the journal publisher
GB - United Kingdom of Great Britain and Northern Ireland
Number of pages of the result
12
Pages from-to
1-12
UT WoS code of the article
000761599000006
EID of the result in the Scopus database
2-s2.0-85111004456