MT4CrossOIE: Multi-stage tuning for cross-lingual open information extraction

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AG2EURCKF" target="_blank" >RIV/00216208:11320/25:G2EURCKF - isvavai.cz</a>

  • Result on the web

    <a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85199005546&doi=10.1016%2fj.eswa.2024.124760&partnerID=40&md5=5346c2848fb28ffc346f43d5691c9ab9" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85199005546&doi=10.1016%2fj.eswa.2024.124760&partnerID=40&md5=5346c2848fb28ffc346f43d5691c9ab9</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1016/j.eswa.2024.124760" target="_blank" >10.1016/j.eswa.2024.124760</a>

Alternative languages

  • Result language

    English

  • Original language name

    MT4CrossOIE: Multi-stage tuning for cross-lingual open information extraction

  • Original language description

    Cross-lingual open information extraction aims to extract structured information from raw text across multiple languages. Previous work uses a shared cross-lingual pre-trained model to handle the different languages but underuses the potential of language-specific representations. In this paper, we propose an effective multi-stage tuning framework called MT4CrossOIE, designed to enhance cross-lingual open information extraction by injecting language-specific knowledge into the shared model. Specifically, the cross-lingual pre-trained model is first tuned in a shared semantic space (e.g., the embedding matrix) with the encoder fixed, and the other components are then optimized in a second stage. After sufficient training, we freeze the pre-trained model and tune multiple extra low-rank language-specific modules using a mixture of LoRAs for model-based cross-lingual transfer. In addition, we leverage two-stage prompting to encourage a large language model (LLM) to annotate multilingual raw data for data-based cross-lingual transfer. The model is trained with multilingual objectives on our proposed dataset OpenIE4++, combining the model-based and data-based transfer techniques. Experimental results on various benchmarks emphasize the importance of aggregating multiple plug-and-play language-specific modules and demonstrate the effectiveness of MT4CrossOIE in cross-lingual OIE. © 2024 (See the sketch of the mixture-of-LoRAs step after this list.)

  • Czech name

  • Czech description
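The description above centers on one mechanism: freezing the shared pre-trained model and combining several low-rank (LoRA) language-specific modules through a mixture. Below is a minimal, hypothetical sketch of that idea in PyTorch. The class name, dimensions, and the token-level softmax gate are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: a mixture-of-LoRAs layer in the spirit of the
# abstract above. Routing scheme and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class LoRAMixture(nn.Module):
    """Frozen base linear layer plus several low-rank (LoRA) experts,
    combined by a learned softmax gate (hypothetical routing)."""

    def __init__(self, d_in: int, d_out: int, n_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        # One low-rank A/B pair per language-specific expert
        self.A = nn.Parameter(torch.randn(n_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        self.gate = nn.Linear(d_in, n_experts)  # routes inputs to experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in)
        weights = torch.softmax(self.gate(x), dim=-1)     # (batch, n_experts)
        low = torch.einsum("erd,bd->ber", self.A, x)      # (batch, n_experts, rank)
        upd = torch.einsum("eor,ber->beo", self.B, low)   # (batch, n_experts, d_out)
        mixed = (weights.unsqueeze(-1) * upd).sum(dim=1)  # weighted expert updates
        return self.base(x) + mixed

layer = LoRAMixture(d_in=768, d_out=768)
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])

Only the small A/B matrices and the gate receive gradients here, which mirrors the abstract's point: language-specific knowledge is injected without retraining the shared model.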

Classification

  • Type

    J_SC - Article in a specialist periodical, which is included in the SCOPUS database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2024

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    Expert Systems with Applications

  • ISSN

    0957-4174

  • e-ISSN

  • Volume of the periodical

    255

  • Issue of the periodical within the volume

    2024

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    12

  • Pages from-to

    1-12

  • UT code for WoS article

  • EID of the result in the Scopus database

    2-s2.0-85199005546