Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model

The result's identifiers

  • Result code in IS VaVaI

RIV/00216208:11320/25:5LLDRFMD (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3A5LLDRFMD)

  • Result on the web

https://www.scopus.com/inward/record.uri?eid=2-s2.0-85197193990&partnerID=40&md5=c4895ea8a5da596264097f2354211234

  • DOI - Digital Object Identifier

Alternative languages

  • Result language

    English

  • Original language name

    Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model

  • Original language description

    While supervised fine-tuning (SFT) has been a straightforward approach for tailoring the output of a foundation large language model (LLM) to specific preferences, concerns have been raised about the depth of this alignment, with some critiques suggesting it is merely “superficial”. We critically examine this hypothesis within the scope of cross-lingual generation tasks, proposing that the effectiveness of SFT may be constrained by its reliance on prior tokens to guide cross-lingual generation. Based on this crucial insight, and in response to the challenges posed by the costly and limited availability of non-English data for SFT, we introduce a novel training-free alignment method named PRETTY, which employs minimal task-related prior tokens to bridge the foundation LLM and the SFT LLM, achieving comparable performance without training. Experiments on machine translation and part-of-speech tagging across eight languages demonstrate the efficacy of PRETTY in cross-lingual settings. Remarkably, by initiating the decoding process with only one or two prior tokens, foundation LLMs can achieve performance comparable to their SFT counterparts. This method presents a cost-effective alternative to SFT and advances the democratization of multilingual LLMs. © 2024 Association for Computational Linguistics.

    (An illustrative sketch of this prior-token idea follows after this list.)

  • Czech name

  • Czech description
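
The abstract describes PRETTY's core mechanism: seeding the decoding of a foundation (non-SFT) LLM with one or two task-related prior tokens in the target language. The sketch below illustrates that idea with Hugging Face transformers; the model name, prompt wording, and choice of prior token are hypothetical stand-ins, not the paper's actual experimental setup or token-selection strategy.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Stand-in foundation (non-SFT) model; any base causal LM would do.
    model_name = "bigscience/bloom-560m"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    source = "The weather is nice today."
    prompt = f"Translate English to Czech.\nEnglish: {source}\nCzech:"

    # One or two prior tokens in the target language act as the "yarn"
    # that pulls decoding toward non-English output, with no training.
    prior = " Počasí"  # hypothetical first target-language token

    inputs = tokenizer(prompt + prior, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30, do_sample=False)

    # Decode only the newly generated continuation after the prior token.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    print(prior.strip(), tokenizer.decode(new_tokens, skip_special_tokens=True))

Without the prior token, a base model often continues the instruction in English; conditioning the first decoding step on a target-language token is what the abstract credits with closing most of the gap to the SFT model.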

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2024

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proc. Annu. Meet. Assoc. Comput. Linguist.

  • ISBN

    979-889176099-8

  • ISSN

    0736-587X

  • e-ISSN

  • Number of pages

    15

  • Pages from-to

    12131-12145

  • Publisher name

    Association for Computational Linguistics (ACL)

  • Place of publication

  • Event location

    Hybrid, Bangkok

  • Event date

    Jan 1, 2025

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article