Concept-aware Data Construction Improves In-context Learning of Language Models

Result identifiers

  • Result code in IS VaVaI

    RIV/00216224:14330/24:00136030 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F24%3A00136030)

  • Result on the web

    https://openreview.net/forum?id=KwDG8YHZnP

  • DOI - Digital Object Identifier

    10.48550/arXiv.2403.09703 (http://dx.doi.org/10.48550/arXiv.2403.09703)

Alternative languages

  • Result language

    English

  • Title in the original language

    Concept-aware Data Construction Improves In-context Learning of Language Models

  • Result description in the original language

    Many recent language models (LMs) of the Transformers family are capable of in-context learning (ICL), manifested in the LMs' ability to perform a new task solely from its description in a natural language input. Previous work curating these models assumes that ICL emerges from vast over-parametrization or the scale of multi-task training, but recent theoretical work attributes ICL emergence to training data properties, creating in-context learners with small, synthetic data. Inspired by these findings, we propose Concept-aware Training (CoAT), a framework for constructing training scenarios that make it beneficial for the LM to learn to utilize the analogical reasoning concepts from demonstrations. We find that by using CoAT, pre-trained transformers can learn to better utilize new latent concepts from demonstrations, and that this ability makes ICL more robust to the functional deficiencies of previous models. Finally, we show that concept-aware in-context learning improves ICL performance on a majority of new tasks compared to traditional instruction tuning, reaching performance comparable to multitask learners trained on orders of magnitude more data.

  • Title in English

    Concept-aware Data Construction Improves In-context Learning of Language Models

  • Result description in English

    Many recent language models (LMs) of the Transformers family are capable of in-context learning (ICL), manifested in the LMs' ability to perform a new task solely from its description in a natural language input. Previous work curating these models assumes that ICL emerges from vast over-parametrization or the scale of multi-task training, but recent theoretical work attributes ICL emergence to training data properties, creating in-context learners with small, synthetic data. Inspired by these findings, we propose Concept-aware Training (CoAT), a framework for constructing training scenarios that make it beneficial for the LM to learn to utilize the analogical reasoning concepts from demonstrations. We find that by using CoAT, pre-trained transformers can learn to better utilize new latent concepts from demonstrations, and that this ability makes ICL more robust to the functional deficiencies of previous models. Finally, we show that concept-aware in-context learning improves ICL performance on a majority of new tasks compared to traditional instruction tuning, reaching performance comparable to multitask learners trained on orders of magnitude more data.
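
The CoAT idea summarized in the description above lends itself to a short illustration: each training sequence is assembled so that the few-shot demonstrations share a latent reasoning concept with the predicted example, which makes attending to the demonstrations genuinely informative rather than incidental. The following is a minimal Python sketch of that data-construction step under this reading; the function name build_coat_examples, the concept_id field, and the Input/Output prompt template are illustrative assumptions, not the authors' exact implementation (see the OpenReview link above for the paper's method).

    # Illustrative sketch of concept-aware training data construction.
    # Assumptions: examples carry a `concept_id` annotation, and prompts
    # use a simple Input/Output template (both hypothetical choices).
    import random
    from collections import defaultdict

    def build_coat_examples(dataset, k=3, seed=0):
        """For each example, pick k demonstrations sharing its latent
        concept, so attending to the demonstrations helps prediction."""
        rng = random.Random(seed)
        by_concept = defaultdict(list)
        for ex in dataset:
            by_concept[ex["concept_id"]].append(ex)
        for ex in dataset:
            pool = [d for d in by_concept[ex["concept_id"]] if d is not ex]
            if len(pool) < k:
                continue  # too few concept-sharing demonstrations
            demos = rng.sample(pool, k)
            prompt = "\n\n".join(
                f"Input: {d['input']}\nOutput: {d['output']}" for d in demos
            )
            yield {"prompt": f"{prompt}\n\nInput: {ex['input']}\nOutput:",
                   "target": ex["output"]}

    # Toy usage: examples annotated with the concept they exercise.
    data = [{"concept_id": "negation", "input": f"q{i}", "output": f"a{i}"}
            for i in range(8)]
    first = next(build_coat_examples(data, k=2))
    print(first["prompt"], "->", first["target"])

The key design choice, per the abstract, is that demonstrations are drawn from the same concept pool as the predicted example rather than sampled at random; that shared-concept structure is what makes it beneficial for the model to learn to use demonstrations during training.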

Classification

  • Type

    O - Other results

  • CEP field

  • OECD FORD field

    10302 - Condensed matter physics (including formerly solid state physics, supercond.)

Result linkages

  • Project

  • Linkages

    S - Specific research at universities

Other

  • Year of application

    2024

  • Data confidentiality code

    S - Complete and true data on the project are not subject to protection under special legal regulations