
FLAT: Fusing layer representations for more efficient transfer learning in NLP

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3A43DLRMEI" target="_blank" >RIV/00216208:11320/25:43DLRMEI - isvavai.cz</a>

  • Result on the web

    <a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85201454475&doi=10.1016%2fj.neunet.2024.106631&partnerID=40&md5=59df237eda2be43d2cf0098d7d542033" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85201454475&doi=10.1016%2fj.neunet.2024.106631&partnerID=40&md5=59df237eda2be43d2cf0098d7d542033</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1016/j.neunet.2024.106631" target="_blank" >10.1016/j.neunet.2024.106631</a>

Alternative languages

  • Result language

    English

  • Original language name

    FLAT: Fusing layer representations for more efficient transfer learning in NLP

  • Original language description

    Parameter-efficient transfer learning (PETL) methods provide an efficient alternative to full fine-tuning. However, typical PETL methods inject the same structures into all Pre-trained Language Model (PLM) layers and use only the final hidden states for downstream tasks, regardless of the knowledge diversity across PLM layers. Additionally, the backpropagation path of existing PETL methods still passes through the frozen PLM during training, which is computationally and memory inefficient. In this paper, we propose FLAT, a generic PETL method that explicitly and individually combines knowledge across all PLM layers on a per-token basis to achieve better transfer. FLAT treats the backbone PLM as a feature extractor and combines the features in a side-network, so backpropagation does not involve the PLM, which results in a much smaller memory requirement than previous methods. Results on the GLUE benchmark show that FLAT outperforms other tuning techniques in low-resource scenarios and achieves on-par performance in high-resource scenarios with only 0.53% trainable parameters per task and 3.2× less GPU memory usage with BERTbase. In addition, an ablation study reveals that the proposed fusion layer effectively combines knowledge from the PLM and helps the classifier exploit that knowledge for downstream tasks. We will release our code for better reproducibility. (A hedged code sketch of the layer-fusion idea appears after this list.)

  • Czech name

  • Czech description
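
The description above conveys the core mechanism only in prose: hidden states from every layer of a frozen PLM are fused in a small side-network, so gradients never pass through the backbone. Below is a minimal PyTorch sketch of that layer-fusion idea. The class name SideFusion, the softmax-normalised per-layer weights, and the two-layer head are illustrative assumptions; the paper fuses representations per token, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SideFusion(nn.Module):
    """Hypothetical side-network: fuses the hidden states of every
    backbone layer with learned fusion weights, then classifies.
    The frozen PLM acts purely as a feature extractor, so the
    backward pass never touches it."""

    def __init__(self, num_layers: int, hidden: int, num_classes: int):
        super().__init__()
        # One learnable logit per backbone layer (softmax-normalised below).
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden // 4),
            nn.ReLU(),
            nn.Linear(hidden // 4, num_classes),
        )

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq, hidden), one per layer.
        stacked = torch.stack(all_hidden_states, dim=0)        # (L, B, T, H)
        weights = torch.softmax(self.layer_logits, dim=0)      # (L,)
        fused = (weights.view(-1, 1, 1, 1) * stacked).sum(0)   # (B, T, H)
        return self.head(fused[:, 0])                          # [CLS] position

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
backbone = AutoModel.from_pretrained("bert-base-uncased")
backbone.requires_grad_(False)  # frozen feature extractor

side = SideFusion(
    num_layers=backbone.config.num_hidden_layers + 1,  # +1 for embeddings
    hidden=backbone.config.hidden_size,
    num_classes=2,
)

batch = tokenizer(["a tiny example sentence"], return_tensors="pt")
with torch.no_grad():  # features only; no graph through the PLM
    features = backbone(**batch, output_hidden_states=True).hidden_states
logits = side(features)  # gradients flow only through the side-network
```

Since only the side-network's parameters require gradients, an optimizer built over side.parameters() trains the task head without backpropagating through BERT, which is the memory saving the description claims.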

Classification

  • Type

    JSC - Article in a specialist periodical, which is included in the SCOPUS database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2024

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    Neural Networks

  • ISSN

    0893-6080

  • e-ISSN

  • Volume of the periodical

    179

  • Issue of the periodical within the volume

    2024

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    15

  • Pages from-to

    1-15

  • UT code for WoS article

  • EID of the result in the Scopus database

    2-s2.0-85201454475