How does the task complexity of masked pretraining objectives affect downstream performance?

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AUHFEIXJP" target="_blank" >RIV/00216208:11320/23:UHFEIXJP - isvavai.cz</a>

  • Result on the web

    <a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85175439688&partnerID=40&md5=0eb598059f3ef78ab89fa48116651a54" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85175439688&partnerID=40&md5=0eb598059f3ef78ab89fa48116651a54</a>

  • DOI - Digital Object Identifier

Alternative languages

  • Result language

    English

  • Original language name

    How does the task complexity of masked pretraining objectives affect downstream performance?

  • Original language description

    "Masked language modeling (MLM) is a widely used self-supervised pretraining objective, where a model needs to predict an original token that is replaced with a mask given contexts. Although simpler and computationally efficient pretraining objectives, e.g., predicting the first character of a masked token, have recently shown comparable results to MLM, no objectives with a masking scheme actually outperform it in downstream tasks. Motivated by the assumption that their lack of complexity plays a vital role in the degradation, we validate whether more complex masked objectives can achieve better results and investigate how much complexity they should have to perform comparably to MLM. Our results using GLUE, SQuAD, and Universal Dependencies benchmarks demonstrate that more complicated objectives tend to show better downstream results with at least half of the MLM complexity needed to perform comparably to MLM. Finally, we discuss how we should pretrain a model using a masked objective from the task complexity perspective. © 2023 Association for Computational Linguistics."

  • Czech name

  • Czech description
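
Illustrative masking sketch

The abstract above describes masking-based pretraining objectives of varying task complexity. As a minimal, hedged Python sketch (not the paper's implementation), the snippet below shows BERT-style random masking plus the lower-complexity target mentioned in the abstract: predicting only the first character of each masked token. The 15% masking rate, the [MASK] symbol, and all function names are conventional illustrative assumptions, not details taken from the paper or this record.

    import random

    MASK = "[MASK]"  # assumption: conventional BERT-style mask symbol

    def mask_tokens(tokens, mask_prob=0.15, seed=0):
        """Corrupt a token sequence for an MLM-style objective.

        Returns the corrupted sequence and per-position labels: the
        original token where it was masked, None elsewhere. The 15%
        rate is the common BERT convention, assumed for illustration.
        """
        rng = random.Random(seed)
        corrupted, labels = [], []
        for tok in tokens:
            if rng.random() < mask_prob:
                corrupted.append(MASK)
                labels.append(tok)   # full-token target: the usual MLM objective
            else:
                corrupted.append(tok)
                labels.append(None)  # position not scored by the loss
        return corrupted, labels

    def first_char_labels(labels):
        """Simpler objective from the abstract: predict only the first
        character of each masked token (lower task complexity than MLM)."""
        return [tok[0] if tok else None for tok in labels]

    if __name__ == "__main__":
        tokens = "the quick brown fox jumps over the lazy dog".split()
        corrupted, labels = mask_tokens(tokens, seed=3)
        print(corrupted)                   # input with some tokens masked
        print(labels)                      # full-token targets
        print(first_char_labels(labels))   # reduced-complexity targets

Varying what must be predicted at the masked positions (the full token, its first character, or something in between) is the "task complexity" knob the paper studies.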

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    "Proc. Annu. Meet. Assoc. Comput Linguist."

  • ISBN

    978-195942962-3

  • ISSN

    0736-587X

  • e-ISSN

  • Number of pages

    11

  • Pages from-to

    10527-10537

  • Publisher name

    Association for Computational Linguistics (ACL)

  • Place of publication

  • Event location

    Dubrovnik

  • Event date

    Jan 1, 2023

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article