
Measuring Memorization Effect in Word-Level Neural Networks Probing

The result's identifiers

  • Result code in IS VaVaI

    RIV/00216208:11320/20:10424498 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F20%3A10424498)

  • Result on the web

    https://doi.org/10.1007/978-3-030-58323-1_19

  • DOI - Digital Object Identifier

    10.1007/978-3-030-58323-1_19

Alternative languages

  • Result language

    English

  • Original language name

    Measuring Memorization Effect in Word-Level Neural Networks Probing

  • Original language description

    Multiple studies have probed representations emerging in neural networks trained for end-to-end NLP tasks and examined what word-level linguistic information may be encoded in the representations. In classical probing, a classifier is trained on the representations to extract the target linguistic information. However, there is a threat of the classifier simply memorizing the linguistic labels for individual words, instead of extracting the linguistic abstractions from the representations, thus reporting false positive results. While considerable efforts have been made to minimize the memorization problem, the task of actually measuring the amount of memorization happening in the classifier has been understudied so far. In our work, we propose a simple general method for measuring the memorization effect, based on a symmetric selection of comparable sets of test words seen versus unseen in training. Our method can be used to explicitly quantify the amount of memorization happening in a probing setup […]

  • Czech name

  • Czech description
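The description above centers on comparing probe accuracy on test tokens of word types seen during probe training versus comparable unseen types. A minimal sketch of that idea on synthetic data (all names and numbers here are illustrative assumptions, not the authors' code): labels are assigned randomly per word type, so any seen-minus-unseen accuracy gap can only come from the probe memorizing word identities, not from linguistic information in the representations.

```python
# Sketch: measuring memorization via seen-vs-unseen word types
# (synthetic data; illustrative only, not the paper's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy vocabulary: each word type gets a representation and a random
# binary "linguistic" label (random => nothing to extract, only memorize).
n_types, dim = 60, 64
type_vecs = rng.normal(size=(n_types, dim))
type_labels = rng.integers(0, 2, size=n_types)

# Split word types symmetrically into probe-training ("seen") and
# held-out ("unseen") sets.
seen_types = np.arange(0, 40)
unseen_types = np.arange(40, 60)

def sample_tokens(types, n):
    """Sample token occurrences of the given word types with small noise."""
    idx = rng.choice(types, size=n)
    X = type_vecs[idx] + 0.05 * rng.normal(size=(n, dim))
    return X, type_labels[idx]

# Train a linear probe on tokens of seen types only.
X_train, y_train = sample_tokens(seen_types, 2000)
probe = LogisticRegression(C=10.0, max_iter=5000).fit(X_train, y_train)

# Evaluate on fresh test tokens of seen vs. unseen types.
X_seen, y_seen = sample_tokens(seen_types, 500)
X_unseen, y_unseen = sample_tokens(unseen_types, 500)
acc_seen = probe.score(X_seen, y_seen)
acc_unseen = probe.score(X_unseen, y_unseen)

# Since labels are random per type, the seen-minus-unseen gap directly
# quantifies memorization: unseen accuracy stays near chance.
print(f"seen: {acc_seen:.2f}  unseen: {acc_unseen:.2f}  "
      f"gap: {acc_seen - acc_unseen:.2f}")
```

In a real probing setup the labels are actual linguistic annotations, and the gap separates how much of the probe's accuracy reflects encoded abstractions (carried over to unseen types) from how much is per-word memorization.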

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    GA18-02196S: Linguistic Structure Representation in Neural Networks (/en/project/GA18-02196S)

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2020

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    23rd International Conference on Text, Speech and Dialogue

  • ISBN

    978-3-030-58322-4

  • ISSN

    0302-9743

  • e-ISSN

  • Number of pages

    9

  • Pages from-to

    180-188

  • Publisher name

    Springer

  • Place of publication

    Cham, Switzerland

  • Event location

    Brno, Czechia

  • Event date

    Sep 8, 2020

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article