Hidden in the Layers: Interpretation of Neural Networks for Natural Language Processing

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F20%3A10424385" target="_blank" >RIV/00216208:11320/20:10424385 - isvavai.cz</a>

  • Result on the web

  • DOI - Digital Object Identifier

Alternative languages

  • Result language

    English

  • Original language name

    Hidden in the Layers: Interpretation of Neural Networks for Natural Language Processing

  • Original language description

    In this book, we explore neural-network architectures and models that are used for Natural Language Processing (NLP). We analyze their internal representations (word-embeddings, hidden states, attention mechanism, and contextual embeddings) and review what properties these representations have and what kinds of linguistically interpretable features emerge in them. We use our own experimental results, as well as the results published by other research teams to present an overview of models and representations and their linguistic properties. In the beginning, we explain the basic concepts of deep learning and its usage in NLP and discuss details of the most prominent neural architectures and models. Then, we outline the concept of interpretability, different views on it, and introduce basic supervised and unsupervised methods that are used for interpreting trained neural-network models. The next part is devoted to static word embeddings. We show various methods for embeddings space visualization, compo

  • Czech name

  • Czech description
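
The description above refers to supervised methods for interpreting trained neural-network models; in the literature these are commonly called probing classifiers: a small classifier is trained on frozen representations, and its accuracy is read as a measure of how much linguistic information those representations encode. Below is a minimal sketch of this idea, assuming scikit-learn is available; the embeddings and labels are random placeholders standing in for real hidden states and linguistic annotations.

    # Minimal probing-classifier sketch (illustrative; placeholder data only).
    # Idea: freeze the representations, train a simple classifier on top of them,
    # and treat its accuracy as evidence of what linguistic information they encode.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Placeholder "contextual embeddings" for 1,000 tokens (dimension 768).
    # In a real probe these would be hidden states extracted from a trained model.
    X = rng.normal(size=(1000, 768))

    # Placeholder token-level labels, e.g. 0=NOUN, 1=VERB, 2=ADJ.
    y = rng.integers(0, 3, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # The probe is kept deliberately simple (linear), so that high accuracy
    # can be attributed to the representations rather than to the probe itself.
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)

    print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
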

Classification

  • Type

    B - Specialist book

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/GA18-02196S" target="_blank" >GA18-02196S: Linguistic Structure Representation in Neural Networks</a><br>

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2020

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • ISBN

    978-80-88132-10-3

  • Number of pages

    175

  • Publisher name

    Institute of Formal and Applied Linguistics

  • Place of publication

    Prague, Czechia

  • UT code for WoS book