
Are Multilingual Neural Machine Translation Models Better at Capturing Linguistic Features?

The result's identifiers

  • Result code in IS VaVaI

RIV/00216208:11320/20:10424330 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F20%3A10424330)

  • Result on the web

https://verso.is.cuni.cz/pub/verso.fpl?fname=obd_publikace_handle&handle=Ar.-9Myndf

  • DOI - Digital Object Identifier

10.14712/00326585.009 (http://dx.doi.org/10.14712/00326585.009)

Alternative languages

  • Result language

English

  • Original language name

    Are Multilingual Neural Machine Translation Models Better at Capturing Linguistic Features?

  • Original language description

We investigate the effect of training NMT models on multiple target languages. We hypothesize that the integration of multiple languages and the increase of linguistic diversity will lead to a stronger representation of syntactic and semantic features captured by the model. We test our hypothesis on two different NMT architectures: the widely used Transformer architecture and the Attention Bridge architecture. We train models on Europarl data and quantify the level of syntactic and semantic information discovered by the models using three different methods: SentEval linguistic probing tasks, an analysis of the attention structures regarding the inherent phrase and dependency information, and a structural probe on contextualized word representations. Our results show evidence that with a growing number of target languages the Attention Bridge model increasingly picks up certain linguistic properties, including some syntactic and semantic aspects of the sentence, whereas Transformer models are largely unaffected.
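The SentEval-style probing idea mentioned in the abstract can be illustrated with a minimal sketch: train a linear classifier ("probe") on frozen sentence embeddings to predict a linguistic property; high probe accuracy suggests the property is linearly encoded in the representation. The embeddings, labels, and property below are synthetic stand-ins, not the paper's actual data or setup.

```python
import numpy as np

# Hypothetical probing setup: synthetic "sentence embeddings" in which a
# binary linguistic label (e.g. past vs. present tense) is linearly encoded
# along one direction of the embedding space, plus Gaussian noise.
rng = np.random.default_rng(0)
n, d = 400, 16

y = rng.integers(0, 2, size=n)                     # binary linguistic property
direction = rng.normal(size=d)                     # encoding direction
X = rng.normal(size=(n, d)) + np.outer(y - 0.5, direction)

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)                # avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))                   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / n)                 # cross-entropy gradient step
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((X @ w + b > 0) == y)
print(f"probe accuracy: {accuracy:.2f}")
```

If the property were not encoded in the embeddings, probe accuracy would stay near chance; the paper applies this logic to representations from Transformer and Attention Bridge NMT encoders.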

  • Czech name

  • Czech description

Classification

  • Type

Jost - Miscellaneous article in a specialist periodical

  • CEP classification

  • OECD FORD branch

10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

GA18-02196S: Linguistic Structure Representation in Neural Networks (/en/project/GA18-02196S)

  • Continuities

P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2020

  • Confidentiality

S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    The Prague Bulletin of Mathematical Linguistics

  • ISSN

    0032-6585

  • e-ISSN

  • Volume of the periodical

    115

  • Issue of the periodical within the volume

    1

  • Country of publishing house

    CZ - CZECH REPUBLIC

  • Number of pages

    20

  • Pages from-to

    143-162

  • UT code for WoS article

  • EID of the result in the Scopus database