Comparing RNN and Transformer Context Representations in the Czech Answer Selection Task

The result's identifiers

  • Result code in IS VaVaI

    RIV/00216224:14330/22:00125094 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14330%2F22%3A00125094)

  • Result on the web

    http://dx.doi.org/10.5220/0000155600003116

  • DOI - Digital Object Identifier

    10.5220/0000155600003116

Alternative languages

  • Result language

    English

  • Original language name

    Comparing RNN and Transformer Context Representations in the Czech Answer Selection Task

  • Original language description

    Open domain question answering now inevitably builds upon advanced neural models processing large unstructured textual sources that serve as a kind of underlying knowledge base. In the case of non-mainstream, highly-inflected languages, the state-of-the-art approaches lack large training datasets, emphasizing the need for other improvement techniques. In this paper, we present a detailed evaluation of a new technique employing various context representations in the answer selection task, where the best answer sentence from a candidate document is identified as the most relevant to the human-entered question. The input data consist not only of each sentence in isolation but also of its preceding sentence(s) as the context. We compare seven different context representations, including direct recurrent network (RNN) embeddings and several BERT-model based sentence embedding vectors. All experiments are evaluated with the new version 3.1 of the Czech question answering benchmark dataset SQAD, which newly allows multiple correct answers. The comparison shows that the BERT-based sentence embeddings offer the best context representations, reaching a mean average precision of 83.39%, a new best score for this dataset.

  • Czech name

  • Czech description
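
Illustration of the described approach

The description above ranks each candidate sentence, taken together with its preceding sentence(s) as context, by the similarity of its embedding to the question embedding, and evaluates the ranking with mean average precision over possibly multiple correct answers. The Python sketch below only illustrates that general setup; it is not the authors' code: the embed function is a placeholder for any RNN- or BERT-based sentence encoder, and the example data are invented.

    import numpy as np

    def embed(texts):
        """Placeholder sentence encoder. In the paper this would be an RNN-
        or BERT-based model producing one vector per input text; random
        vectors are used here purely so the sketch runs."""
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(texts), 768))

    def rank_candidates(question, sentences, contexts):
        """Rank candidate answer sentences by cosine similarity between the
        question embedding and the embedding of each sentence prefixed with
        its preceding-sentence context."""
        q = embed([question])[0]
        cands = embed([f"{ctx} {sent}" for ctx, sent in zip(contexts, sentences)])
        sims = cands @ q / (np.linalg.norm(cands, axis=1) * np.linalg.norm(q))
        return np.argsort(-sims)  # candidate indices, best first

    def average_precision(ranking, correct):
        """Average precision for one question; `correct` may hold several
        sentence indices, mirroring the multiple correct answers allowed
        in SQAD 3.1."""
        hits, score = 0, 0.0
        for rank, idx in enumerate(ranking, start=1):
            if idx in correct:
                hits += 1
                score += hits / rank
        return score / max(len(correct), 1)

    # Toy usage with invented data: three candidates, index 1 is correct.
    ranking = rank_candidates(
        question="Example question?",
        sentences=["Sentence A.", "Sentence B.", "Sentence C."],
        contexts=["Context A.", "Context B.", "Context C."],
    )
    print("AP for this question:", average_precision(ranking, correct={1}))

The reported 83.39% corresponds to the mean of such per-question average precision values over the whole benchmark.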

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10200 - Computer and information sciences

Result continuities

  • Project

    LM2018101: Digital Research Infrastructure for the Language Technologies, Arts and Humanities

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)
    S - Specific research at universities

Others

  • Publication year

    2022

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART)

  • ISBN

    9789897585470

  • ISSN

  • e-ISSN

  • Number of pages

    7

  • Pages from-to

    388-394

  • Publisher name

    SCITEPRESS

  • Place of publication

    Portugal

  • Event location

    Portugal

  • Event date

    Jan 1, 2022

  • Type of event by nationality

    CST - National event

  • UT code for WoS article