Findings of the Shared Task on Multilingual Coreference Resolution
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3A10457094" target="_blank" >RIV/00216208:11320/22:10457094 - isvavai.cz</a>
Alternative codes found
RIV/49777513:23520/22:43965918
Result on the web
<a href="https://aclanthology.org/2022.crac-mcr.1/" target="_blank" >https://aclanthology.org/2022.crac-mcr.1/</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
Findings of the Shared Task on Multilingual Coreference Resolution
Description in original language
This paper presents an overview of the shared task on multilingual coreference resolution associated with the CRAC 2022 workshop. Participants were expected to develop trainable systems capable of identifying mentions and of clustering the mentions according to identity coreference. The public edition of CorefUD 1.0, which contains 13 datasets for 10 languages, served as the source of training and evaluation data. The CoNLL score, used in previous coreference-oriented shared tasks, served as the main evaluation metric. Five participating teams submitted 8 coreference prediction systems; in addition, the organizers provided a competitive transformer-based baseline system at the beginning of the shared task. The winning system outperformed the baseline by 12 percentage points (in terms of CoNLL scores averaged across all datasets for individual languages).
Title in English
Findings of the Shared Task on Multilingual Coreference Resolution
Description in English
This paper presents an overview of the shared task on multilingual coreference resolution associated with the CRAC 2022 workshop. Participants were expected to develop trainable systems capable of identifying mentions and of clustering the mentions according to identity coreference. The public edition of CorefUD 1.0, which contains 13 datasets for 10 languages, served as the source of training and evaluation data. The CoNLL score, used in previous coreference-oriented shared tasks, served as the main evaluation metric. Five participating teams submitted 8 coreference prediction systems; in addition, the organizers provided a competitive transformer-based baseline system at the beginning of the shared task. The winning system outperformed the baseline by 12 percentage points (in terms of CoNLL scores averaged across all datasets for individual languages).
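The aggregation described above can be sketched in a few lines. This is a minimal illustration based on the standard definition of the CoNLL score (the unweighted mean of the MUC, B-cubed, and CEAF-e F1 scores) and on the averaging scheme the abstract mentions; the function names and the example numbers are hypothetical, not the official shared-task scorer.

```python
from statistics import mean

def conll_score(muc_f1: float, b3_f1: float, ceafe_f1: float) -> float:
    """CoNLL score: unweighted mean of the three coreference F1 metrics."""
    return mean([muc_f1, b3_f1, ceafe_f1])

def macro_average(per_language_scores: dict[str, list[float]]) -> float:
    """Average per-dataset CoNLL scores within each language first,
    then average the per-language means across languages."""
    return mean(mean(scores) for scores in per_language_scores.values())

# Hypothetical example: two datasets for one language, one for another.
scores = {"cs": [70.0, 74.0], "en": [66.0]}
print(conll_score(60.0, 70.0, 80.0))  # 70.0
print(macro_average(scores))          # 69.0
```

Averaging per language before averaging across languages prevents a language with many datasets from dominating the final ranking.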
Classification
Type
O - Other results
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
The result was created during the implementation of multiple projects. More information is available in the Projects tab.
Linkages
P - Research and development project financed from public sources (with a link to CEP)
Others
Year of implementation
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations