Are the Multilingual Models Better? Improving Czech Sentiment with Transformers
Result identifiers
Result code in IS VaVaI
RIV/49777513:23520/21:43962571 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F21%3A43962571)
Result on the web
https://aclanthology.org/2021.ranlp-1.128/
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.26615/978-954-452-072-4_128" target="_blank" >10.26615/978-954-452-072-4_128</a>
Alternative languages
Result language
English
Title in the original language
Are the Multilingual Models Better? Improving Czech Sentiment with Transformers
Description in the original language
In this paper, we aim to improve Czech sentiment analysis with transformer-based models and their multilingual versions. More concretely, we study the task of polarity detection for the Czech language on three sentiment polarity datasets. We fine-tune and experiment with five multilingual and three monolingual models, comparing their performance with each other and with an older approach based on recurrent neural networks. Furthermore, we test the ability of the multilingual models to transfer knowledge from English to Czech (and vice versa) with zero-shot cross-lingual classification. Our experiments show that the large multilingual models can outperform the monolingual models. They are also able to detect polarity in another language without any training data in that language, with performance at most 4.4 % lower than that of state-of-the-art monolingual models. Moreover, we achieve new state-of-the-art results on all three datasets.
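The fine-tuning setup described above can be illustrated with a minimal sketch using the Hugging Face Transformers library. The model name (xlm-roberta-base), the three-class label set, and the toy Czech examples are illustrative assumptions, not the authors' exact configuration or data.

```python
# A minimal sketch (assumed setup, not the paper's exact configuration) of
# fine-tuning a multilingual transformer for Czech polarity detection.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # assumption; the paper evaluates several models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Toy Czech examples standing in for a real polarity dataset
# (0 = negative, 1 = neutral, 2 = positive).
train_data = Dataset.from_dict({
    "text": ["Skvělý film, vřele doporučuji.", "Nic moc, spíš průměr.", "Naprostá ztráta času."],
    "label": [2, 1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="czech-sentiment", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train_data).train()
```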
Title in English
Are the Multilingual Models Better? Improving Czech Sentiment with Transformers
Description in English
In this paper, we aim to improve Czech sentiment analysis with transformer-based models and their multilingual versions. More concretely, we study the task of polarity detection for the Czech language on three sentiment polarity datasets. We fine-tune and experiment with five multilingual and three monolingual models, comparing their performance with each other and with an older approach based on recurrent neural networks. Furthermore, we test the ability of the multilingual models to transfer knowledge from English to Czech (and vice versa) with zero-shot cross-lingual classification. Our experiments show that the large multilingual models can outperform the monolingual models. They are also able to detect polarity in another language without any training data in that language, with performance at most 4.4 % lower than that of state-of-the-art monolingual models. Moreover, we achieve new state-of-the-art results on all three datasets.
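The zero-shot cross-lingual transfer mentioned in the description amounts to fine-tuning a multilingual model on data in one language and then classifying text in the other language without any training examples in it. The sketch below shows the inference side under the same assumptions as the fine-tuning sketch above; the model name and the example sentences are illustrative only.

```python
# A minimal sketch of zero-shot cross-lingual inference: a multilingual model
# fine-tuned only on English polarity data classifies Czech text it never saw
# during training. Model name and example sentences are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption; any multilingual encoder works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
# ... fine-tune `model` on English sentiment data here (as in the sketch above) ...

czech_reviews = ["Ten telefon je naprosto skvělý.", "Obsluha byla bohužel hrozná."]
inputs = tokenizer(czech_reviews, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).tolist())  # predicted polarity labels for the Czech sentences
```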
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
S - Specific university research
Others
Year of implementation
2021
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Deep Learning for Natural Language Processing Methods and Applications
ISBN
978-954-452-072-4
ISSN
1313-8502
e-ISSN
2603-2813
Number of pages
12
Pages from-to
1138-1149
Publisher name
INCOMA Ltd.
Place of publication
Shoumen
Event venue
online
Event date
1 September 2021
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
—