Multi Task Learning Based Shallow Parsing for Indian Languages
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AACB8YER8" target="_blank" >RIV/00216208:11320/25:ACB8YER8 - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204884735&doi=10.1145%2f3664620&partnerID=40&md5=7943cf41bdc765d7ece1f5291ac66d7d" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204884735&doi=10.1145%2f3664620&partnerID=40&md5=7943cf41bdc765d7ece1f5291ac66d7d</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1145/3664620" target="_blank" >10.1145/3664620</a>
Alternative languages
Result language
English
Original language name
Multi Task Learning Based Shallow Parsing for Indian Languages
Original language description
Shallow parsing is an important step in many natural language processing tasks. Although shallow parsing has a rich history for resource-rich languages, the same is not true for most Indian languages. Shallow parsing consists of POS tagging and chunking; our study focuses on developing shallow parsers for Indian languages, and we include morphological analysis as part of shallow parsing as well. For the study, we first consolidated the available shallow parsing corpora for seven Indian languages (Hindi, Kannada, Bangla, Malayalam, Marathi, Urdu, Telugu) for which treebanks are publicly available. We then trained models that achieve state-of-the-art shallow parsing performance for these languages across multiple domains. Since analyzing model predictions at the sentence level is more realistic, we report the performance of these shallow parsers not only at the token level but also at the sentence level. We also present machine learning techniques for multi-task shallow parsing. Our experiments show that fine-tuned contextual embeddings with multi-task learning improve the performance of both joint and individual shallow parsing tasks across different domains. We demonstrate the transfer learning capability of these models by creating shallow parsers (with POS and chunk only) for Gujarati, Odia, and Punjabi, for which no treebanks are available. As part of this work, we will release the Indian Languages Shallow Linguistic (ILSL) benchmarks for 10 Indian languages, covering both major language families, Indo-Aryan and Dravidian, as common building blocks that can be used to evaluate and understand the various linguistic phenomena found in Indian languages and how well newer approaches can tackle them.
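To illustrate the multi-task setup described in the abstract, the following is a minimal sketch, not the authors' code: a shared sentence encoder feeds separate classification heads for POS tagging and chunking, and the two losses are summed during training. The paper fine-tunes contextual embeddings; here the encoder is simplified to an embedding layer plus BiLSTM, and all vocabulary and tag-set sizes are invented for the example.

import torch
import torch.nn as nn

class MultiTaskShallowParser(nn.Module):
    """Shared encoder with one head per shallow-parsing task (sketch only)."""
    def __init__(self, vocab_size=5000, emb_dim=128, hidden=256,
                 n_pos_tags=30, n_chunk_tags=15):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)            # shared encoder
        self.pos_head = nn.Linear(2 * hidden, n_pos_tags)     # task 1: POS tags
        self.chunk_head = nn.Linear(2 * hidden, n_chunk_tags) # task 2: chunk tags

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.pos_head(states), self.chunk_head(states)

model = MultiTaskShallowParser()
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: 2 sentences of 6 tokens with made-up gold POS and chunk labels.
tokens = torch.randint(0, 5000, (2, 6))
gold_pos = torch.randint(0, 30, (2, 6))
gold_chunk = torch.randint(0, 15, (2, 6))

pos_logits, chunk_logits = model(tokens)
# Multi-task objective: sum of per-task token-level cross-entropy losses.
loss = (loss_fn(pos_logits.reshape(-1, 30), gold_pos.reshape(-1)) +
        loss_fn(chunk_logits.reshape(-1, 15), gold_chunk.reshape(-1)))
loss.backward()
optim.step()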
Czech name
—
Czech description
—
Classification
Type
JSC - Article in a specialist periodical, which is included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2024
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
ACM Transactions on Asian and Low-Resource Language Information Processing
ISSN
2375-4699
e-ISSN
—
Volume of the periodical
23
Issue of the periodical within the volume
9
Country of publishing house
US - UNITED STATES
Number of pages
18
Pages from-to
1-18
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85204884735