A deep learning approach to building a framework for Urdu POS and NER
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AS6XVG72I" target="_blank" >RIV/00216208:11320/23:S6XVG72I - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85148060826&doi=10.3233%2fJIFS-211275&partnerID=40&md5=d3e6bc04e5767e2756639ec499867882" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85148060826&doi=10.3233%2fJIFS-211275&partnerID=40&md5=d3e6bc04e5767e2756639ec499867882</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.3233/jifs-211275" target="_blank" >10.3233/jifs-211275</a>
Alternative languages
Result language
English
Original language name
A deep learning approach to building a framework for Urdu POS and NER
Original language description
"The study examines various studies on Named Entity Recognition (NER) and Part of Speech (POS) tagging for the Urdu language conducted by academics and researchers. POS and NER tagging for Urdu still faces obstacles in terms of increasing accuracy while lowering false-positive rates and labelling unknown terms, despite the efforts of numerous researchers. In addition, ambiguity exists when tagging terms with different contextual meanings within a sentence. Due to the fact that Urdu is an inflectional, derivational, morphologically rich, and context-sensitive language, the existing models, such as Linguistic rule application, N-gram Markov model, Tree Tagger, random forest (RF) tagger, etc., were unable to produce accurate experimental results on Urdu language data. The significance of this study is that it fills a gap in the literature concerning the lack of POS and NER tagging for the Urdu language. For Urdu POS and NER tagging, we propose a deep learning model with a well-balanced set of language-independent features as well as a survey of important Urdu POS/NER techniques. In addition, this is the first study to use residual biDirectional residual Long short-term memory (residual biLSTM) architecture trained on the Urmono dataset in conjunction with the randomly initialised word2vec, fastText and mBERT embeddings are utilised to generate word or character vectors.For each experiment, the paper also employs the evaluation methods of Macro-F1, precision, precision, and recall. The proposed method with mbert embedding as word vectors provides best results of F1 score for POS and NER at 91.11% and 99.11% respectively. Also, the accuracy, precision and recall for POS are reported at 94.85%, 91.79% and 90.77%. Similarly, the accuracy, precision and recall for NER of the proposed model are reported at 99.77%, 98.78% and 99.45% respectively, which are higher than baseline models. © 2023 - IOS Press. All rights reserved."
Czech name
—
Czech description
—
Classification
Type
J<sub>SC</sub> - Article in a specialist periodical, which is included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2023
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
"Journal of Intelligent and Fuzzy Systems"
ISSN
1064-1246
e-ISSN
—
Volume of the periodical
44
Issue of the periodical within the volume
2
Country of publishing house
US - UNITED STATES
Number of pages
11
Pages from-to
3341-3351
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85148060826