Comparison of text preprocessing methods
Result identifiers
Result code in the IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3AHXXS9Z4V" target="_blank" >RIV/00216208:11320/22:HXXS9Z4V - isvavai.cz</a>
Alternative codes found
RIV/00216208:11320/23:PHNVYRMF
Result on the web
<a href="http://www.cambridge.org/core/journals/natural-language-engineering/article/comparison-of-text-preprocessing-methods/43A20821D65F1C0C4366B126FC794AE3" target="_blank" >http://www.cambridge.org/core/journals/natural-language-engineering/article/comparison-of-text-preprocessing-methods/43A20821D65F1C0C4366B126FC794AE3</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1017/S1351324922000213" target="_blank" >10.1017/S1351324922000213</a>
Alternative languages
Result language
English
Title in the original language
Comparison of text preprocessing methods
Result description in the original language
Text preprocessing is not only an essential step to prepare the corpus for modeling but also a key area that directly affects the natural language processing (NLP) application results. For instance, precise tokenization increases the accuracy of part-of-speech (POS) tagging, and retaining multiword expressions improves reasoning and machine translation. The text corpus needs to be appropriately preprocessed before it is ready to serve as the input to computer models. The preprocessing requirements depend on both the nature of the corpus and the NLP application itself, that is, what researchers would like to achieve from analyzing the data. Conventional text preprocessing practices generally suffice, but there exist situations where the text preprocessing needs to be customized for better analysis results. Hence, we discuss the pros and cons of several common text preprocessing methods: removing formatting, tokenization, text normalization, handling punctuation, removing stopwords, stemming and lemmatization, n-gramming, and identifying multiword expressions. Then, we provide examples of text datasets which require special preprocessing and how previous researchers handled the challenge. We expect this article to be a starting guideline on how to select and fine-tune text preprocessing methods.
Title in English
Comparison of text preprocessing methods
Result description in English
Text preprocessing is not only an essential step to prepare the corpus for modeling but also a key area that directly affects the natural language processing (NLP) application results. For instance, precise tokenization increases the accuracy of part-of-speech (POS) tagging, and retaining multiword expressions improves reasoning and machine translation. The text corpus needs to be appropriately preprocessed before it is ready to serve as the input to computer models. The preprocessing requirements depend on both the nature of the corpus and the NLP application itself, that is, what researchers would like to achieve from analyzing the data. Conventional text preprocessing practices generally suffice, but there exist situations where the text preprocessing needs to be customized for better analysis results. Hence, we discuss the pros and cons of several common text preprocessing methods: removing formatting, tokenization, text normalization, handling punctuation, removing stopwords, stemming and lemmatization, n-gramming, and identifying multiword expressions. Then, we provide examples of text datasets which require special preprocessing and how previous researchers handled the challenge. We expect this article to be a starting guideline on how to select and fine-tune text preprocessing methods.
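The abstract above enumerates the standard preprocessing steps the article compares. As a purely illustrative sketch (not taken from the article, and assuming plain English input), the Python snippet below composes several of those steps: formatting removal, case normalization, tokenization, stopword removal, a naive stand-in for stemming, and n-gramming. The tiny stopword list and the suffix-stripping rule are placeholders only; a real pipeline would use an established stemmer or lemmatizer and a full stopword list.

import re

# Illustrative-only stopword list; real pipelines use fuller lists (e.g. from NLTK or spaCy).
STOPWORDS = {"the", "a", "an", "of", "and", "is", "to", "in", "that", "it", "for"}

def naive_stem(token):
    # Placeholder for stemming/lemmatization: strips a few common suffixes.
    # A real pipeline would use, e.g., the Porter stemmer or a lemmatizer instead.
    for suffix in ("ization", "ations", "ation", "ing", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text, n=2):
    # 1. Remove formatting left over from the raw source (here, HTML tags).
    text = re.sub(r"<[^>]+>", " ", text)
    # 2. Normalize case.
    text = text.lower()
    # 3. Tokenize: keep alphanumeric runs, which also discards punctuation.
    tokens = re.findall(r"[a-z0-9]+", text)
    # 4. Remove stopwords.
    tokens = [t for t in tokens if t not in STOPWORDS]
    # 5. Stem each remaining token (placeholder rule, see above).
    tokens = [naive_stem(t) for t in tokens]
    # 6. Build n-grams (bigrams by default) over the remaining tokens.
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return tokens, grams

tokens, bigrams = preprocess("Precise <b>tokenization</b> increases the accuracy of POS tagging.")
print(tokens)   # e.g. ['precise', 'token', 'increas', 'accuracy', 'pos', 'tagg']
print(bigrams)

The article itself weighs when each of these steps helps or hurts a given NLP application; the sketch only shows how they compose into a pipeline.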
Classification
Type
Jimp - Article in a journal indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
—
Other
Year of implementation
2022
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
Natural Language Engineering
ISSN
1351-3249
e-ISSN
1469-8110
Journal volume
28
Journal issue within the volume
2022-6-13
Country of the journal publisher
GB - United Kingdom of Great Britain and Northern Ireland
Number of pages of the result
45
Pages from-to
1-45
UT WoS code of the article
000809676200001
EID of the result in the Scopus database
—