Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F22%3APB6DUZ9M" target="_blank" >RIV/00216208:11320/22:PB6DUZ9M - isvavai.cz</a>
Result on the web
<a href="https://journals.openedition.org/lexis/6231" target="_blank" >https://journals.openedition.org/lexis/6231</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.4000/lexis.6231" target="_blank" >10.4000/lexis.6231</a>
Alternative languages
Result language
English
Original language name
Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach
Original language description
It has been argued that most of corpus linguistics involves one of four fundamental methods: frequency lists, dispersion, collocation, and concordancing. All of these presuppose (if only implicitly) the definition of a unit: the element whose frequency in a corpus, in corpus parts, or around a search word is counted (or quantified in other ways). Usually, and with most corpus-processing tools, a unit is an orthographic word. However, this is obviously a simplifying assumption born of convenience: it seems more intuitive to treat <i>because of</i> or <i>in spite of</i> as one unit each rather than as two or three. Some work in computational linguistics has developed multi-word unit (MWU) identification algorithms, which typically involve co-occurrence token frequencies and association measures (AMs), but these have not become widespread in corpus-linguistic practice, even though recognizing MWUs like the above would profoundly affect just about all corpus statistics that involve (simplistic notions of) words/units. In this programmatic proof-of-concept paper, I introduce and exemplify an algorithm for identifying MWUs that goes beyond frequency and bidirectional association by also involving several well-known but underutilized dimensions of corpus-linguistic information: frequency (how often does a potential unit like in_spite_of occur?), dispersion (how widespread is the use of a potential unit?), association (how strongly attracted are the parts of a potential unit?), and entropy (how variable is each slot in a potential unit?). The proposed algorithm can use all these dimensions and weight them differently. I will (i) present the algorithm in detail, (ii) exemplify its application to the Brown corpus, (iii) discuss its results on the basis of several kinds of MWUs it returns, and (iv) discuss next analytical steps.
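To make the four dimensions concrete, here is a minimal Python sketch, not the paper's actual implementation, that scores bigram candidates on all four. Everything in it is an illustrative assumption: dispersion is reduced to plain range (the share of corpus parts a bigram occurs in), association to pointwise mutual information rather than the bidirectional measure the abstract alludes to, and entropy is computed for the slot after the first word only.

```python
# Illustrative sketch only: scores bigram candidates on the four dimensions
# named in the abstract (frequency, dispersion, association, entropy).
import math
from collections import Counter

def score_bigrams(parts):
    """parts: list of token lists, one per corpus part (e.g. one per file)."""
    n_parts = len(parts)
    uni = Counter(t for p in parts for t in p)   # unigram frequencies
    n = sum(uni.values())                        # corpus size in tokens
    bi, seen = Counter(), {}
    for i, p in enumerate(parts):
        for pair in zip(p, p[1:]):               # bigrams within one part
            bi[pair] += 1
            seen.setdefault(pair, set()).add(i)  # parts the bigram occurs in
    right = {}                                   # fillers of the slot after w1
    for (w1, w2), f in bi.items():
        right.setdefault(w1, Counter())[w2] += f
    scores = {}
    for (w1, w2), f in bi.items():
        total = sum(right[w1].values())
        ent = -sum((c / total) * math.log2(c / total)
                   for c in right[w1].values())
        scores[(w1, w2)] = {
            "frequency": f,                              # how often?
            "dispersion": len(seen[(w1, w2)]) / n_parts, # how widespread?
            "association": math.log2(f * n / (uni[w1] * uni[w2])),  # PMI
            "entropy": ent,   # low entropy = predictable slot = more unit-like
        }
    return scores

parts = ["in spite of the rain we went out".split(),
         "he stayed in spite of the noise".split()]
for pair, s in score_bigrams(parts).items():
    print(pair, s)
```

An application along the lines the abstract describes would additionally extend this to longer sequences, weight the four dimensions against each other, and use the resulting scores to retokenize the corpus.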
Czech name
—
Czech description
—
Classification
Type
J<sub>SC</sub> - Article in a specialist periodical, which is included in the SCOPUS database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2022
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
Lexis - Journal in English Lexicology [online]
ISSN
1951-6215
e-ISSN
1951-6215
Volume of the periodical
—
Issue of the periodical within the volume
19
Country of publishing house
FR - FRANCE
Number of pages
23
Pages from-to
1-23
UT code for WoS article
—
EID of the result in the Scopus database
2-s2.0-85128430203