Language Model Based Unsupervised Dependency Parsing with Conditional Mutual Information and Grammatical Constraints
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3ABI2WUB95" target="_blank" >RIV/00216208:11320/25:BI2WUB95 - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85199510908&partnerID=40&md5=9842e4d36bf60c0cab8583f6264231b6" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85199510908&partnerID=40&md5=9842e4d36bf60c0cab8583f6264231b6</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Language Model Based Unsupervised Dependency Parsing with Conditional Mutual Information and Grammatical Constraints
Original language description
Previous methods based on Large Language Models (LLMs) perform unsupervised dependency parsing by maximizing bi-lexical dependence scores. However, these methods adopt dependence scores that are difficult to interpret, and they cannot incorporate grammatical constraints that previous grammar-based parsing research has shown to be beneficial for improving parsing performance. In this work, we apply Conditional Mutual Information (CMI), an interpretable metric, to measure bi-lexical dependence and incorporate grammatical constraints into LLM-based unsupervised parsing. We incorporate Part-Of-Speech information as a grammatical constraint at the CMI estimation stage and integrate two additional grammatical constraints at the subsequent tree decoding stage. We find that the CMI score correlates positively with syntactic dependencies and has a stronger correlation with syntactic dependency than baseline scores. Our experiments confirm the effectiveness and applicability of the proposed grammatical constraints across five languages and eight datasets. The CMI parsing model outperforms state-of-the-art LLM-based models and similarly constrained grammar-based models. Our analysis reveals that the CMI model is strong at retrieving dependency relations with rich lexical interactions but weak at retrieving relations with sparse lexical interactions, indicating a potential limitation of CMI-based unsupervised parsing methods. © 2024 Association for Computational Linguistics.
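To make the described pipeline more concrete, the following is a minimal Python sketch of the general idea: estimate a pointwise CMI-style bi-lexical dependence score from a pretrained language model, apply a Part-Of-Speech constraint to the candidate arcs, and decode a dependency tree. It is not the authors' implementation; the choice of a masked LM (bert-base-cased) as a stand-in for the paper's LLM, the particular pointwise CMI estimator, the content-word head constraint, and the maximum-spanning-arborescence decoder are all illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: CMI-style arc scoring with a masked LM, a POS
# constraint on candidate heads, and spanning-arborescence tree decoding.
# All modelling choices here are assumptions, not the paper's exact method.
import torch
import networkx as nx
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-cased"  # illustrative stand-in for the paper's LLM
tok = AutoTokenizer.from_pretrained(MODEL)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL).eval()


def log_p_word(words, target_idx, masked_extra=()):
    """log p(words[target_idx] | sentence with target (and masked_extra) masked)."""
    toks = list(words)
    for i in (target_idx, *masked_extra):
        toks[i] = tok.mask_token
    enc = tok(" ".join(toks), return_tensors="pt")
    # Each masked word contributes exactly one [MASK] token, so mask slots
    # appear in the same order as the masked word indices.
    mask_positions = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    slot = mask_positions[sorted((target_idx, *masked_extra)).index(target_idx)]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, slot]
    # Approximate multi-subword targets by their first subword id.
    target_id = tok(words[target_idx], add_special_tokens=False).input_ids[0]
    return torch.log_softmax(logits, dim=-1)[target_id].item()


def cmi_score(words, head, dep):
    """Pointwise CMI-style score: how much seeing the head helps predict the dependent."""
    return log_p_word(words, dep) - log_p_word(words, dep, masked_extra=(head,))


def decode_tree(words, pos_tags, content_pos=frozenset({"NOUN", "VERB", "ADJ", "PROPN"})):
    """Score POS-constrained candidate arcs and decode a dependency tree."""
    G = nx.DiGraph()
    n = len(words)
    for d in range(n):
        for h in range(n):
            if h == d:
                continue
            # Illustrative grammatical constraint: only content words may head an arc.
            if pos_tags[h] not in content_pos:
                continue
            G.add_edge(h + 1, d + 1, weight=cmi_score(words, h, d))
        # Every token may also attach to the artificial ROOT (node 0).
        G.add_edge(0, d + 1, weight=0.0)
    tree = nx.maximum_spanning_arborescence(G, attr="weight")
    return sorted(tree.edges)


if __name__ == "__main__":
    words = ["The", "dog", "chased", "the", "cat"]
    pos = ["DET", "NOUN", "VERB", "DET", "NOUN"]
    print(decode_tree(words, pos))  # (head, dependent) pairs; 0 denotes ROOT
```

In this sketch the score contrasts the model's log-probability of the dependent with the head visible against the same probability with the head masked out, which is one simple way to read "conditional mutual information given the rest of the sentence"; the paper's estimator and its two additional decoding-stage constraints are not reproduced here.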
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2024
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)
ISBN
979-889176114-8
ISSN
—
e-ISSN
—
Number of pages
12
Pages from-to
6355-6366
Publisher name
Association for Computational Linguistics (ACL)
Place of publication
—
Event location
Mexico City
Event date
Jan 1, 2025
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—