Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AYBRRMRQA" target="_blank" >RIV/00216208:11320/25:YBRRMRQA - isvavai.cz</a>
Result on the web
<a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204459093&partnerID=40&md5=60c1c0c5d1df214472490765afd3fade" target="_blank" >https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204459093&partnerID=40&md5=60c1c0c5d1df214472490765afd3fade</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Original language description
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity. We observe that when the training set is sufficiently complex, the model encodes structurally equivalent sentences using a systematic attention pattern. Inspired by this observation, we propose SQ-Transformer (Structurally Quantized) that explicitly encourages systematicity in the embeddings and attention layers even with low-complexity data. At the embedding level, we introduce Structure-oriented Vector Quantization (SoVQ) to cluster word embeddings into several classes of structurally equivalent entities. At the attention level, we devise the Systematic Attention Layer (SAL) and an alternative, Systematically Regularized Layer (SRL) that operate on the quantized word embeddings so that sentences of the same structure are encoded with invariant or similar attention patterns. Empirically, we show SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets. In our analysis, we show SoVQ indeed learns a syntactically clustered embedding space, and SAL/SRL induces generalizable attention patterns, altogether leading to improved systematicity. © 2024 Association for Computational Linguistics.
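The abstract describes two pieces: a quantizer that groups word embeddings into structural classes, and attention layers that operate on those quantized embeddings so that structurally equivalent sentences receive similar attention patterns. The Python sketch below illustrates that general idea only; it is not the paper's SoVQ/SAL implementation, and the nearest-neighbour codebook, straight-through estimator, single attention head, and all sizes are illustrative assumptions.

# Minimal sketch (not the authors' code): word embeddings are snapped to a small
# codebook of "structural class" vectors, and the attention pattern is computed from
# the quantized (class-level) embeddings while values still carry the original
# lexical embeddings. Codebook size, dimensions, and the straight-through trick
# are illustrative assumptions.
import torch
import torch.nn.functional as F


def quantize(embeddings, codebook):
    """Assign each word embedding to its nearest codebook entry (structural class)."""
    dists = torch.cdist(embeddings, codebook)      # (seq_len, num_classes)
    codes = dists.argmin(dim=-1)                   # class index per token
    quantized = codebook[codes]                    # (seq_len, d)
    # Straight-through estimator: the forward pass uses the code vector,
    # gradients flow back to the continuous embedding.
    return embeddings + (quantized - embeddings).detach(), codes


def class_conditioned_attention(embeddings, quantized, W_q, W_k, W_v):
    """Attention whose pattern depends only on the structural classes of the tokens."""
    q = quantized @ W_q                            # queries/keys from class vectors
    k = quantized @ W_k
    v = embeddings @ W_v                           # values keep word-level content
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v


torch.manual_seed(0)
d, num_classes, seq_len = 16, 4, 5
W_q, W_k, W_v = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
codebook = torch.randn(num_classes, d)
tokens = torch.randn(seq_len, d)                   # stand-in word embeddings

quantized, codes = quantize(tokens, codebook)
out = class_conditioned_attention(tokens, quantized, W_q, W_k, W_v)
print(codes)        # structural class assigned to each token
print(out.shape)    # torch.Size([5, 16])

Because queries and keys are built from the class vectors rather than the individual word embeddings, two sentences whose tokens map to the same class sequence produce the same attention pattern, which is the invariance the abstract attributes to SAL.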
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2024
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proc. Annu. Meet. Assoc. Comput. Linguist.
ISBN
979-889176094-3
ISSN
0736-587X
e-ISSN
—
Number of pages
24
Pages from-to
8360-8383
Publisher name
Association for Computational Linguistics (ACL)
Place of publication
—
Event location
Bangkok
Event date
Aug 11, 2024 - Aug 16, 2024
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—