Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Result identifiers
Result code in IS VaVaI
RIV/00216208:11320/25:YBRRMRQA (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3AYBRRMRQA)
Result on the web
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204459093&partnerID=40&md5=60c1c0c5d1df214472490765afd3fade
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Description in the original language
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity. We observe that when the training set is sufficiently complex, the model encodes structurally equivalent sentences using a systematic attention pattern. Inspired by this observation, we propose SQ-Transformer (Structurally Quantized) that explicitly encourages systematicity in the embeddings and attention layers even with low-complexity data. At the embedding level, we introduce Structure-oriented Vector Quantization (SoVQ) to cluster word embeddings into several classes of structurally equivalent entities. At the attention level, we devise the Systematic Attention Layer (SAL) and an alternative, Systematically Regularized Layer (SRL) that operate on the quantized word embeddings so that sentences of the same structure are encoded with invariant or similar attention patterns. Empirically, we show SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets. In our analysis, we show SoVQ indeed learns a syntactically clustered embedding space, and SAL/SRL induces generalizable attention patterns, altogether leading to improved systematicity.
Title in English
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Description in English
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity. We observe that when the training set is sufficiently complex, the model encodes structurally equivalent sentences using a systematic attention pattern. Inspired by this observation, we propose SQ-Transformer (Structurally Quantized) that explicitly encourages systematicity in the embeddings and attention layers even with low-complexity data. At the embedding level, we introduce Structure-oriented Vector Quantization (SoVQ) to cluster word embeddings into several classes of structurally equivalent entities. At the attention level, we devise the Systematic Attention Layer (SAL) and an alternative, Systematically Regularized Layer (SRL) that operate on the quantized word embeddings so that sentences of the same structure are encoded with invariant or similar attention patterns. Empirically, we show SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets. In our analysis, we show SoVQ indeed learns a syntactically clustered embedding space, and SAL/SRL induces generalizable attention patterns, altogether leading to improved systematicity.
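As a rough illustration of the two mechanisms the abstract describes, the sketch below shows plain vector quantization of word embeddings with a straight-through gradient, and attention weights computed from the quantized (class-level) embeddings so that sentences whose words map to the same class sequence receive identical attention patterns. This is a minimal PyTorch sketch under simplifying assumptions, not the paper's implementation: the names VectorQuantizer and structure_invariant_attention are hypothetical, and the actual SoVQ objective and SAL/SRL layers are more involved.

    import torch
    import torch.nn.functional as F

    class VectorQuantizer(torch.nn.Module):
        # Hypothetical stand-in for SoVQ: plain nearest-neighbour vector
        # quantization; the paper's structure-oriented objective is omitted.
        def __init__(self, num_classes: int, dim: int):
            super().__init__()
            self.codebook = torch.nn.Parameter(torch.randn(num_classes, dim))

        def forward(self, emb: torch.Tensor) -> torch.Tensor:
            # emb: (batch, seq, dim) word embeddings
            dists = (emb.unsqueeze(-2) - self.codebook).pow(2).sum(-1)
            idx = dists.argmin(dim=-1)                   # class id per token
            quantized = F.embedding(idx, self.codebook)  # class vector per token
            # Straight-through estimator: the forward pass uses the class
            # vector, gradients flow to the continuous embedding.
            return emb + (quantized - emb).detach()

    def structure_invariant_attention(emb, vq):
        # Toy analogue of the Systematic Attention Layer: queries and keys
        # come from the quantized (class-level) embeddings, so two sentences
        # with the same class sequence get the same attention pattern, while
        # values keep the word-level content.
        q = k = vq(emb)
        scores = q @ k.transpose(-2, -1) / emb.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)
        return attn @ emb

    vq = VectorQuantizer(num_classes=8, dim=16)
    x = torch.randn(2, 5, 16)                   # two sentences, five tokens each
    out = structure_invariant_attention(x, vq)  # (2, 5, 16)

In this toy setup, two inputs whose tokens quantize to the same class sequence produce identical attention weights, which is the invariance property the abstract refers to.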
Classification
Type
D - Proceedings paper
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article name in the proceedings
Proc. Annu. Meet. Assoc. Comput. Linguist.
ISBN
979-8-89176-094-3
ISSN
0736-587X
e-ISSN
—
Number of pages
24
Pages from-to
8360-8383
Publisher name
Association for Computational Linguistics (ACL)
Place of publication
—
Event venue
Bangkok
Event date
August 11-16, 2024
Event type by nationality
WRD - Worldwide event
UT WoS article code
—