Low-rank and global-representation-key-based attention for graph transformer

Result identifiers

  • Result code in IS VaVaI

    RIV/61989100:27240/23:10253783 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27240%2F23%3A10253783)

  • Result on the web

    https://www.sciencedirect.com/science/article/pii/S002002552300693X?via%3Dihub

  • DOI - Digital Object Identifier

    10.1016/j.ins.2023.119108 (http://dx.doi.org/10.1016/j.ins.2023.119108)

Alternative languages

  • Result language

    English

  • Title in the original language

    Low-rank and global-representation-key-based attention for graph transformer

  • Result description in the original language

    Transformer architectures have been applied to graph-specific data such as protein structures and shopper lists, and they perform accurately on graph/node classification and prediction tasks. Researchers have proved that the attention matrix in Transformers has low-rank properties and that self-attention plays a scoring role in the aggregation function of Transformers. However, self-attention alone cannot solve issues such as heterophily and over-smoothing. The low-rank properties and these limitations of Transformers inspire this work to propose a Global Representation (GR) based attention mechanism that alleviates both heterophily and over-smoothing. First, the GR-based model integrates geometric information of the nodes of interest that conveys the structural properties of the graph. Unlike a typical Transformer, where a node feature forms the Key, we propose to construct the Key from the GR, which uncovers the relation between the nodes and the structural representation of the graph. Next, we present various compositions of the GR emanating from the nodes of interest and their alpha-hop neighbors. Then, we explore this attention property with extensive experimental tests to assess the performance and possible directions of improvement for future work. Additionally, we provide a mathematical proof showing that the feature update in our proposed method is efficient. Finally, we verify and validate the performance of the model on eight benchmark datasets, demonstrating the effectiveness of the proposed method.

  • Title in English

    Low-rank and global-representation-key-based attention for graph transformer

  • Result description in English

    Transformer architectures have been applied to graph-specific data such as protein structures and shopper lists, and they perform accurately on graph/node classification and prediction tasks. Researchers have proved that the attention matrix in Transformers has low-rank properties and that self-attention plays a scoring role in the aggregation function of Transformers. However, self-attention alone cannot solve issues such as heterophily and over-smoothing. The low-rank properties and these limitations of Transformers inspire this work to propose a Global Representation (GR) based attention mechanism that alleviates both heterophily and over-smoothing. First, the GR-based model integrates geometric information of the nodes of interest that conveys the structural properties of the graph. Unlike a typical Transformer, where a node feature forms the Key, we propose to construct the Key from the GR, which uncovers the relation between the nodes and the structural representation of the graph. Next, we present various compositions of the GR emanating from the nodes of interest and their alpha-hop neighbors. Then, we explore this attention property with extensive experimental tests to assess the performance and possible directions of improvement for future work. Additionally, we provide a mathematical proof showing that the feature update in our proposed method is efficient. Finally, we verify and validate the performance of the model on eight benchmark datasets, demonstrating the effectiveness of the proposed method.
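
The description above outlines the core mechanism: attention in which the Key is derived from a Global Representation (GR) aggregated over alpha-hop neighbors, rather than from raw node features. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' implementation; the mean-over-neighbors aggregator, the single attention head, and all names and shapes are assumptions made only for this example.

    import torch
    import torch.nn.functional as F


    def alpha_hop_gr(x: torch.Tensor, adj: torch.Tensor, alpha: int) -> torch.Tensor:
        """Aggregate node features over alpha hops to form a per-node
        global-representation vector (assumed here: repeated neighbor mean)."""
        # Row-normalize the adjacency so each hop averages neighbor features.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        p = adj / deg
        gr = x
        for _ in range(alpha):
            gr = p @ gr
        return gr


    class GRKeyAttention(torch.nn.Module):
        """Single-head attention whose Key comes from the GR, not node features."""

        def __init__(self, dim: int, alpha: int = 2):
            super().__init__()
            self.alpha = alpha
            self.w_q = torch.nn.Linear(dim, dim)
            self.w_k = torch.nn.Linear(dim, dim)  # applied to the GR, not to x
            self.w_v = torch.nn.Linear(dim, dim)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            gr = alpha_hop_gr(x, adj, self.alpha)  # structural summary of the graph
            q = self.w_q(x)                        # Query from node features
            k = self.w_k(gr)                       # Key from the GR
            v = self.w_v(x)
            scores = (q @ k.transpose(-2, -1)) / (x.size(-1) ** 0.5)
            return F.softmax(scores, dim=-1) @ v


    # Usage: 5 nodes with 8-dim features and a random undirected adjacency.
    x = torch.randn(5, 8)
    adj = (torch.rand(5, 5) > 0.5).float()
    adj = ((adj + adj.T) > 0).float()
    out = GRKeyAttention(dim=8)(x, adj)
    print(out.shape)  # torch.Size([5, 8])

Relating Queries (node features) to Keys built from the GR scores each node against a structural summary of its alpha-hop neighborhood, which is how the paper motivates mitigating heterophily and over-smoothing.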

Classification

  • Type

    Jimp - Article in a periodical indexed in the Web of Science database

  • CEP field

  • OECD FORD field

    10200 - Computer and information sciences

Result linkages

  • Project

  • Linkages

    S - Specific research at universities

Other

  • Year of application

    2023

  • Data confidentiality code

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific to the result type

  • Periodical name

    Information Sciences

  • ISSN

    0020-0255

  • e-ISSN

    1872-6291

  • Periodical volume

    642

  • Issue number within the volume

    September 2023

  • Publisher's country

    US - United States of America

  • Number of pages

    17

  • Pages from-to

  • UT WoS code of the article

    000998393300001

  • Result EID in the Scopus database