Bao, L., Jin, E., Bronstein, M. M., Ceylan, I. I., & Lanzinger, M. P. (2025). Homomorphism Counts as Structural Encodings for Graph Learning. In The Thirteenth International Conference on Learning Representations: ICLR 2025 (pp. 1–29).
E192-02 - Research Unit Databases and Artificial Intelligence
-
Published in:
The Thirteenth International Conference on Learning Representations: ICLR 2025
-
Date (published):
2025
-
Event name:
Thirteenth International Conference on Learning Representations
Event date:
24-Apr-2025 - 28-Apr-2025
-
Event place:
Singapore
-
Number of Pages:
29
-
Peer reviewed:
Yes
-
Keywords:
graph transformers; graph learning; positional encodings; deep learning
Abstract:
Graph Transformers are popular neural networks that extend the well-known Transformer architecture to the graph domain. These architectures operate by applying self-attention on graph nodes and incorporating graph structure through the use of positional encodings (e.g., Laplacian positional encoding) or structural encodings (e.g., random-walk structural encoding). The quality of such encodings is critical, since they provide the necessary graph inductive biases to condition the model on graph structure. In this work, we propose motif structural encoding (MoSE) as a flexible and powerful structural encoding framework based on counting graph homomorphisms. Theoretically, we compare the expressive power of MoSE to random-walk structural encoding and relate both encodings to the expressive power of standard message passing neural networks. Empirically, we observe that MoSE outperforms other well-known positional and structural encodings across a range of architectures, and it achieves state-of-the-art performance on a widely studied molecular property prediction dataset.
https://openreview.net/forum?id=qFw2RFJS5g
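A minimal, self-contained sketch (not the authors' implementation) of the two kinds of structural encodings the abstract contrasts: per-node homomorphism counts of a small motif, and a random-walk structural encoding built from return probabilities. The toy graph, the triangle pattern, the root convention, and the brute-force enumeration are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: motif homomorphism counts vs. random-walk
# structural encoding as per-node features. Pattern, root, and toy graph
# are demonstration assumptions, not choices from the paper.
from itertools import product

import networkx as nx
import numpy as np


def rooted_hom_counts(G: nx.Graph, pattern: nx.Graph, root) -> dict:
    """Count homomorphisms pattern -> G that map the pattern node `root`
    to each node of G, by brute-force enumeration (fine for tiny patterns)."""
    others = [u for u in pattern.nodes if u != root]
    counts = {v: 0 for v in G.nodes}
    for v in G.nodes:
        for assignment in product(G.nodes, repeat=len(others)):
            phi = dict(zip(others, assignment))
            phi[root] = v
            # A valid homomorphism maps every pattern edge to an edge of G.
            if all(G.has_edge(phi[a], phi[b]) for a, b in pattern.edges):
                counts[v] += 1
    return counts


def rwse(G: nx.Graph, k: int = 4) -> np.ndarray:
    """Random-walk structural encoding: diagonals of the first k powers
    of the random-walk matrix D^{-1} A (k-step return probabilities)."""
    A = nx.to_numpy_array(G)
    T = A / A.sum(axis=1, keepdims=True)
    P = np.eye(len(G))
    feats = []
    for _ in range(k):
        P = P @ T
        feats.append(np.diag(P))
    return np.stack(feats, axis=1)  # shape: (num_nodes, k)


if __name__ == "__main__":
    G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])  # triangle plus a pendant node
    triangle = nx.cycle_graph(3)
    print("rooted triangle hom counts:", rooted_hom_counts(G, triangle, root=0))
    print("RWSE features:\n", rwse(G, k=3))
```

On the toy graph, the three triangle nodes each receive a nonzero rooted triangle count while the pendant node receives zero, whereas the random-walk encoding assigns features to every node based on its return probabilities; this is the kind of structural signal the encodings feed into a graph Transformer.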
-
Project title:
Decompose and Conquer: Fast Query Processing via Decomposition: ICT22-011 (WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds)