Organisational Units:
E192-04 - Research Unit of Formal Methods in Systems Engineering
E194-06 - Research Unit of Machine Learning
E056-10 - Research Area SecInt - Secure and Intelligent Human-Centric Digital Technologies
E056-23 - Research Area Innovative Combinations and Applications of AI and ML (iCAIML)
Published in:
ICML 2024 Workshop on Mechanistic Interpretability
Date (published):
24-Jun-2024
Event name:
ICML 2024 Workshop on Mechanistic Interpretability
Event date:
27-Jul-2024
Event place:
Vienna, Austria
Number of Pages:
12
Peer reviewed:
Yes
Keywords:
Graph Neural Networks; C2; First-Order Logic; Model Distillation
Abstract:
We distill a symbolic model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We use decision trees to represent formulas in an extension of C2 and present an algorithm to distill such decision trees from a given GNN model. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain similar accuracy to the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.
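The abstract describes distilling a decision tree over C2-expressible features from a trained GNN. As a purely illustrative aid, the sketch below shows the general distillation idea under assumed toy data: it fits a scikit-learn decision tree on hand-built neighbour-count features, with a hypothetical gnn_predict function standing in for a trained GNN. It is not the paper's algorithm, and all names, features, and data are assumptions.

# Minimal sketch (not the paper's algorithm): fit a decision tree that
# mimics a GNN's node-level predictions, using C2-style counting features
# (a node's colour and the number of neighbours of each colour).
# "gnn_predict" is a hypothetical stand-in for a trained GNN.
import numpy as np
import networkx as nx
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy graphs with one binary node colour (assumed data).
graphs = [nx.gnp_random_graph(20, 0.2, seed=i) for i in range(30)]
for g in graphs:
    for v in g.nodes:
        g.nodes[v]["colour"] = int(rng.integers(0, 2))

def c2_style_features(g, v):
    # Features a C2 formula can refer to: the node's own colour and
    # the count of neighbours with each colour.
    nbrs = list(g.neighbors(v))
    n0 = sum(1 for u in nbrs if g.nodes[u]["colour"] == 0)
    n1 = sum(1 for u in nbrs if g.nodes[u]["colour"] == 1)
    return [g.nodes[v]["colour"], n0, n1]

def gnn_predict(g, v):
    # Hypothetical stand-in for a trained GNN's node prediction:
    # "the node has at least two neighbours of colour 1".
    return int(sum(g.nodes[u]["colour"] for u in g.neighbors(v)) >= 2)

X = np.array([c2_style_features(g, v) for g in graphs for v in g.nodes])
y = np.array([gnn_predict(g, v) for g in graphs for v in g.nodes])

# Distillation step: the tree is trained on the GNN's outputs, not on
# ground-truth labels, so its score measures fidelity to the GNN.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("fidelity to GNN:", tree.score(X, y))
print(export_text(tree, feature_names=["colour", "#nbrs_col0", "#nbrs_col1"]))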
Project title:
Structured Data Learning with Generalized Similarities: ICT22-059 (WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds)
Research Areas:
Logic and Computation: 50%
Information Systems Engineering: 50%