Ordyniak, S., Paesani, G., Rychlicki, M., & Szeider, S. (2026). A General Theoretical Framework for Learning Smallest Interpretable Models. Artificial Intelligence, 350, Article 104441. https://doi.org/10.1016/j.artint.2025.104441
E192-01 - Research Unit Algorithms and Complexity
E056-13 - Department LogiCS
E056-23 - Department Innovative Combinations and Applications of AI and ML (iCAIML)
Journal: Artificial Intelligence
ISSN: 0004-3702
Date (published): January 2026
Number of Pages: 18
Publisher: Elsevier
Peer reviewed: Yes
Keywords: ML model; NP-hard problem; SAT
Abstract:
We develop a general algorithmic framework that yields fixed-parameter tractability for computing smallest symbolic models that represent given data. The framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we show that minimizing each of these fundamental model types is fixed-parameter tractable. The framework even applies to ensembles, which combine individual models by majority decision.
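To give a concrete feel for the minimization problem the abstract describes, here is a toy sketch (not the paper's algorithm, which relies on the extension property and parameterized techniques): a naive iterative-deepening search for the shortest decision list of single-literal rules consistent with a small Boolean dataset. All names (`predict`, `smallest_decision_list`) and the example data are illustrative assumptions, not from the paper.

```python
from itertools import product

def predict(rules, default, x):
    """Return the output of the first matching single-literal rule,
    or the default label if no rule fires."""
    for feat, val, out in rules:
        if x[feat] == val:
            return out
    return default

def smallest_decision_list(data, n_features, max_len=4):
    """Brute-force the shortest decision list consistent with `data`,
    trying list lengths k = 0, 1, 2, ... in increasing order, so the
    first consistent list found is a smallest one."""
    # Each candidate rule is (feature index, tested value, output label).
    rule_space = [(f, v, o)
                  for f in range(n_features)
                  for v in (0, 1)
                  for o in (0, 1)]
    for k in range(max_len + 1):
        for rules in product(rule_space, repeat=k):
            for default in (0, 1):
                if all(predict(rules, default, x) == y for x, y in data):
                    return list(rules), default
    return None

# Toy data: y = x0 OR x1 over two Boolean features.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
result = smallest_decision_list(data, n_features=2)
```

This exhaustive search is exponential in the list length; the point of the paper's framework is to tame exactly this kind of blow-up by identifying parameters under which smallest-model computation becomes fixed-parameter tractable.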