<div class="csl-bib-body">
<div class="csl-entry">Ordyniak, S., Paesani, G., Rychlicki, M., & Szeider, S. (2026). A General Theoretical Framework for Learning Smallest Interpretable Models. <i>Artificial Intelligence</i>, <i>350</i>, Article 104441. https://doi.org/10.1016/j.artint.2025.104441</div>
</div>
-
dc.identifier.issn
0004-3702
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/225230
-
dc.description.abstract
We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we show that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.
en
-
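The abstract above concerns finding a *smallest* interpretable model consistent with given data. As a purely illustrative sketch (not the paper's fixed-parameter-tractable algorithm), the sketch below brute-forces a smallest decision list over Boolean features: rules are enumerated by increasing list length, so the first consistent list found is a smallest one. All names (`predict`, `smallest_decision_list`) and the toy dataset are hypothetical.

```python
from itertools import product

def predict(rules, default, x):
    """A decision list: each rule tests one literal (feature f equals v)
    and outputs a label; the first matching rule fires, else the default."""
    for (f, v, label) in rules:
        if x[f] == v:
            return label
    return default

def smallest_decision_list(data, n_features):
    """Illustrative brute force: enumerate decision lists by increasing
    length and return the first one consistent with every (x, y) pair.
    Exponential in general -- the paper's point is that smarter,
    fixed-parameter-tractable algorithms exist for such minimization."""
    rule_space = [(f, v, lab)
                  for f in range(n_features)
                  for v in (0, 1)
                  for lab in (0, 1)]
    for k in range(len(rule_space) + 1):
        for rules in product(rule_space, repeat=k):
            for default in (0, 1):
                if all(predict(rules, default, x) == y for x, y in data):
                    return list(rules), default
    return None

# Toy data where a single literal already determines the label:
data = [((0, 0, 1), 1), ((1, 0, 0), 0), ((0, 1, 1), 1), ((1, 1, 0), 0)]
rules, default = smallest_decision_list(data, 3)
print(len(rules))  # a one-rule list suffices here
```

Minimizing such models exactly is NP-hard in general (hence the `SAT` and `NP-hard problem` subject terms below); the framework's contribution is isolating a parameter under which exact minimization becomes tractable.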
dc.language.iso
en
-
dc.publisher
Elsevier
-
dc.relation.ispartof
Artificial Intelligence
-
dc.subject
ML model
en
-
dc.subject
NP-hard problem
en
-
dc.subject
SAT
en
-
dc.title
A General Theoretical Framework for Learning Smallest Interpretable Models