<div class="csl-bib-body">
<div class="csl-entry">Bressan, M., Cesa-Bianchi, N., Esposito, E., Mansour, Y., Moran, S., & Thiessen, M. (2024). A Theory of Interpretable Approximations. In S. Agrawal & A. Roth (Eds.), <i>Proceedings of Thirty Seventh Conference on Learning Theory</i>. http://hdl.handle.net/20.500.12708/199886</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/199886
-
dc.description.abstract
Can a deep neural network be approximated by a small decision tree based on simple features? This question and its variants are behind the growing demand for machine learning models that are \emph{interpretable} by humans. In this work we study such questions by introducing \emph{interpretable approximations}, a notion that captures the idea of approximating a target concept $c$ by a small aggregation of concepts from some base class $\mathcal{H}$. In particular, we consider the approximation of a binary concept $c$ by decision trees based on a simple class $\mathcal{H}$ (e.g., of bounded VC dimension), and use the tree depth as a measure of complexity. Our primary contribution is the following remarkable trichotomy. For any given pair of $\mathcal{H}$ and $c$, exactly one of these cases holds: (i) $c$ cannot be approximated by $\mathcal{H}$ with arbitrary accuracy; (ii) $c$ can be approximated by $\mathcal{H}$ with arbitrary accuracy, but there exists no universal rate that bounds the complexity of the approximations as a function of the accuracy; or (iii) there exists a constant $\kappa$ that depends only on $\mathcal{H}$ and $c$ such that, for \emph{any} data distribution and \emph{any} desired accuracy level, $c$ can be approximated by $\mathcal{H}$ with a complexity not exceeding $\kappa$. This taxonomy stands in stark contrast to the landscape of supervised classification, which offers a complex array of distribution-free and universally learnable scenarios. We show that, in the case of interpretable approximations, even a slightly nontrivial a-priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-free) complexity. We extend our trichotomy to classes $\mathcal{H}$ of unbounded VC dimension and give characterizations of interpretability based on the algebra generated by $\mathcal{H}$.
en
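The abstract's opening question — whether a complex model can be approximated by a small decision tree over simple base concepts — can be illustrated with a minimal, hedged sketch. The code below is not the paper's construction: it assumes scikit-learn, uses a random forest as a stand-in for the "target concept" c, and uses disagreement with c as the approximation error and tree depth as the complexity measure, mirroring the quantities described in the abstract. All model names and parameters here are illustrative choices.

```python
# Illustrative sketch only (not the paper's construction): approximate a
# "target concept" c -- here, the predictions of a trained random forest --
# by decision trees of increasing depth, reporting the disagreement with c
# (approximation error) at each depth.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# The target concept c: a comparatively complex model.
target = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
c = target.predict(X)

# Approximate c by shallow decision trees and measure how often they disagree.
for depth in range(1, 6):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, c)
    error = np.mean(tree.predict(X) != c)
    print(f"depth={depth}  disagreement with target={error:.3f}")
```

In the paper's terms, case (iii) of the trichotomy corresponds to a single depth bound that suffices for every data distribution and every accuracy level; this sketch only probes one distribution empirically.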
dc.language.iso
en
-
dc.relation.ispartofseries
Proceedings of Machine Learning Research
-
dc.rights.uri
http://rightsstatements.org/vocab/InC/1.0/
-
dc.subject
Machine Learning
en
dc.subject
Interpretability
en
dc.subject
Learning Theory
en
dc.subject
Decision Trees
en
dc.title
A Theory of Interpretable Approximations
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.rights.license
Urheberrechtsschutz
de
dc.rights.license
In Copyright
en
dc.contributor.affiliation
University of Milan, Italy
-
dc.contributor.affiliation
University of Milan, Italy
-
dc.contributor.affiliation
University of Milan, Italy
-
dc.contributor.affiliation
Tel Aviv University, Israel
-
dc.contributor.affiliation
Technion – Israel Institute of Technology, Israel
-
dc.contributor.editoraffiliation
Columbia University, United States of America
-
dc.contributor.editoraffiliation
University of Pennsylvania, United States of America