Beiser, A., Martinelli, F., Gerstner, W., & Brea, J. (2025). Data Augmentation Techniques to Reverse-Engineer Neural Network Weights from Input-Output Queries. arXiv. https://doi.org/10.48550/arXiv.2511.20312
E192-02 - Research Unit of Databases and Artificial Intelligence
E056-23 - Department of Innovative Combinations and Applications of AI and ML (iCAIML)
-
ArXiv ID:
2511.20312
-
Date (published):
25-Nov-2025
-
Number of Pages:
13
-
Preprint Server:
arXiv
-
Keywords:
Reverse Engineering; Data Augmentation; Parameter Recovery; Network Reconstruction; Interpretability; Teacher-Student
-
Language:
en
Abstract:
Network weights can be reverse-engineered given enough informative samples of a network's input-output function. In a teacher-student setup, this amounts to collecting a dataset of the teacher mapping -- querying the teacher -- and fitting a student to imitate that mapping. A sensible choice of queries is the dataset the teacher was trained on. But current methods fail when the teacher's parameters outnumber the training data, because the student overfits to the queries instead of aligning its parameters with the teacher's. In this work, we explore augmentation techniques to best sample the input-output mapping of a teacher network, with the goal of eliciting a rich set of representations from the teacher's hidden layers. We find that standard augmentations such as rotation, flipping, and adding noise bring little to no improvement to the identification problem. We design new data augmentation techniques tailored to better sample the representational space of the network's hidden layers. With our augmentations we extend the state-of-the-art range of recoverable network sizes. To test their scalability, we show that we can recover networks with up to 100 times more parameters than training data points.
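The teacher-student query setup described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual method: the network sizes, the `teacher` function, and the `noise_augment` helper are all assumptions chosen only to show what "querying the teacher on an augmented dataset" means when the teacher has far more parameters than training points.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x, W1, W2):
    """One-hidden-layer ReLU teacher network (toy, hypothetical)."""
    return np.maximum(x @ W1, 0.0) @ W2

# Over-parameterized teacher: 10 inputs, 64 hidden units, 1 output
# gives 704 weights, queried from only 32 training samples.
d_in, d_hidden, n_train = 10, 64, 32
W1 = rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.standard_normal((d_hidden, 1)) / np.sqrt(d_hidden)
X_train = rng.standard_normal((n_train, d_in))

def noise_augment(X, n_copies=8, sigma=0.1, rng=rng):
    """Generic noise augmentation: jitter each training point.
    (The paper reports that such standard augmentations help
    little; its tailored augmentations instead target the
    hidden-layer representational space.)"""
    reps = np.repeat(X, n_copies, axis=0)
    return reps + sigma * rng.standard_normal(reps.shape)

# Build the query set and collect the teacher's input-output pairs.
X_query = np.vstack([X_train, noise_augment(X_train)])
y_query = teacher(X_query, W1, W2)  # query the teacher
# A student network would now be fit to imitate (X_query, y_query).
```

Under these assumptions, 32 original points plus 8 noisy copies each yield 288 queries against 704 teacher weights, so the student must generalize from the queries rather than memorize them.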