Naseer, M. (2024). Establishing formal behavioral guarantees for trained neural networks: towards reliable machine learning systems [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2025.128782
-
dc.identifier.uri
https://doi.org/10.34726/hss.2025.128782
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/209186
-
dc.description.abstract
Machine Learning (ML)-based systems, particularly those deploying deep neural networks (DNNs), are widely adopted in real-world applications due to their ability to learn from data without being explicitly programmed and their high output accuracy. However, despite their high classification accuracy and optimal decision-making in testing scenarios, they are often found to be vulnerable under unseen (but realistic) inputs. This points to the lack of generalization of these data-driven models under unseen input scenarios, highlighting the need for behavioral guarantees to ensure their reliable classification and decision-making in the real world. Research efforts constantly provide empirical evidence for the lack of reliable DNN behavior (under unseen inputs) across various ML applications. Orthogonally, formal efforts attempt to provide concise formal guarantees that behavioral properties/specifications, such as robustness and safety, hold for DNN models. However, due to the scalability challenges associated with formal methods, these efforts are often restricted to providing qualitative (binary) guarantees, and they also focus only on limited DNN behaviors and verification techniques. To address the aforementioned limitations, this research provides model checking and scalable sampling-based formal frameworks for DNN analysis, focusing on a diverse range of DNN behavioral specifications. These include DNN noise tolerance, input node sensitivity (to noise), node robustness bias, robustness under constrained noise, robustness bias against tail classes, and safety under bounded inputs. Realistic noise modeling is proposed for practical DNN analysis, while refraining from the use of unrealistic assumptions during analysis. These lead to formal guarantees that may potentially be leveraged to identify reliable ML systems. The research additionally leverages this DNN analysis to improve training for robust DNNs. The resulting frameworks designed and developed during the research are all accompanied by case studies based on DNNs trained on real-world datasets, hence supporting the efficacy of the research efforts and providing behavioral guarantees essential for ensuring more reliable ML systems in practice.
en
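The sampling-based robustness analysis mentioned in the abstract can be illustrated with a short sketch. The code below is not taken from the dissertation; the function name, the classifier `net`, the relative-noise bound `epsilon`, and the sample count are illustrative assumptions showing how a noise-tolerance estimate for a trained classifier under bounded relative (input-proportional) noise might be set up in Python.

# Minimal sketch (illustrative only, not the dissertation's framework):
# estimate the fraction of bounded relative-noise perturbations of an input
# that leave the predicted class unchanged.
import numpy as np

def estimate_noise_tolerance(net, x, epsilon=0.05, n_samples=1000, rng=None):
    """`net` is assumed to map a 1-D input array to class scores.
    Returns the empirical probability that the predicted class of `x`
    is stable under noise bounded by +/- epsilon of each input's magnitude."""
    rng = rng or np.random.default_rng(0)
    baseline = int(np.argmax(net(x)))
    stable = 0
    for _ in range(n_samples):
        # Relative noise: scaled by |x|, as opposed to a fixed additive bound.
        noise = rng.uniform(-epsilon, epsilon, size=x.shape) * np.abs(x)
        if int(np.argmax(net(x + noise))) == baseline:
            stable += 1
    return stable / n_samples

# Toy usage with a linear "network"; any callable returning class scores works.
if __name__ == "__main__":
    W = np.array([[1.0, -0.5], [0.2, 0.8]])
    net = lambda v: W @ v
    print(estimate_noise_tolerance(net, np.array([0.3, 0.7])))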
dc.language
English
-
dc.language.iso
en
-
dc.rights.uri
http://rightsstatements.org/vocab/InC/1.0/
-
dc.subject
Neural Networks
en
dc.subject
Formal Methods
en
dc.subject
Model Checking
en
dc.subject
Stratified Sampling
en
dc.subject
Relative Noise
en
dc.subject
Robustness
en
dc.subject
Bias
en
dc.subject
Input Sensitivity
en
dc.subject
Safety
en
dc.subject
GPU
en
dc.title
Establishing formal behavioral guarantees for trained neural networks : towards reliable machine learning systems
en
dc.type
Thesis
en
dc.type
Hochschulschrift
de
dc.rights.license
In Copyright
en
dc.rights.license
Urheberrechtsschutz
de
dc.identifier.doi
10.34726/hss.2025.128782
-
dc.contributor.affiliation
TU Wien, Österreich
-
dc.rights.holder
Mahum Naseer
-
dc.publisher.place
Wien
-
tuw.version
vor
-
tuw.thesisinformation
Technische Universität Wien
-
tuw.publication.orgunit
E191 - Institut für Computer Engineering
-
dc.type.qualificationlevel
Doctoral
-
dc.identifier.libraryid
AC17413466
-
dc.description.numberOfPages
131
-
dc.thesistype
Dissertation
de
dc.thesistype
Dissertation
en
dc.rights.identifier
In Copyright
en
dc.rights.identifier
Urheberrechtsschutz
de
tuw.advisor.staffStatus
staff
-
item.languageiso639-1
en
-
item.grantfulltext
open
-
item.openairetype
doctoral thesis
-
item.openaccessfulltext
Open Access
-
item.mimetype
application/pdf
-
item.openairecristype
http://purl.org/coar/resource_type/c_db06
-
item.cerifentitytype
Publications
-
item.fulltext
with Fulltext
-
crisitem.author.dept
E191-01 - Forschungsbereich Cyber-Physical Systems