Lederer, I., Mayer, R., & Rauber, A. (2023). Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks. IEEE Transactions on Neural Networks and Learning Systems. https://doi.org/10.1109/TNNLS.2023.3270135
dc.identifier.issn: 2162-237X
dc.identifier.uri: http://hdl.handle.net/20.500.12708/189838
dc.description.abstract (en): The commercial use of machine learning (ML) is spreading; at the same time, ML models are becoming more complex and more expensive to train, which makes intellectual property protection (IPP) of trained models a pressing issue. Whereas other domains can build on a solid understanding of the threats, attacks, and defenses available to protect their IP, ML-related research in this regard is still very fragmented. This is due in part to the lack of a unified view and a common taxonomy of these aspects. In this article, we systematize our findings on IPP in ML, focusing on the threats and attacks identified and the defenses proposed at the time of writing. We develop a comprehensive threat model for IP in ML and categorize attacks and defenses within a unified and consolidated taxonomy, thus bridging research from both the ML and security communities.
dc.language.iso: en
dc.publisher: IEEE - Institute of Electrical and Electronics Engineers, Inc.
dc.relation.ispartof: IEEE Transactions on Neural Networks and Learning Systems
dc.subject (en): attacks on intellectual property protection (IPP)
dc.subject (en): IPP
dc.subject (en): machine learning (ML)
dc.subject (en): model access control
dc.subject (en): watermarking
dc.title: Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks