<div class="csl-bib-body">
<div class="csl-entry">Pichler, G., Romanelli, M., Vega, L., & Piantanida, P. (2023). Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning. <i>IEEE Transactions on Dependable and Secure Computing</i>, 1–8. https://doi.org/10.1109/TDSC.2023.3326230</div>
</div>
-
dc.identifier.issn
1545-5971
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/189569
-
dc.description.abstract
Federated Learning is expected to provide strong privacy guarantees, as only gradients or model parameters, but no plain-text training data, are ever exchanged either between the clients or between the clients and the central server. In this paper, we challenge this claim by introducing a simple yet very effective membership inference attack algorithm, which relies only on a single training step. In contrast to the popular honest-but-curious model, we investigate a framework with a dishonest central server. Our strategy is applicable to models with ReLU activations and uses the properties of this activation function to achieve perfect accuracy. Empirical evaluation on visual classification tasks with the MNIST, CIFAR10, CIFAR100, and CelebA datasets shows that our method provides perfect accuracy in identifying one sample in a training set with thousands of samples. Occasional failures of our method led us to discover duplicate images in the CIFAR100 and CelebA datasets.
en
-
dc.language.iso
en
-
dc.publisher
IEEE COMPUTER SOC
-
dc.relation.ispartof
IEEE Transactions on Dependable and Secure Computing
-
dc.subject
Cryptography
en
dc.subject
Dishonest Server
en
dc.subject
Duplicates
en
dc.subject
Federated Learning
en
dc.subject
Membership Inference
en
dc.subject
Neural networks
en
dc.subject
Optimized production technology
en
dc.subject
Privacy
en
dc.subject
ReLU
en
dc.subject
Servers
en
dc.subject
Training
en
dc.subject
Tutorials
en
-
dc.title
Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning