Machine learning nowadays plays an important role in decision making across many industries, especially with the rapid development of technologies and the growing volume of data gathered by mobile computing systems, cloud computing, and the Internet of Things. However, the data used for training machine learning models is often private in nature. With increasing concerns about data privacy, the question of how to make use of the data while preserving users' privacy is becoming more critical. Federated learning is an approach that allows machine learning on distributed data while each data owner keeps the data on their own side. The main idea of federated learning is to train machine learning models locally at each data owner's site and to aggregate only the model updates from each participant into a globally shared model. This greatly reduces the amount of information that needs to be exchanged and thus shrinks the attack surface for an adversary. However, the models exchanged during the federated learning process can still leak information about their training data. In this work, we evaluate privacy risks in federated learning by performing privacy attacks, such as membership inference attacks, in different federated learning settings. We design an empirical evaluation of the success of these attacks and of mitigation strategies, aiming for a trade-off between privacy and the effectiveness of the models.
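The local-training-plus-aggregation scheme described above can be sketched as follows. This is a minimal illustration in the style of federated averaging, assuming a simple logistic-regression client model; the function names and hyperparameters are illustrative assumptions, not the exact setup evaluated in this work.

```python
import numpy as np

def local_train(weights, data, labels, lr=0.1, epochs=5):
    # Client-side step: train a logistic-regression model locally,
    # starting from the current global weights. The raw data never
    # leaves the client; only the resulting weights are shared.
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))      # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    # Server-side loop: each round, every client trains locally and the
    # server aggregates the returned weights, weighted by dataset size.
    sizes = [len(labels) for _, labels in clients]
    total = sum(sizes)
    for _ in range(rounds):
        local_ws = [local_train(global_w, X, y) for X, y in clients]
        global_w = sum(n / total * w for n, w in zip(sizes, local_ws))
    return global_w
```

Note that although only model weights cross the network, those weights are exactly the quantity a membership inference adversary inspects to decide whether a given record was part of a client's training set.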