Title: Angriffe gegen Neuronale Netzwerke
Other Titles: Attacks against neural networks
Language: English
Authors: Matak, Martin 
Qualification level: Diploma
Advisor: Weissenbacher, Georg 
Issue Date: 2019
Number of Pages: 53
Abstract: 
As Artificial Intelligence (AI) has a growing impact on our everyday lives, it is of utmost importance that it is safe for humans. However, this thesis demonstrates that changing only a few pixels in an image can cause neural networks to produce incorrect results. The focus of this thesis is on white-box and black-box attacks against neural networks in the age estimation domain. Existing techniques are evaluated and a new semi-targeted approach for generating adversarial samples is developed. This approach can be applied whenever the labels can be ordered or even clustered. Although this thesis focuses on the age estimation domain, the ability to craft adversarial samples implies that neural networks should not have the final word in safety-critical applications.
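As an illustration of the white-box setting the abstract refers to, below is a minimal sketch of one standard technique, the Fast Gradient Sign Method (FGSM). It is not the thesis's semi-targeted approach; model, image, label, and epsilon are hypothetical placeholders for a trained PyTorch classifier and its input.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # One-step white-box attack (FGSM): shift every pixel by
    # +/- epsilon along the sign of the loss gradient, so a small
    # per-pixel change can be enough to flip the prediction.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true label
    loss.backward()                              # d(loss)/d(pixels)
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]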
Keywords: Deep Learning; Adversarial Examples
URI: https://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-125221
http://hdl.handle.net/20.500.12708/13775
Library ID: AC15376302
Organisation: E192 - Institute of Logic and Computation
Publication Type: Thesis
Appears in Collections: Thesis
