Title: Angriffe gegen Neuronale Netzwerke
Language: English
Authors: Matak, Martin 
Qualification level: Diploma
Keywords: Deep Learning; Adversarial Examples
Advisor: Weissenbacher, Georg 
Issue Date: 2019
Number of Pages: 53
Abstract: 
As Artificial Intelligence (AI) gains ever more influence over our everyday lives, it is of the utmost importance that it is safe for humans. However, this thesis demonstrates that changing only a few pixels in an image can cause neural networks to produce incorrect results. The focus of this thesis is on white-box and black-box attacks against neural networks in the age estimation domain. Existing techniques are evaluated and a new semi-targeted approach for generating adversarial samples is developed. This approach is applicable whenever the labels can be ordered or even clustered in some way. Although this thesis focuses on the age estimation domain, the ability to craft adversarial samples implies that neural networks should not have the final word in safety-critical applications.
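The white-box attacks the abstract refers to typically perturb pixels along the gradient of the loss. A minimal sketch of this idea, in the style of the Fast Gradient Sign Method, is shown below against a toy linear softmax "network" (the model, shapes, and epsilon are illustrative assumptions, not the thesis's actual setup):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One FGSM step: move every pixel by epsilon in the direction
    that increases the loss, then clip back to the valid [0, 1] range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy linear "network": logits = W @ x (3 classes, 8 "pixels").
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.uniform(size=8)
y = 1  # true label

def loss_and_grad(W, x, y):
    """Cross-entropy loss of the linear softmax model and its
    gradient with respect to the input x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[y])
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad = W.T @ (p - onehot)  # d loss / d x
    return loss, grad

loss_before, grad = loss_and_grad(W, x, y)
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
loss_after, _ = loss_and_grad(W, x_adv, y)
# Each pixel moved by at most 0.1, yet the loss on the true label grows.
```

The same one-step recipe carries over to deep networks once `grad` is obtained by backpropagation; the point illustrated here is merely that a tiny, bounded per-pixel change suffices to degrade the model's confidence in the correct label.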
URI: https://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-125221
http://hdl.handle.net/20.500.12708/13775
Library ID: AC15376302
Organisation: E192 - Institute of Logic and Computation
Publication Type: Thesis
Appears in Collections:Thesis

Files in this item:

File: Angriffe gegen Neuronale Netzwerke.pdf (5.19 MB, Adobe PDF)

Page view(s): 22 (checked on Feb 24, 2021)
Download(s): 33 (checked on Feb 24, 2021)


Items in reposiTUm are protected by copyright, with all rights reserved, unless otherwise indicated.