As Artificial Intelligence (AI) plays an ever-greater role in our everyday lives, it is of utmost importance that it is safe for humans. However, this thesis demonstrates that changing only a few pixels in an image can cause neural networks to produce incorrect results. The focus of this thesis is on white-box and black-box adversarial attacks against neural networks in the age estimation domain. Existing attack techniques are evaluated, and a new semi-targeted approach for generating adversarial samples is developed. This approach is applicable whenever the labels can be ordered or clustered. Although this thesis focuses on the age estimation domain, the ability to craft adversarial samples implies that neural networks should not have the final word in safety-critical applications.