Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., & Shafique, M. (2023). SeVuc: A study on the Security Vulnerabilities of Capsule Networks against adversarial attacks. Microprocessors and Microsystems, 96, Article 104738. https://doi.org/10.1016/j.micpro.2022.104738
dc.identifier.issn: 0141-9331
dc.identifier.uri: http://hdl.handle.net/20.500.12708/142506
dc.description.abstract: Capsule Networks (CapsNets) preserve the hierarchical spatial relationships between objects, and thereby bear the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) in tasks like image classification. This makes CapsNets suitable for smart cyber–physical systems (CPS), where a large amount of training data may not be available. A large body of work has explored adversarial examples for CNNs, but their effectiveness on CapsNets has not yet been studied systematically. In this work, we analyze the vulnerabilities of CapsNets to adversarial attacks. These perturbations, added to the test inputs, are small and imperceptible to humans, but can fool the network into mispredicting. We propose a greedy algorithm to automatically generate imperceptible adversarial examples in a black-box attack scenario. We show that such attacks, when applied to the German Traffic Sign Recognition Benchmark (GTSRB) and CIFAR10 datasets, mislead CapsNets into misclassifying their inputs, which can be catastrophic for smart CPS such as autonomous vehicles. Moreover, we apply the same adversarial attacks to a 5-layer CNN (LeNet), a 9-layer CNN (VGGNet), and a 20-layer CNN (ResNet), and compare the outcomes with those of the CapsNets to study their different behaviors under adversarial attack.
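The abstract mentions a greedy, black-box procedure for crafting imperceptible adversarial examples but does not specify it. The following is only a minimal illustrative sketch of one possible greedy, query-only attack loop, not the algorithm from the paper; the function greedy_blackbox_attack, the predict callable, and all parameters (eps, max_pixels, candidates) are hypothetical stand-ins.

```python
# Illustrative sketch only: a generic greedy black-box attack that perturbs
# one pixel per step, keeping the change that most reduces the probability
# of the true class. NOT the algorithm proposed in the paper.
import numpy as np

def greedy_blackbox_attack(predict, x, true_label,
                           eps=0.05, max_pixels=50, candidates=100, seed=0):
    """predict: maps an image in [0, 1] to a 1-D array of class probabilities.
    x: clean input image (any shape), values in [0, 1].
    true_label: index of the correct class.
    eps: per-pixel perturbation size (kept small for imperceptibility).
    max_pixels: budget of greedy steps.
    candidates: random pixel positions tried per step.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    for _ in range(max_pixels):
        probs = predict(x_adv)
        if probs.argmax() != true_label:
            break  # the model already mispredicts: attack succeeded
        best_gain, best_trial = 0.0, None
        for _ in range(candidates):
            idx = tuple(rng.integers(0, s) for s in x_adv.shape)
            trial = x_adv.copy()
            trial[idx] = np.clip(trial[idx] + rng.choice([-eps, eps]), 0.0, 1.0)
            gain = probs[true_label] - predict(trial)[true_label]
            if gain > best_gain:  # greedy choice: largest confidence drop
                best_gain, best_trial = gain, trial
        if best_trial is None:
            break  # no candidate perturbation helped; stop early
        x_adv = best_trial
    return x_adv

# Usage with a stand-in classifier (softmax over a random linear map):
if __name__ == "__main__":
    W = np.random.default_rng(1).normal(size=(10, 32 * 32 * 3))
    def predict(img):
        logits = W @ img.ravel()
        e = np.exp(logits - logits.max())
        return e / e.sum()
    x = np.random.default_rng(2).uniform(size=(32, 32, 3))
    x_adv = greedy_blackbox_attack(predict, x, true_label=int(predict(x).argmax()))
    print(predict(x).argmax(), predict(x_adv).argmax())
```

In such a loop the attacker never needs gradients: only the predicted class probabilities are queried, which is what makes the scenario black-box.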
dc.language.iso: en
dc.publisher: ELSEVIER
dc.relation.ispartof: Microprocessors and Microsystems
dc.subject: Adversarial attacks
dc.subject: Affine transformations
dc.subject: Architecture
dc.subject: Artificial intelligence
dc.subject: Capsule Networks
dc.subject: Convolutional neural networks
dc.subject: Deep learning
dc.subject: Deep neural networks
dc.subject: Machine learning
dc.subject: Robustness
dc.subject: Security
dc.subject: Vulnerability
dc.title: SeVuc: A study on the Security Vulnerabilities of Capsule Networks against adversarial attacks