# Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
This repository contains the code for the paper "Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data", presented at the Network and Distributed System Security Symposium (NDSS) 2024.
A guide to the code is available here.
## Examples
### Static Triggers
### Moving Triggers
### Smart Triggers
- Clean image
- Trigger in the least important area
- Trigger in the most important area
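The static and moving triggers shown above can be sketched roughly as follows. This is an illustrative NumPy sketch, not the repository's actual implementation: the `(T, C, H, W)` frame layout, the `stamp_trigger` name, and the top-edge trajectory are all assumptions made for the example.

```python
import numpy as np

def stamp_trigger(frames, size=4, moving=False):
    """Stamp a square trigger onto neuromorphic frames.

    `frames` is assumed to be a float array of shape (T, C, H, W)
    (time steps, polarity channels, height, width). A static trigger
    stays in the top-left corner; a moving trigger slides along the
    top edge as time advances.
    """
    T, C, H, W = frames.shape
    out = frames.copy()
    peak = frames.max()  # trigger intensity: the brightest event value
    for t in range(T):
        # Static: fixed at x=0. Moving: shift by `size` pixels per step.
        x = (t * size) % (W - size) if moving else 0
        out[t, :, 0:size, x:x + size] = peak
    return out

# Example on an N-MNIST-like tensor: 10 steps, 2 polarities, 34x34 pixels.
rng = np.random.default_rng(0)
clean = rng.random((10, 2, 34, 34)).astype(np.float32)
poisoned = stamp_trigger(clean, size=4, moving=True)
```

A smart trigger would additionally choose the stamp location from an importance estimate of the input (least vs. most important area, as in the images above) rather than a fixed corner.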
### Dynamic Triggers
#### Attack Overview
#### Dynamic Examples
| γ | 0.1 | 0.05 | 0.01 |
|---|---|---|---|
| Clean image | | | |
| Noise | | | |
| Projected Noise | | | |
| Backdoor image | | | |
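The table above suggests that the dynamic trigger's noise is projected to a perturbation budget γ before being added to the clean image (smaller γ, stealthier trigger). A minimal sketch of that projection step, assuming an L∞ budget and pixel values in [0, 1]; the function names and shapes are hypothetical, not taken from this codebase:

```python
import numpy as np

def project_noise(noise, gamma):
    """Clip trigger noise to an L-infinity ball of radius gamma.

    Smaller gamma yields a less visible trigger (cf. the
    0.1 / 0.05 / 0.01 columns in the table above).
    """
    return np.clip(noise, -gamma, gamma)

def apply_dynamic_trigger(image, noise, gamma):
    # Add the projected noise, then keep pixels in a valid [0, 1] range.
    return np.clip(image + project_noise(noise, gamma), 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((2, 34, 34))           # hypothetical 2-channel frame
noise = rng.normal(0.0, 0.2, img.shape) # unconstrained optimized noise
backdoored = apply_dynamic_trigger(img, noise, gamma=0.05)
```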
## Authors
Gorka Abad, Oguzhan Ersoy, Stjepan Picek, and Aitor Urbieta.
## How to cite
```bibtex
@inproceedings{abad2024sneaky,
  title={Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data},
  author={Abad, Gorka and Ersoy, Oguzhan and Picek, Stjepan and Urbieta, Aitor},
  booktitle={NDSS},
  year={2024}
}
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.