Fault insertion into Deep Spiking Neural Networks

This code inserts faults into the weights and firing thresholds of deep spiking neural networks (SNNs).

To modify the weights, use modify_snn_weight.py. The pretrained SNN weights can be downloaded here
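
Conceptually, the script loads a pretrained checkpoint, corrupts a random fraction of each weight tensor according to the fault model, and saves the result. Below is a minimal sketch of that idea, not the script's actual code: the file names are hypothetical, and interpreting 'stuck-at-one' as saturating to the tensor's maximum magnitude is an assumption.

    import torch

    def inject_weight_faults(state_dict, frate=0.01, ftype='stuck-at-zero'):
        faulty = {}
        for name, w in state_dict.items():
            if not torch.is_tensor(w) or not w.is_floating_point():
                faulty[name] = w              # leave non-weight entries untouched
                continue
            mask = torch.rand_like(w) < frate # select roughly frate of the elements
            w = w.clone()
            if ftype == 'stuck-at-zero':
                w[mask] = 0.0                 # faulty cells always read zero
            else:                             # 'stuck-at-one'
                w[mask] = w.abs().max()       # faulty cells saturate to the max magnitude
            faulty[name] = w
        return faulty

    state = torch.load('snn_vgg16_cifar10.pth', map_location='cpu')  # hypothetical file name
    torch.save(inject_weight_faults(state), 'snn_vgg16_cifar10_faulty.pth')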

To insert faults into the SNN (VGG), see self_models/vgg_spiking.py, lines 100-105:

		self.frate = 0.01                    # fault rate: fraction of elements to corrupt
		self.fault = {}                      # per-layer masks of faulty weight positions
		self.ftype = 'stuck-at-zero'         # fault model; the alternative is 'stuck-at-one'
		self.finit = False                   # becomes True once the faults have been generated
		self.f_threshold = {}                # per-layer faulty threshold values
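
For context, a hedged reading of how these fields are consumed during the forward pass: a mask of faulty positions is drawn once per layer at rate self.frate (with self.finit flipping to True once that is done), then applied at every timestep; self.f_threshold presumably holds the faulty threshold values in the same way. The sketch below covers only the weight path, and the method name and 'stuck-at-one' interpretation are assumptions, not the actual code in vgg_spiking.py.

    import torch

    def faulty_weight(self, layer_id, weight):
        # Draw one persistent fault mask per layer at rate self.frate.
        if layer_id not in self.fault:
            self.fault[layer_id] = torch.rand_like(weight) < self.frate
        w = weight.clone()
        mask = self.fault[layer_id]
        if self.ftype == 'stuck-at-zero':
            w[mask] = 0.0                # faulty cells are stuck at zero
        else:                            # 'stuck-at-one'
            w[mask] = w.abs().max()      # faulty cells are stuck at the maximum magnitude
        return w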

The accuracy of the pretrained SNN can be found in the logs/snn/ folder.

Figure: accuracy vs. error rate.

The following content is the original README from https://github.com/nitin-rathi/hybrid-snn-conversion

Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation

This is the code for the paper "Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation", published at ICLR 2020.

Training Methodology

The training is performed in the following two steps:

  • Train an ANN ('ann.py')
  • Convert the ANN to SNN and perform spike-based backpropagation ('snn.py'); a minimal conversion sketch follows the list
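
A common threshold-balancing recipe from the ANN-SNN conversion literature is to copy the trained ANN weights into an SNN of the same topology and set each layer's firing threshold from the maximum pre-activation observed on a few training batches. Below is a minimal PyTorch sketch of that recipe; it is an illustration under those assumptions, not the repo's actual conversion code, and all identifiers are hypothetical.

    import torch

    def convert_ann_to_snn(ann, snn, loader, batches=5):
        # Step 1: reuse the trained ANN weights in the SNN (same topology).
        snn.load_state_dict(ann.state_dict(), strict=False)
        # Step 2: record the maximum pre-activation of each layer over a few
        # batches; these maxima initialize the per-layer firing thresholds.
        maxima, hooks = {}, []
        for name, mod in ann.named_modules():
            if isinstance(mod, (torch.nn.Conv2d, torch.nn.Linear)):
                def hook(m, inp, out, name=name):
                    maxima[name] = max(maxima.get(name, 0.0), out.max().item())
                hooks.append(mod.register_forward_hook(hook))
        with torch.no_grad():
            for i, (x, _) in enumerate(loader):
                ann(x)
                if i + 1 == batches:   # a few batches suffice for the statistics
                    break
        for h in hooks:
            h.remove()
        return maxima                  # per-layer maxima -> SNN threshold initialization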

Files

  • 'ann.py' : Trains an ANN; the architecture, dataset, and training settings can be provided as input arguments
  • 'snn.py' : Trains an SNN from scratch or performs ANN-SNN conversion if a pretrained ANN is available
  • /self_models : Contains the model files for both ANN and SNN
  • 'ann_script.py' and 'snn_script.py' : These scripts can be used to design various experiments; they create 'script.sh', which can be used to run multiple models (see the sketch after this list)
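
The gist of those helper scripts is to enumerate experiment configurations and emit a 'script.sh' that runs them one after another. A minimal sketch of that pattern; the architectures, datasets, and flag names below are illustrative, not the repo's actual CLI.

    # Enumerate experiment configurations (illustrative values and flags).
    runs = [
        f'python snn.py --architecture {arch} --dataset {ds}'
        for arch in ('VGG5', 'VGG16')
        for ds in ('CIFAR10', 'CIFAR100')
    ]

    # Emit one shell script that runs every configuration in sequence.
    with open('script.sh', 'w') as f:
        f.write('#!/bin/sh\n' + '\n'.join(runs) + '\n')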

Trained ANN models

Trained SNN models

Issues

  • Sometimes the 'STDB' activation becomes unstable during training, leading to an accuracy drop. The solution is to modulate the alpha and beta parameters or change the activation to 'Linear' in 'main.py'
  • Another reason for a drop in accuracy could be the leak parameter. Setting 'leak_mem=1.0' in 'main.py' changes the leaky-integrate-and-fire (LIF) neuron to an integrate-and-fire (IF) neuron, as sketched below
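
The LIF/IF remark follows from the membrane update v[t] = leak_mem * v[t-1] + input[t]: with leak_mem < 1 the potential decays between inputs, while leak_mem = 1.0 removes the decay, which is exactly the IF neuron. A single-neuron sketch of one timestep (the soft reset by subtraction is an assumption; the repo's reset rule may differ):

    def lif_step(v, x, threshold, leak_mem=1.0):
        v = leak_mem * v + x        # integrate the input; decay only if leak_mem < 1
        spike = v >= threshold      # emit a spike when the threshold is crossed
        if spike:
            v -= threshold          # soft reset: subtract the threshold
        return v, spike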

Citation

If you use this code in your work, please cite the following paper:

@inproceedings{
Rathi2020Enabling,
title={Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation},
author={Nitin Rathi and Gopalakrishnan Srinivasan and Priyadarshini Panda and Kaushik Roy},
booktitle={International Conference on Learning Representations},
year={2020},
url={https://openreview.net/forum?id=B1xSperKvH}
}
