This repo contains the code for our preliminary study [paper][poster][presentation], which aims at improving bit-slice sparsity for efficient ReRAM deployment of DNNs. The code is tested with PyTorch 1.2.0 and Python 3.7.
The code for MNIST and CIFAR-10 is in mnist/ and cifar/ respectively. The training routine consists of three parts: pre-training, pruning, and fine-tuning.
First, pre-train a fixed-point model:
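The script name below is an assumption based on the naming of the fine-tuning scripts; check the mnist/ and cifar/ directories for the actual entry point.

```
python pretrain.py
```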
Then, load and prune the pre-trained model, and fine-tune with either normal L1 regularization or bit-slice L1 regularization:
```
python finetune_l1.py
```

or

```
python finetune_bitslice.py
```
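For intuition, the sketch below shows one way a bit-slice L1 penalty could be computed: weight magnitudes are quantized to fixed point, each integer is split into 2-bit slices, and the L1 norms of all slices are summed. The function name, bit widths, and quantization scheme are illustrative assumptions, not this repo's actual implementation; in particular, a real implementation has to keep the penalty differentiable (e.g., via a straight-through estimator), which this sketch glosses over.

```python
import torch

def bitslice_l1(weight, num_bits=8, slice_bits=2):
    # Hypothetical sketch of a bit-slice L1 penalty; not this repo's code.
    # Quantize |weight| to num_bits-bit fixed point.
    qmax = 2 ** num_bits - 1
    scale = weight.detach().abs().max().clamp(min=1e-8) / qmax
    w_int = torch.round(weight.abs() / scale).to(torch.int64)
    # Split each integer into slice_bits-wide slices and sum their L1 norms.
    penalty = torch.zeros((), device=weight.device)
    for k in range(num_bits // slice_bits):
        s = (w_int >> (k * slice_bits)) & ((1 << slice_bits) - 1)
        penalty = penalty + s.float().sum()
    return penalty * scale

# During fine-tuning, a weighted penalty would be added to the task loss, e.g.
# loss = criterion(output, target) + lam * bitslice_l1(model.fc.weight)
```

Pushing each slice (rather than each whole weight) toward zero is what distinguishes bit-slice sparsity from ordinary weight sparsity: on ReRAM crossbars, weights are stored slice by slice, so zero slices translate directly into hardware savings.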
Some arguments in the scripts have default values that we have set, but you may want to check them yourself and make adjustments.
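If these defaults are exposed through argparse (an assumption; they may instead be hard-coded constants in the scripts), the available options can be listed with:

```
python finetune_bitslice.py --help
```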
The code is adapted from nics_fix_pytorch.