AxBy-ViT: Reconfigurable Approximate Computation Bypass for Vision Transformers
- pytorch
- pytorch-pretrained-vit
- bitstring
- PIL
- numpy
- Install the required packages.
- Prepare the ImageNet dataset and put the inference set at ./val_imagenet. Each class must have its own directory named after its synset (as indicated in the Python script).
- Initialize your GPU. By default, GPU 0 is used.
- Configure the approximate bypass settings by changing the argument of `axby_config()`.
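The actual signature of `axby_config()` is not documented here, so as a purely illustrative sketch of the kind of approximate computation such a bypass enables, the snippet below shows a common approximate-computing trick: multiplying operands after zeroing their low-order bits. The function name and parameters are hypothetical and not part of this repo.

```python
def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    """Hypothetical illustration: multiply after zeroing the low
    `drop_bits` bits of each operand, trading accuracy for a cheaper
    (narrower) hardware multiplier."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

exact = 1000 * 1000          # 1000000
approx = approx_mul(1000, 1000, drop_bits=4)  # operands truncated to 992
print(exact, approx)         # the relative error stays small
```

A reconfigurable design would expose `drop_bits` (or an analogous knob) as a runtime setting, which is presumably the role `axby_config()` plays in this repo.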
- Refactoring.
- The current code was exported from a Jupyter notebook and therefore needs substantial refactoring to be more accessible.
- Planned: manipulate the original transformer object directly rather than defining custom classes.
- The code currently uses new classes inherited from the original transformer class for legacy design reasons; this can be avoided since PyTorch already provides the necessary interfaces for modifying module behavior.
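The refactoring direction above can be sketched as follows: instead of subclassing the whole transformer, swap an approximate layer into an existing model instance. The toy model and the `AxLinear` layer here are assumptions for illustration; in the real repo the model would come from `pytorch-pretrained-vit` and the approximation would follow the AxBy bypass scheme.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer block; the real repo would load a
# pretrained ViT instead (assumption for illustration).
model = nn.Sequential(nn.Linear(8, 8), nn.GELU(), nn.Linear(8, 8))

class AxLinear(nn.Linear):
    """Hypothetical approximate linear layer (placeholder approximation)."""
    def forward(self, x):
        # Round weights before the matmul to mimic approximate arithmetic;
        # the repo's actual bypass logic would go here.
        w = self.weight.detach().round()
        return nn.functional.linear(x, w, self.bias)

# Directly swap the submodule on the existing model instance --
# no subclass of the full transformer is needed.
ax = AxLinear(8, 8)
ax.load_state_dict(model[0].state_dict())
model[0] = ax
```

This keeps the pretrained weights intact while replacing only the compute path, which is the kind of in-place manipulation PyTorch's module API supports natively.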