Implementation of the Vision Transformer (ViT) in PyTorch, as presented in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale".
The Vision Transformer achieves state-of-the-art results on image recognition tasks using a standard Transformer encoder applied to a sequence of fixed-size image patches. To perform classification, the authors use the standard approach of prepending an extra learnable "classification token" to the sequence.
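The patch-plus-token pipeline can be sketched as follows. This is a minimal illustration, not the repository's actual module: the class name `PatchEmbedding` and the default sizes (32x32 images, 4x4 patches, 192-dim embeddings, chosen to suit CIFAR10) are assumptions for the example.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches, linearly embed each patch,
    and prepend a learnable classification token (illustrative sketch)."""

    def __init__(self, img_size=32, patch_size=4, in_chans=3, embed_dim=192):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each patch
        # and applying a shared linear projection.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):
        B = x.shape[0]
        x = self.proj(x)                    # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)    # (B, N, D) patch sequence
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1)      # prepend the classification token
        return x + self.pos_embed           # add learned position embeddings

x = torch.randn(2, 3, 32, 32)
tokens = PatchEmbedding()(x)
print(tokens.shape)  # torch.Size([2, 65, 192]): 64 patches + 1 cls token
```

After the encoder, only the output at the classification token's position is fed to the classification head.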
The code was written in PyTorch and trained on CIFAR10. ViT is a data-hungry model: the more data it sees, the better it performs. Feel free to use any dataset you like.