VisTrans

Implementations of transformer-based models for different vision tasks.

Install

  1. Install from PyPI
pip install vistrans
  2. Install from Anaconda
conda install -c nachiket273 vistrans
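
To confirm the install, a quick sanity check is to import the two model factories documented in this README. This is a minimal sketch; the exact return type of `list_pretrained` is not specified here, so a printable list of model names is assumed.

```Python
# Minimal post-install sanity check (assumes one of the installs above succeeded).
# Only the factories documented in this README are used.
from vistrans import BotNet, VisionTransformer

print(BotNet.list_pretrained())            # assumed: list of BotNet variant names
print(VisionTransformer.list_pretrained()) # assumed: list of ViT variant names
```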

Version 0.003 (06/30/2021)



Minor fixes for issues with existing models.

Version 0.002 (04/17/2021)



Pretrained PyTorch Bottleneck Transformers for Visual Recognition, including the following models:

  • botnet50
  • botnet101
  • botnet152

Implementation based on the official TensorFlow implementation.

Usage


pip install vistrans

  1. List Pretrained Models.
```Python
from vistrans import BotNet
BotNet.list_pretrained()
```
  2. Create Pretrained Models.
```Python
from vistrans import BotNet
model = BotNet.create_pretrained(name, img_size, in_ch, num_classes,
                                 n_heads, pos_enc_type)
```
  3. Create Custom Model.
```Python
from vistrans import BotNet
model = BotNet.create_model(layers, img_size, in_ch, num_classes, groups,
                            norm_layer, n_heads, pos_enc_type)
```
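
As a concrete illustration of the calls above, the sketch below builds one of the listed pretrained variants and runs a dummy forward pass. The argument values (224x224 RGB input, 1000 classes, 4 heads, relative positional encoding) and the keyword-argument style are assumptions for an ImageNet-like setup, not values prescribed by this README.

```Python
import torch
from vistrans import BotNet

# Assumed settings: 224x224 RGB images, 1000 classes, 4 attention heads,
# and 'relative' positional encoding. These values are illustrative only;
# adjust them for your own task.
model = BotNet.create_pretrained('botnet50', img_size=224, in_ch=3,
                                 num_classes=1000, n_heads=4,
                                 pos_enc_type='relative')
model.eval()

with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))  # dummy batch of one image

print(out.shape)  # expected: torch.Size([1, 1000])
```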

Version 0.001 (03/04/2021)



Pretrained PyTorch Vision Transformer models, including the following:

  • vit_s16_224
  • vit_b16_224
  • vit_b16_384
  • vit_b32_384
  • vit_l16_224
  • vit_l16_384
  • vit_l32_384

Implementation based on the official JAX repository and timm's implementation.

Usage


  1. List Pretrained Models.
```Python
from vistrans import VisionTransformer
VisionTransformer.list_pretrained()
```
  2. Create Pretrained Models.
```Python
from vistrans import VisionTransformer
model = VisionTransformer.create_pretrained(name, img_size, in_ch, num_classes)
```
  3. Create Custom Model.
```Python
from vistrans import VisionTransformer
model = VisionTransformer.create_model(img_size, patch_size, in_ch, num_classes,
                                       embed_dim, depth, num_heads, mlp_ratio,
                                       drop_rate, attention_drop_rate, hybrid,
                                       norm_layer, bias)
```
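
Mirroring the BotNet example above, the sketch below instantiates one of the listed pretrained ViT variants and runs a dummy forward pass. The argument values follow the 'vit_b16_224' naming (224x224 input, 3 channels) plus an assumed 1000-class head; they are illustrative, not fixed by this README.

```Python
import torch
from vistrans import VisionTransformer

# Assumed settings for 'vit_b16_224': 224x224 RGB input and a 1000-class
# head. These values are illustrative only; adjust them for your dataset.
model = VisionTransformer.create_pretrained('vit_b16_224', img_size=224,
                                            in_ch=3, num_classes=1000)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy batch of one image

print(logits.shape)  # expected: torch.Size([1, 1000])
```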