Hello @lucidrains,

I am using the ViT transformer for a specific dataset. The image size is 320x320 and the number of classes is 2. I set the parameters for my dataset as shown below, and the model reached 64.5% test accuracy. Do you have any suggestions for the parameters? I ask because I get around 83% test accuracy with other models.
```python
import torch
from linformer import Linformer
from vit_pytorch.efficient import ViT

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

efficient_transformer = Linformer(
    dim=256,
    seq_len=1024 + 1,  # 32x32 patches + 1 cls token
    depth=12,
    heads=8,
    k=64,
)

model = ViT(
    dim=256,
    image_size=320,
    patch_size=10,
    num_classes=2,
    transformer=efficient_transformer,
    channels=3,
).to(device)
```
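As a sanity check on the `seq_len` used above, the patch count follows directly from the image and patch sizes (a 320x320 image cut into 10x10 patches gives a 32x32 grid):

```python
# Verify that seq_len matches the patch grid for these settings.
image_size = 320
patch_size = 10

num_patches = (image_size // patch_size) ** 2  # 32 * 32 = 1024
seq_len = num_patches + 1                      # +1 for the cls token

print(num_patches, seq_len)  # → 1024 1025
```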