
Increase Performance #47

Open · ozanpkr opened this issue Dec 21, 2020 · 3 comments
ozanpkr commented Dec 21, 2020

Hello @lucidrains,
I'm using the ViT with an efficient transformer on a specific dataset. The image size is 320x320 and the number of classes is 2. I set the parameters for my dataset and it reached 64.5% test accuracy. Do you have any suggestions for the parameters? I get an average of 83% test accuracy with other models.

```python
from linformer import Linformer
from vit_pytorch.efficient import ViT

efficient_transformer = Linformer(
    dim=256,
    seq_len=1024 + 1,  # (320 / 10)^2 = 32x32 patches + 1 cls token
    depth=12,
    heads=8,
    k=64,
)

model = ViT(
    dim=256,
    image_size=320,
    patch_size=10,
    num_classes=2,
    transformer=efficient_transformer,
    channels=3,
).to(device)
```

lucidrains (Owner) commented Dec 21, 2020

Try increasing your dimensions to 512

Also increase k to 256 at the very least
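
A minimal sketch of the snippet above with both suggestions applied (dim bumped to 512, k to 256; everything else, including the imports and `device`, assumed unchanged):

```python
efficient_transformer = Linformer(
    dim=512,       # was 256
    seq_len=1024 + 1,
    depth=12,
    heads=8,
    k=256,         # was 64
)

model = ViT(
    dim=512,       # must match the transformer dim
    image_size=320,
    patch_size=10,
    num_classes=2,
    transformer=efficient_transformer,
    channels=3,
).to(device)
```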

ozanpkr (Author) commented Dec 21, 2020

> Try increasing your dimensions to 512
>
> Also increase k to 256 at the very least

Thanks for your quick reply.

ozanpkr (Author) commented Dec 25, 2020

@lucidrains can you share an example of how to use the distillation method in a notebook? I haven't been able to get this method working.
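
For reference, distillation in vit-pytorch is exposed through `DistillableViT` and `DistillWrapper` (as documented in the repo README); below is a minimal sketch adapted to the 320x320, 2-class setup from this thread, with an untrained resnet50 standing in for a teacher that in practice you would already have trained on the same task:

```python
import torch
from torchvision.models import resnet50
from vit_pytorch.distill import DistillableViT, DistillWrapper

# Stand-in teacher: its head must output the same number of classes
# as the student; a 2-class resnet50 here just makes the sketch runnable.
teacher = resnet50(num_classes=2)

# Student ViT that carries the extra distillation token.
v = DistillableViT(
    image_size=320,
    patch_size=10,
    num_classes=2,
    dim=512,
    depth=12,
    heads=8,
    mlp_dim=1024,
)

# Wrapper combining the hard-label loss with the soft distillation loss.
distiller = DistillWrapper(
    student=v,
    teacher=teacher,
    temperature=3,  # softens the teacher logits
    alpha=0.5,      # balance between label loss and distillation loss
)

img = torch.randn(2, 3, 320, 320)
labels = torch.randint(0, 2, (2,))

loss = distiller(img, labels)
loss.backward()

# After training, the student is used on its own for inference.
preds = v(img)  # (2, 2)
```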
