The larger tile shape results in lower speed in finetuning #73

Open
dean09131 opened this issue Sep 15, 2020 · 1 comment

Comments

@dean09131

Hello, my GPU is a Tesla V100-32G. When I use a 508x508 tile shape, as you did in the tutorial video, the speed is reasonable, but when I use a 1500x1500 tile shape, the estimated memory is about 18 GB, below my GPU's limit, yet the speed is quite slow. I'm not familiar with caffe, so I thought a larger tile shape would be good for accelerating the finetuning. Is that right?

@ThorstenFalk
Collaborator

A factor of 10 slowdown is expected with a factor of 10 larger input; everything above that is overhead from data augmentation and data transfer. The number of iterations may be affected by the input shape, but I would not say in general that bigger is better. I usually train with relatively small tiles and a batch size of one to increase randomness. The loss curves become wiggly, but the output is quite robust.
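
For intuition, here is a minimal sketch in plain Python of the two points above, assuming that the per-iteration compute of a fully convolutional network scales roughly with the number of input pixels. The `random_tile` helper is hypothetical, for illustration only; the actual caffe_unet data layer handles tiling and augmentation internally.

```python
import numpy as np

# Assumption: per-iteration compute of a fully convolutional net scales
# roughly with the number of input pixels, so the expected slowdown is
# approximately the pixel ratio of the two tile shapes.
small = 508 * 508      # tile shape used in the tutorial video
large = 1500 * 1500    # tile shape reported in this issue
print(f"pixel ratio: {large / small:.1f}x")  # ~8.7x, i.e. roughly a factor of 10

def random_tile(image: np.ndarray, tile: int = 508) -> np.ndarray:
    """Sample one random tile (batch size one) from a larger training image.

    Hypothetical helper for illustration; not part of the caffe_unet API.
    """
    h, w = image.shape[:2]
    y = np.random.randint(0, h - tile + 1)
    x = np.random.randint(0, w - tile + 1)
    return image[y:y + tile, x:x + tile]
```

With small tiles and batch size one, each iteration sees a different random crop of the training data, which is the extra randomness mentioned above.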
