
GPU size required to train ENet #10

Closed
ChidanandKumarKS opened this issue Jul 2, 2017 · 2 comments

@ChidanandKumarKS

I am training with an NVIDIA GeForce GTX Titan Z.
While training, I got stuck with: Check failed: cudaSuccess (2 vs. 0) out of memory.

Kindly suggest how much GPU memory is needed to train ENet.

Regards
Kumar

@TimoSaemann
Owner

It depends on the image size of the training data. Reduce the batch size and/or the image dimensions of your training data until the network fits into GPU memory.
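
If you prefer to script that change rather than edit the prototxt by hand, here is a minimal sketch using pycaffe's protobuf bindings. The file names are placeholders, and the DenseImageData layer type is an assumption based on the SegNet-style Caffe fork used here; adapt it to whatever data layer your prototxt actually defines (editing the file manually works just as well).

```python
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Load the training net definition (file name is only an example).
net = caffe_pb2.NetParameter()
with open("enet_train.prototxt") as f:
    text_format.Merge(f.read(), net)

# Lower batch_size on whichever data layer the prototxt uses.
# Only parameters that are actually present in the file are touched.
for layer in net.layer:
    for field, _ in layer.ListFields():
        if field.name in ("dense_image_data_param", "data_param", "image_data_param"):
            getattr(layer, field.name).batch_size = 1  # e.g. 3 -> 1

# Write the modified definition to a new file.
with open("enet_train_small.prototxt", "w") as f:
    f.write(text_format.MessageToString(net))
```

Reducing the batch size lowers activation memory roughly proportionally; if the net still does not fit, shrink the input images as well.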

@ewen1024

By default, it requires about 12 GB for a batch size of 3 at the original image size.
That setting does not fit into a GTX 1080 Ti with 11 GB of GPU memory.
