
Can't train with GPU on Windows 10 #6

Closed
FeederDiver opened this issue Dec 2, 2017 · 1 comment

@FeederDiver

I've run YOLO detection with a trained model on my GPU (an NVIDIA GTX 1060 3 GB), and everything worked fine.

Now I am trying to train my own model with the parameter --gpu 1.0. TensorFlow can see my GPU, as these messages appear at startup:
"name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705"
"totalMemory: 3.00GiB freeMemory: 2.43GiB"

However, later on, when the program loads the data and tries to start training, I get the following error:
"failed to allocate 832.51M (872952320 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY"

I've checked whether it tries to use my other GPU (the integrated Intel 630), but it doesn't.

When I run the training process without the --gpu option, it works fine, but slowly. (I've also tried --gpu 0.8, 0.4, etc.)

Any idea how to fix it?
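
For what it's worth, my understanding is that the --gpu fraction is forwarded to TensorFlow 1.x as a per-process memory cap, roughly like the sketch below (build_session is an illustrative name, not code from this project). That would explain why lowering the fraction alone doesn't help: the cap limits what TensorFlow may claim, but it doesn't shrink what one training step actually needs.

```python
# Minimal sketch, assuming --gpu 0.8 ends up as a TF 1.x memory fraction.
# build_session is an illustrative name, not code from this repository.
import tensorflow as tf

def build_session(gpu_fraction=0.8):
    # per_process_gpu_memory_fraction caps how much of the card TF may claim;
    # the per-step allocation (driven by batch size and input resolution) can
    # still exceed it and fail with CUDA_ERROR_OUT_OF_MEMORY on a 3 GB card.
    gpu_options = tf.GPUOptions(
        per_process_gpu_memory_fraction=gpu_fraction,
        allow_growth=True)  # allocate lazily instead of reserving upfront
    return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

sess = build_session(0.8)
```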

@FeederDiver
Author

Problem solved. Changing the values of batch size, image size, subdivisions, and others in the cfg file didn't work, as they were somehow loaded incorrectly. I went to the defaults.py file and changed them there instead, so my GPU is now able to handle training.
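
For anyone hitting the same wall, here is roughly the kind of change I mean. This is only a sketch modeled on a darkflow-style defaults.py with an argHandler that stores option defaults; the class shape, option names, and values are illustrative, not the exact contents of the file:

```python
# Hypothetical sketch of a defaults.py-style edit; names and values are
# illustrative only. The point is lowering the default batch size (and,
# if needed, the GPU fraction) so one training step fits in 3 GB.
class argHandler(dict):
    def define(self, key, default, help_text):
        # Store the default; the help text is kept only as documentation.
        self[key] = default

    def setDefaults(self):
        self.define('batch', 4, 'batch size')                     # lowered from a larger default
        self.define('gpu', 0.7, 'gpu memory fraction (0.0-1.0)')  # leave some headroom
        # ... other options unchanged

FLAGS = argHandler()
FLAGS.setDefaults()
print(FLAGS)  # {'batch': 4, 'gpu': 0.7}
```

Since the cfg/command-line overrides were not being picked up, changing the values at the defaults level was what actually reduced the per-step allocation.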
