
How much memory needed for training? #1

Closed
hwd8868 opened this issue Jul 18, 2017 · 5 comments
hwd8868 commented Jul 18, 2017

I used TensorFlow on CPU to train on the dataset, on a machine with 48 GB of memory. But I get a memory fault after training for several hours:

```
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
```

yukitsuji (Owner) commented

I train on a GPU with 6 GB of memory.
What is your batch size for training?

hwd8868 (Author) commented Jul 18, 2017

The batch size is 5, the default value.
I have now decreased the batch size to 2 and still get the same error.
I just ran it as-is; I didn't modify your code.
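(If lowering the batch size changes nothing, the memory growth may be per-step rather than per-batch. A minimal sketch for confirming a leak by logging resident memory during training; `psutil` and the loop names below are illustrative assumptions, not part of this repo:)

```python
# Sketch: log resident memory during training to see whether it grows
# steadily (a leak) or scales with batch size. `psutil` is a third-party
# package (pip install psutil).
import os
import psutil

def log_rss(step, every=100):
    """Print the process's resident set size every `every` steps."""
    if step % every == 0:
        rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
        print("step %d: RSS %.2f GB" % (step, rss_gb))

# Call it from the existing training loop (names are placeholders):
#   for step in range(num_steps):
#       sess.run(train_op, feed_dict=batch)
#       log_rss(step)
```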

yukitsuji (Owner) commented

I don't know how to resolve this issue.
Please refer to other pages about this.

And this could probably help you:
tensorflow/tensorflow#9487
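(For reference, a common cause of unbounded host-memory growth in TF 1.x is code that keeps adding ops to the default graph inside the training loop. Finalizing the graph before the loop turns that silent leak into an immediate error. A minimal sketch, assuming a standard `tf.Session` loop; this is not confirmed against this repo's code:)

```python
import tensorflow as tf

# ... build the model and train_op here ...

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Any attempt to add an op after this point raises a RuntimeError,
# exposing per-step graph growth instead of letting it leak memory.
tf.get_default_graph().finalize()

# for step in range(num_steps):
#     sess.run(train_op, feed_dict=batch)
```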

hwd8868 closed this as completed Jul 18, 2017
hwd8868 (Author) commented Jul 18, 2017

OK, thank you. I just found that all 48 GB of memory were fully used.

hwd8868 reopened this Jul 18, 2017
codexxxl commented

Hi hwd8868!

I also ran into the same error as you did. I was using the CPU with around 20 GB of memory, but my maximum usage only reached about 45%. I wonder how you solved your issue, and whether you have any suggestions?

Thank you!
