
Bugfix: cross-validation with low memory #2

Merged (1 commit, Jul 20, 2016)

Conversation

@csapot (Contributor) commented Jul 19, 2016

the "cross-validate" run of train_tool did not use the $cache_size parameter, so it went to a memory allocation error
(using 8 GB RAM)
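For reference, a minimal sketch of the kind of change involved, assuming the scheduler script passes the cache size via a --cache-size option; the variable and file names below are illustrative assumptions, not the exact lines of this PR's diff:

```sh
# Hypothetical excerpt from the training scheduler script.

# Before: the cross-validation call omits the cache size, so the tool
# falls back to its much larger default cache and can exhaust GPU memory.
$train_tool --cross-validate=true --randomize=false \
  $mlp_best "$feats_cv" "$labels_cv"

# After: forward the same $cache_size used for training, so the
# cross-validation pass allocates the same, smaller cache.
$train_tool --cross-validate=true --randomize=false \
  --cache-size=$cache_size \
  $mlp_best "$feats_cv" "$labels_cv"
```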

the "cross-validate" run of train_tool did not use the $cache_size parameter, so it went to a memory allocation error
(using 8 GB RAM)
@bpotard (Owner) commented Jul 20, 2016

Well spotted! However, I think if you run out of memory in the cross-validation, it must be because you are using a GPU with a very limited amount of memory. It is the on-board GPU memory that is the limiting factor in CUDA mode, not your "main" RAM. With that recipe, the cross-validation should really take no more than a few hundred MB. If you do not have a decent GPU, you may be better off using the CPU rather than the GPU.

@bpotard merged commit a5beb97 into bpotard:import-svn-idlak on Jul 20, 2016
@csapot (Contributor, Author) commented Jul 20, 2016

You are right, I only have a 1 GB GPU currently. I also tried running on the CPU on a different computer, and it worked fine without memory errors (but much slower).


@bpotard (Owner) commented Jul 20, 2016

OK, you are better off using the GPU then. It should be easier now with your fix.
1 GB should be plenty though; I am a bit surprised the cross-validation would take that much. Maybe you have other processes using the GPU. You can run nvidia-smi to see if another process is clogging your GPU memory.
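For reference, a couple of standard nvidia-smi invocations to check what is holding GPU memory (not specific to this recipe):

```sh
# Overall GPU utilisation and memory usage
nvidia-smi

# List compute processes and the GPU memory each one holds
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```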

@bpotard (Owner) commented Jul 20, 2016

Ah no, you are right, with the default settings it takes close to 3 GB of GPU memory.

@csapot (Contributor, Author) commented Jul 20, 2016

No, I don't have anything else running; even the X server is disabled. After some trials, I found that a "cache-size" of 10000 works fine.
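For anyone hitting the same limit on a small GPU, the workaround is simply to pass a smaller cache explicitly when launching the training; a hypothetical invocation (the script path and option name are assumptions, not taken from this recipe):

```sh
# Hypothetical: cap the frame cache at 10000 frames so both the training
# and the cross-validation pass fit in roughly 1 GB of GPU memory.
local/train_dnn.sh --cache-size 10000 data/train data/cv exp/dnn
```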

