
volatile=True during generation #28

Open
kylemcdonald opened this issue May 24, 2017 · 1 comment

@kylemcdonald

I noticed that I was getting out-of-memory errors when I tried to generate long sequences on the GPU. I posted about this on the forum (https://discuss.pytorch.org/t/optimizing-cuda-memory-pipeline-for-rnn/3311/5) and learned that if you create the Variables with volatile=True during generation, you can generate indefinitely long sequences.
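
For reference, here is a minimal sketch of what that looks like with the pre-0.4 Variable API; the `CharRNN` class and `generate()` helper below are illustrative, not the repository's actual code. The key point is that every input and hidden state is wrapped in a Variable with `volatile=True`, so autograd never retains the graph and memory stays constant no matter how long the generated sequence is. (On PyTorch 0.4 and later, `volatile` is deprecated and the equivalent is wrapping the loop in `with torch.no_grad():`.)

```python
# Minimal sketch, assuming the old (pre-0.4) Variable API.
# CharRNN and generate() are illustrative, not the repo's actual code.
import torch
import torch.nn as nn
from torch.autograd import Variable

class CharRNN(nn.Module):
    def __init__(self, n_chars, hidden_size):
        super(CharRNN, self).__init__()
        self.hidden_size = hidden_size
        self.encoder = nn.Embedding(n_chars, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.decoder = nn.Linear(hidden_size, n_chars)

    def forward(self, inp, hidden):
        emb = self.encoder(inp).view(1, 1, -1)   # (seq=1, batch=1, hidden)
        out, hidden = self.gru(emb, hidden)
        return self.decoder(out.view(1, -1)), hidden

def generate(model, start_idx, length=10000, temperature=0.8, cuda=False):
    # volatile=True marks the whole forward pass as inference-only,
    # so no autograd buffers accumulate across time steps.
    hidden = Variable(torch.zeros(1, 1, model.hidden_size), volatile=True)
    inp = Variable(torch.LongTensor([start_idx]), volatile=True)
    if cuda:
        hidden, inp = hidden.cuda(), inp.cuda()

    generated = [start_idx]
    for _ in range(length):
        output, hidden = model(inp, hidden)
        # Sample from the temperature-scaled distribution
        # (multinomial accepts unnormalized non-negative weights).
        weights = output.data.view(-1).div(temperature).exp()
        next_idx = torch.multinomial(weights, 1)[0]
        generated.append(next_idx)
        # Re-wrap the next input; it stays volatile, so memory use is flat.
        inp = Variable(torch.LongTensor([next_idx]), volatile=True)
        if cuda:
            inp = inp.cuda()
    return generated
```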

@spro
Owner

spro commented Jun 2, 2017

Good idea, thanks; I'll add this in the next round of updates.
