How can I share memory across my processes in DDP? I'm getting OOM errors with 2 GPUs and a 6 GB dataset. My script would also load faster if it wasn't pickling the dataset and copying it to the other processes.
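One common workaround (a sketch, not a solution confirmed in this thread) is to keep the dataset in a memory-mapped file, so every DDP process reads the same on-disk buffer through the OS page cache instead of receiving a pickled copy. The file path, array shape, and `MemmapDataset` name below are illustrative assumptions:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class MemmapDataset(Dataset):
    """Reads samples from a memory-mapped array. The OS page cache is
    shared across processes, so each DDP worker adds almost no extra RAM."""

    def __init__(self, path, shape, dtype=np.float32):
        # Hypothetical file written offline, e.g. via np.memmap in "w+" mode.
        self.data = np.memmap(path, dtype=dtype, mode="r", shape=shape)

    def __len__(self):
        return self.data.shape[0]

    def __getitem__(self, idx):
        # Copy out a single sample; only the touched pages get loaded.
        return torch.from_numpy(np.array(self.data[idx]))

# Illustrative usage: 100k samples of 1,536 floats (~600 MB on disk).
# dataset = MemmapDataset("features.mmap", shape=(100_000, 1536))
```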
@williamFalcon Would you please explain how #2029 resolves this issue?
It would be nice if you could give some instructions on how to store datasets in shared memory in PyTorch Lightning, thanks!
@JiamingSuen I have the same idea as you. Did you manage to get it working? Could you help me with this?
I ended up splitting the large in-memory dataset into separate shards and letting each worker load only its own shard (see the sketch below).
You may take a look at this code for further reference.
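A minimal sketch of that per-rank splitting idea (the function names and the interleaved-split choice are my assumptions, not taken from the linked code): each process derives its shard from its DDP rank before building the dataset, so peak memory per process is roughly `dataset_size / world_size`.

```python
import torch.distributed as dist

def load_rank_shard(all_paths):
    """Load only this process's interleaved slice of the sample list.

    Assumes dist.init_process_group() has already been called and that
    len(all_paths) is comfortably larger than the world size.
    """
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    shard = all_paths[rank::world_size]  # every world_size-th sample
    # load_sample is a hypothetical per-file loader you would supply.
    return [load_sample(p) for p in shard]
```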
@JiamingSuen But this differs from the single-device setup in how shuffling works. On a single device the shuffle covers the whole dataset, whereas here each worker only shuffles within its own shard, doesn't it?
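For the shuffling concern: if the dataset is kept whole (e.g. memory-mapped as above) rather than pre-split, `torch.utils.data.DistributedSampler` gives each rank a disjoint slice of one global permutation, which is the usual way to approximate single-device shuffling under DDP. A sketch with a placeholder dataset:

```python
import torch
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

dataset = TensorDataset(torch.arange(1000))  # placeholder dataset

# In real DDP, num_replicas/rank are inferred from the process group;
# they are passed explicitly here only so the sketch runs standalone.
sampler = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(3):
    # Without set_epoch, every epoch reuses the same global permutation.
    sampler.set_epoch(epoch)
    for batch in loader:
        pass  # training step goes here
```

Note that PyTorch Lightning attaches a `DistributedSampler` automatically when running under DDP, so the per-shard shuffle only becomes an issue when the dataset is manually pre-split as described above.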