Memory Error: Loads the entire dataset into Memory #184
Comments
I faced the same OOM issue. It's caused by ThreadPoolExecutor in
I tried setting max_workers to 1 as well, but that didn't help. Understanding where the memory is being lost will require more analysis.
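One possible reason max_workers=1 doesn't bound memory is that an executor retains a result for every submitted task until it is consumed, so submitting all tasks up front keeps everything in RAM regardless of the worker count. A hedged, self-contained sketch (the `load_item` and `bounded_map` helpers are illustrative, not fastai's actual code) of capping the number of in-flight futures instead:

```python
# Sketch: why max_workers=1 alone may not bound memory. If all tasks are
# submitted up front, every pending result is held by its Future regardless
# of how many workers run at once. Bounding in-flight futures caps memory.
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def load_item(i):
    return [i] * 1000  # stand-in for loading one sample

def bounded_map(fn, items, pool, max_in_flight=4):
    # Keep at most `max_in_flight` futures alive at any moment.
    items = iter(items)
    futures = [pool.submit(fn, x) for x in islice(items, max_in_flight)]
    while futures:
        done = futures.pop(0)
        yield done.result()
        for x in islice(items, 1):  # refill one slot per completed task
            futures.append(pool.submit(fn, x))

with ThreadPoolExecutor(max_workers=1) as pool:
    total = sum(len(chunk) for chunk in bounded_map(load_item, range(50), pool))

print(total)  # 50 items * 1000 elements = 50000
```

With this pattern, peak memory is proportional to `max_in_flight`, not to the dataset size.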
Linking a related forum thread. The pool executor change doesn't seem to solve the issue for me: 30 GB of RAM is used up in no time by Kaggle's Google Landmark dataset. EDIT: Maybe a slightly different issue, since this one was reported to occur only after a few iterations.
Best off using the forum for this.
Making num_workers = 0 will execute the loading code in the main process; there is an if condition in the DataLoader that takes that single-process path when num_workers is 0.
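The branch described above can be sketched as follows. This is a hedged, stdlib-only illustration of the pattern, not the DataLoader's actual implementation; `iterate_batches` and `process` are hypothetical names:

```python
# Sketch of a loader that branches on num_workers: with num_workers == 0 it
# iterates in the calling process (no executor, no extra buffering);
# otherwise it hands work to a thread pool.
from concurrent.futures import ThreadPoolExecutor

def process(b):
    return b * 2  # stand-in for collating/transforming one batch

def iterate_batches(batches, num_workers=0):
    if num_workers == 0:
        # Single-process path: no worker threads are created.
        for b in batches:
            yield process(b)
    else:
        with ThreadPoolExecutor(max_workers=num_workers) as pool:
            yield from pool.map(process, batches)

print(list(iterate_batches([1, 2, 3], num_workers=0)))  # [2, 4, 6]
```

Setting num_workers=0 is therefore a useful way to test whether the worker path itself is responsible for the memory growth.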
The RAM fills up after a few iterations, which causes problems when handling large datasets. I am not sure, but I think it is loading the entire dataset into memory.
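If the whole dataset really is being materialised up front, the usual fix is to load samples lazily in `__getitem__`. A minimal sketch, assuming only the standard sequence protocol (`LazyDataset` and its synthetic loader are illustrative, not the reporter's code):

```python
# Sketch: a lazy dataset stores only lightweight metadata in __init__ and
# loads (here: synthesizes) each sample on demand in __getitem__, so RAM
# stays flat regardless of dataset size.
class LazyDataset:
    def __init__(self, n_items):
        # No samples are loaded here, only the count.
        self.n_items = n_items

    def __len__(self):
        return self.n_items

    def __getitem__(self, i):
        # Load one sample on demand (real code would read a file here).
        return [float(i)] * 10

ds = LazyDataset(1_000_000)
print(len(ds))    # 1000000, yet almost no memory is used
print(ds[42][0])  # 42.0, loaded only when indexed
```

If memory still grows across iterations with a lazy dataset, the leak is more likely in the iteration machinery (workers, caching, retained references) than in the dataset itself.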