Reduce the memory footprint of the training script and few other bug fixes #231
Conversation
Looks great!
@svlandeg done
Abhijit-2592 commented Dec 24, 2019 (edited)
Fixes #193
This pull request addresses the following 3 problems:

1. The training script loads all the `.npy` files into memory. Thus I have provided an option to load them lazily so that RAM usage does not blow up. This option can be turned on by passing a `--lazy` flag to `learn.py`. Before this I was unable to train on the dataset (approx. 8.5 GB on disk) generated using spaCy's `en_core_web_lg` model on my laptop (16 GB RAM and 6 GB GPU); the dataset required more than 50 GB of RAM to train. After this change the memory footprint is ~5 GB for the same dataset, and training runs on my laptop without hiccups.

spacy model.
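The PR itself isn't shown here, so this is only a hedged sketch of the general idea: instead of reading every `.npy` file fully into RAM, NumPy can memory-map the file so that pages are fetched from disk on demand. The file name and array contents below are made up for illustration.

```python
import os
import tempfile

import numpy as np

# Write a small sample .npy file to disk (stand-in for a real feature file).
path = os.path.join(tempfile.mkdtemp(), "features.npy")
np.save(path, np.arange(12, dtype=np.float32).reshape(3, 4))

# Eager loading: the whole array is read into RAM at once.
eager = np.load(path)

# Lazy loading: mmap_mode="r" returns a read-only np.memmap, so data is
# paged in from disk only when slices are actually accessed, keeping the
# resident memory footprint small for large datasets.
lazy = np.load(path, mmap_mode="r")

assert isinstance(lazy, np.memmap)
assert float(lazy[2, 3]) == float(eager[2, 3])
```

A `--lazy` flag, as described above, would simply switch between these two `np.load` calls; the trade-off is slower random access in exchange for a much smaller RAM footprint.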