
GH-1079: Increase CPU training speed by pinning tensors #1082

Merged
merged 2 commits into from Sep 10, 2019

Conversation

alanakbik
Copy link
Collaborator

This PR makes minor modifications to the .to() method of the DataPoint base class and all implementing classes, adding the option of moving a data point tensor to pinned memory. Pinning a tensor is a one-time cost that makes all subsequent CPU-to-GPU tensor copy operations faster. When training a model with embeddings_storage_mode = 'cpu', tensors are moved from CPU to GPU at each epoch, so this PR increases overall training speed (closes #1079).

This PR also adds a check of whether the .to() operation is necessary at all (it is not if the tensor is already on the target device), yielding a further small increase in training speed.
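A minimal sketch of the pattern the two changes describe, assuming a simplified stand-in for the DataPoint class (the class body and the `pin_memory` flag here are illustrative, not the repository's exact implementation):

```python
import torch

class DataPoint:
    """Simplified stand-in for the DataPoint base class (names are assumptions)."""

    def __init__(self, embedding: torch.Tensor):
        self.embedding = embedding

    def to(self, device: str, pin_memory: bool = False) -> None:
        target = torch.device(device)
        # Change 2: skip the copy entirely if the tensor is already
        # on the target device.
        if self.embedding.device == target:
            return
        if pin_memory and target.type == "cpu" and torch.cuda.is_available():
            # Change 1: pinning is a one-time cost, but subsequent
            # CPU->GPU copies of a pinned tensor are faster (and can
            # run asynchronously with non_blocking=True).
            self.embedding = self.embedding.to(target).pin_memory()
        else:
            self.embedding = self.embedding.to(target)
```

With embeddings_storage_mode = 'cpu', the training loop would call something like `data_point.to('cpu', pin_memory=True)` after each batch, so the per-epoch move back to the GPU benefits from the pinned source memory.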

@kashif
Copy link
Contributor

kashif commented Sep 10, 2019

👍

@alanakbik
Copy link
Collaborator Author

👍

@alanakbik alanakbik merged commit 3c93339 into master Sep 10, 2019
@alanakbik alanakbik deleted the GH-1079-pinned-tensors branch September 11, 2019 11:18
Successfully merging this pull request may close these issues.

Keep word embedding tensors on pinned cpu memory during training