Great tutorial! It would be very cool if you could describe how to use the GPU to run it faster.
It's fairly easy to move the model to the GPU. Try the following. First, get the device you have:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Instantiate your model:
model = LstmClassifier(word_embeddings, encoder, vocab)
Move the model to the GPU:
model = model.to(device)
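To see how these steps fit together, here is a minimal runnable sketch. It uses a plain torch.nn.Linear as a hypothetical stand-in for the tutorial's LstmClassifier; the key point is that input tensors must live on the same device as the model, or PyTorch will raise an error:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in model; replace with your own LstmClassifier.
model = torch.nn.Linear(4, 2)

# Move all of the model's parameters and buffers to the chosen device.
model = model.to(device)

# Inputs must be created on (or moved to) the same device as the model.
x = torch.randn(3, 4, device=device)
out = model(x)
```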
Additionally, the AllenNLP Trainer has an option to pass the cuda_device. So you would have something like the following:
trainer = Trainer(
    model=model,
    optimizer=optimizer,
    iterator=iterator,
    train_dataset=train_dataset,
    validation_dataset=dev_dataset,
    patience=10,
    num_epochs=20,
    cuda_device=device.index,
)
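One caveat with passing device.index to cuda_device: for a CPU device, device.index is None, while AllenNLP's Trainer expects an integer (the GPU index, or -1 for CPU). A small sketch of mapping the device explicitly, assuming the same torch.device setup as above:

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# device.index is None for a CPU device, so map it to AllenNLP's
# convention explicitly: GPU index when on CUDA, -1 when on CPU.
cuda_device = device.index if device.type == "cuda" else -1
```

Then pass cuda_device=cuda_device to the Trainer, and the code works on machines with or without a GPU.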
That's it!
Thanks a lot! 👍