
Added automatic mixed precision (AMP) support for reader training from Haystack side #463

Merged (2 commits) on Oct 12, 2020

Conversation

antoniolanza1996
Contributor

Resolves #462.

@antoniolanza1996
Contributor Author

I have added everything needed to enable AMP on the Haystack side.
However, there are still problems on the FARM side, as you can see from this error message:

RuntimeError: Found param language_model.model.embeddings.word_embeddings.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you need to provide a model with parameters
located on a CUDA device before passing it no matter what optimization level
you chose. Use model.to('cuda') to use the default device.
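The error above comes from apex's requirement that every model parameter already live on a CUDA device before `amp.initialize` is called. The sketch below illustrates that device check with a tiny stand-in class; the names `Param` and `check_params_on_cuda` are purely illustrative, not apex's real internals:

```python
# Illustrative sketch (assumed names, not apex internals): amp.initialize
# rejects any model whose parameters are still CPU tensors.

class Param:
    """Minimal stand-in for a named model parameter on some device."""
    def __init__(self, name, device):
        self.name = name
        self.device = device

def check_params_on_cuda(params):
    """Raise, as apex does, if any parameter tensor is not on a CUDA device."""
    for p in params:
        if not p.device.startswith("cuda"):
            raise RuntimeError(
                f"Found param {p.name} on {p.device}, expected a CUDA device. "
                "Use model.to('cuda') before amp.initialize."
            )

# A model loaded without use_gpu=True keeps its weights on the CPU,
# which is what triggers the error quoted above:
cpu_params = [Param("language_model.model.embeddings.word_embeddings.weight", "cpu")]
try:
    check_params_on_cuda(cpu_params)
except RuntimeError as err:
    print("raised:", err)

# Once the weights are moved to the GPU, the same check passes:
gpu_params = [Param("language_model.model.embeddings.word_embeddings.weight", "cuda:0")]
check_params_on_cuda(gpu_params)
print("ok")
```

In FARMReader terms, passing `use_gpu=True` when constructing the reader is what moves the weights to CUDA before apex sees them.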

@tholor, can you fix it?

@tholor tholor self-assigned this Oct 6, 2020
@tholor
Member

tholor commented Oct 12, 2020

@antoniolanza1996 I finally had a chance to test this. However, I could not replicate the above issue. For me it seems to run correctly when executing:

reader = FARMReader(model_name_or_path="distilbert-base-uncased-distilled-squad", use_gpu=True)
reader.train(data_dir="data/squad20", train_filename="dev-v2.0.json", use_gpu=True, use_amp="O1", n_epochs=1, save_dir="my_model")

Did you maybe load the FARMReader without "use_gpu=True"?

@antoniolanza1996
Contributor Author

@tholor you are right. I have re-run it and now it works well.
Last time I probably misconfigured something, sorry about that.
So I think you can merge this PR now, right?

Member

@tholor tholor left a comment


Yep, we can merge then

@tholor tholor merged commit 3caaf99 into deepset-ai:master Oct 12, 2020
@antoniolanza1996 antoniolanza1996 deleted the add_amp_support branch October 12, 2020 21:03
Successfully merging this pull request may close these issues.

Use AMP training with FARMReader