Call encode() with GPU #2

Closed
victorcwai opened this issue May 25, 2022 · 2 comments
@victorcwai

It would be great if there were an option to run model.encode("squareslab") on the GPU instead of the CPU.

Calling model.to(torch.device("cuda")) alone doesn't work, because the tensors in return_dict are still on the CPU when they reach:

self, return_dict["input_ids"], return_dict["attention_mask"]

@qibinc
Collaborator

qibinc commented May 25, 2022

Hi @victorcwai ,

You can add

return_dict = dict(
    input_ids=return_dict["input_ids"].to("cuda"),
    attention_mask=return_dict["attention_mask"].to("cuda"),
)
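The same move can be written generically over the whole dict, a minimal sketch in plain PyTorch (the CPU fallback and the example tensor values are assumptions for machines without CUDA; the real return_dict comes from the library's tokenizer):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU
# (assumption: the caller's machine may not have CUDA).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Example tokenizer-style output, initially on the CPU.
return_dict = {
    "input_ids": torch.tensor([[101, 2023, 102]]),
    "attention_mask": torch.tensor([[1, 1, 1]]),
}

# Move every tensor in the dict to the target device in one pass.
return_dict = {k: v.to(device) for k, v in return_dict.items()}
```

This avoids listing each key by hand and keeps working if the tokenizer output gains extra fields.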

@victorcwai
Author

Yes, thank you.
But in my opinion, it would also be great to be able to choose between CPU and GPU directly in the encode() API. :-)
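One possible shape for that option is a device parameter on an encode wrapper. This is a hypothetical sketch, not the project's actual API: the encode function, model, and tokenizer names here are illustrative assumptions.

```python
import torch

def encode(model, tokenizer, text, device="cpu"):
    """Hypothetical device-aware encode(): tokenize on the CPU, then
    move the model and the input tensors to the requested device
    ("cpu" or "cuda") before the forward pass."""
    model = model.to(device)
    return_dict = tokenizer(text, return_tensors="pt")
    input_ids = return_dict["input_ids"].to(device)
    attention_mask = return_dict["attention_mask"].to(device)
    with torch.no_grad():
        return model(input_ids, attention_mask=attention_mask)
```

A caller would then write encode(model, tokenizer, "squareslab", device="cuda") and get an embedding on the GPU, with device="cpu" remaining the default.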
