encode()
It would be great if there were an option to call `model.encode("squareslab")` on the GPU instead of the CPU. If we call `model.to(torch.device("cuda"))`, it doesn't work because `return_dict` is still on the CPU:
VarCLR/varclr/models/encoders.py
Line 121 in 2fe9595
Hi @victorcwai ,
You can add
```python
return_dict = dict(
    input_ids=return_dict["input_ids"].to("cuda"),
    attention_mask=return_dict["attention_mask"].to("cuda"),
)
```
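More generally, every tensor the tokenizer returns has to be moved to the model's device. A minimal sketch (the helper name `move_to_device` is ours, not part of VarCLR, and the example batch is made up):

```python
import torch

def move_to_device(batch, device):
    """Move every tensor in a tokenizer output dict to the target device."""
    return {k: v.to(device) for k, v in batch.items()}

# Pick the GPU when one is available, otherwise stay on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for the tokenizer output; real ids will differ.
return_dict = {
    "input_ids": torch.tensor([[101, 2054, 102]]),
    "attention_mask": torch.tensor([[1, 1, 1]]),
}
return_dict = move_to_device(return_dict, device)
```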
Yes, thank you. But in my opinion, it would also be great if we were able to choose between CPU and GPU in the `encode()` API. :-)
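One possible shape for such an API, as a hedged sketch only (the `EncoderWrapper` class, `device` parameter, and stand-in model/tokenizer below are hypothetical, not VarCLR's actual interface): accept a device at construction time, tokenize on the CPU, then move the batch before the forward pass.

```python
import torch

class EncoderWrapper:
    """Hypothetical wrapper sketching a device-aware encode() API."""

    def __init__(self, model, tokenizer, device=None):
        # Fall back to CPU when no GPU is present.
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        self.model = model.to(self.device)
        self.tokenizer = tokenizer

    def encode(self, text):
        batch = self.tokenizer(text)  # tokenizer tensors start on the CPU
        batch = {k: v.to(self.device) for k, v in batch.items()}
        with torch.no_grad():
            return self.model(**batch)

# Stand-ins for demonstration only.
class TinyModel(torch.nn.Module):
    def forward(self, input_ids, attention_mask):
        return input_ids.float().mean(dim=-1)

def fake_tokenizer(text):
    return {
        "input_ids": torch.tensor([[101, 2054, 102]]),
        "attention_mask": torch.tensor([[1, 1, 1]]),
    }

wrapper = EncoderWrapper(TinyModel(), fake_tokenizer)
emb = wrapper.encode("squareslab")
```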