Usually, the way it should work is:
🚀 Feature
Load the model directly on the GPU when available, instead of loading on 1) CPU then 2) GPU
Motivation
Trying to use comet-score with cometkiwi-xl on Colab.
Currently, the load_checkpoint method forces loading onto torch.device("cpu").
On Colab Free there is only 12 GB of CPU RAM, so the XL model does not fit.
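A minimal PyTorch sketch of the requested behavior (the checkpoint path and state-dict layout below are illustrative placeholders, not COMET's actual checkpoint format): passing `map_location` set to the target device to `torch.load` lets the weights land directly on the GPU instead of being materialized as CPU tensors first.

```python
import torch

# Hypothetical tiny checkpoint, standing in for the real cometkiwi-xl file;
# the point is the map_location argument, which COMET's load_checkpoint
# currently hard-codes to torch.device("cpu").
ckpt_path = "tiny_checkpoint.pt"
torch.save({"w": torch.zeros(4)}, ckpt_path)

# Pick the GPU when available instead of always staging the tensors on CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
state = torch.load(ckpt_path, map_location=device)
print(state["w"].device.type)
```

Note that `torch.load` still reads the file through host memory while deserializing, but the resulting tensors are placed on `device` rather than kept as a full CPU copy.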
I then switched torch.device() to "cuda" in init.py.
The model now loads on the GPU fine,
but just before scoring starts, CPU RAM suddenly jumps above 12 GB, and I'm not sure why.
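To narrow down which step causes the jump, here is a stdlib-only debugging sketch (Unix-only, since it uses the `resource` module; the actual cause inside comet-score is not confirmed here) that brackets a suspect step with peak-RSS readings:

```python
import resource

def peak_cpu_ram_mb() -> float:
    # ru_maxrss is reported in KiB on Linux (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

before = peak_cpu_ram_mb()
# Place the suspect step here (e.g. checkpoint loading, or the first
# scoring batch); the bytearray below is only a stand-in allocation.
blob = bytearray(50 * 1024 * 1024)
after = peak_cpu_ram_mb()
print(f"peak RSS grew by ~{after - before:.0f} MB during this step")
```

Running this around checkpoint loading versus the first scoring batch would show whether the spike comes from the load itself or from data preparation just before scoring.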
Any clue?