[FIX] `offload_weight()` takes from 3 to 4 positional arguments but 5 were given
#29457
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hehe great catch!
… were given (huggingface#29457)
* use require_torch_gpu
* enable on XPU
* fix
- Bug introduced by `transformers 4.38.0`, fixed a week ago; the fix will appear in >4.38.2 (huggingface/transformers#29457)
- By restricting `transformers<4.38.0` in `requirements.txt`
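The restriction above amounts to pinning the dependency below the release that introduced the regression. A minimal sketch of the corresponding `requirements.txt` entry (the exact file contents in the affected project are not shown here):

```
# Avoid the regression introduced in transformers 4.38.0
# (huggingface/transformers#29457); the pin can be relaxed
# once a release >4.38.2 containing the fix is available.
transformers<4.38.0
```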
What does this PR do?

2 tests fail with the following messages:

Below is the Traceback:

I remove the additional `model` argument and also change `require_torch_accelerator` to `require_torch_gpu`, because `0` implied `cuda:0`, and on non-NVIDIA-GPU devices I would get `AssertionError: Torch not compiled with CUDA enabled`.

@younesbelkada @SunMarc
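The failure mode in the title can be reproduced with a toy stand-in. The sketch below is not the real `accelerate` helper; it only mirrors the 3-to-4-positional-argument signature from the error message, and the extra `model` argument is the hypothetical culprit this PR removes:

```python
# Toy stand-in mirroring the signature from the error message:
# 3 required positional parameters plus 1 optional one.
def offload_weight(weight, weight_name, offload_folder, index=None):
    """Pretend to offload a weight; returns a dummy index entry."""
    return {weight_name: offload_folder}

try:
    # Buggy call site: a 5th positional argument (`model`) is passed.
    offload_weight("w", "layer.weight", "/tmp/offload", None, "model")
except TypeError as e:
    # Python reports: takes from 3 to 4 positional arguments but 5 were given
    print(e)

# Fixed call site: the extra argument is dropped.
index = offload_weight("w", "layer.weight", "/tmp/offload", None)
print(index)
```

Dropping the extra argument, as this PR does, makes the call match the accepted arity again.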