
Add .to() to OVBaseModel #284

Merged: 3 commits merged on Apr 17, 2023
Conversation

@helena-intel (Collaborator)

Fixes .to() not working for OVModelForCausalLM

@HuggingFaceDocBuilderDev commented Apr 14, 2023

The documentation is not available anymore as the PR was closed or merged.

@Vipitis commented Apr 14, 2023

Hey,
I don't think this sufficiently fixes #279, since OVBaseDecoderModel constructs model.decoder as an OVDecoder with its own _device. .to() only changes model._device, not model.decoder._device, which is what .compile() uses in OVBaseDecoderModel.
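
To illustrate the attribute flow being described, here is a minimal, hypothetical sketch (class and attribute names follow this discussion; the real optimum-intel classes contain much more logic):

    # Hypothetical, simplified classes to illustrate the problem described
    # above; not the actual optimum-intel implementation.
    class OVDecoder:
        def __init__(self, device):
            self._device = device   # fixed at construction time
            self.request = None     # compiled inference request, built lazily

        def compile(self):
            # Compilation targets the decoder's own _device,
            # not the parent model's _device.
            print(f"Compiling decoder for {self._device}")


    class OVBaseModel:
        def __init__(self, device="CPU"):
            self._device = device

        def to(self, device):
            self._device = device   # only the parent attribute is updated
            return self


    class OVBaseDecoderModel(OVBaseModel):
        def __init__(self, device="CPU"):
            super().__init__(device)
            self.decoder = OVDecoder(device=self._device)


    model = OVBaseDecoderModel()
    model.to("GPU")
    print(model._device)           # GPU
    print(model.decoder._device)   # still CPU, so compile() keeps targeting CPU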

happened while I was typing...

@helena-intel (Collaborator, Author)

The simple implementation of moving .to() to OVBaseModel silently failed: after .to("GPU"), inference still ran on CPU because the decoder model was not moved to the new device. For now I moved the existing implementation back to OVModel and added .to() to OVBaseDecoderModel, explicitly setting the correct device on the decoder model there.
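
A rough sketch of that approach, reusing the simplified classes from the comment above (the actual code in this PR may differ; resetting decoder.request follows the review suggestion further down):

    # Sketch only: OVBaseDecoderModel overrides .to() so the device change
    # also reaches the wrapped decoder and triggers recompilation there.
    class OVBaseDecoderModel(OVBaseModel):
        def to(self, device):
            self._device = device
            self.decoder._device = device
            # Drop any previously compiled request so the next inference call
            # recompiles on the new device (assumption; see the suggested
            # test assertion below).
            self.decoder.request = None
            return self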

@helena-intel (Collaborator, Author)

@Vipitis Thanks! We wrote at the same time. I noticed it too and it should be fixed now.

@helena-intel (Collaborator, Author) commented Apr 14, 2023

@echarlaix the test fails with transformers 4.28 (unrelated to this PR). I restricted transformers to <4.28 on my fork and the tests pass there: https://github.com/helena-intel/optimum-intel/actions/runs/4699850769/jobs/8333817513

@echarlaix (Collaborator) left a comment


Thanks a lot for fixing it @helena-intel

Review thread on optimum/intel/openvino/modeling_decoder.py (outdated, resolved)
    # Checks that .to() propagates the device string to the wrapped decoder
    model.to("TEST")
    self.assertEqual(model._device, model.decoder._device)
    self.assertEqual(model.decoder._device, "TEST")

Collaborator review comment on the lines above:

Suggested change:

    self.assertEqual(model.decoder.request, None)
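
With that suggestion applied, the assertion block would read roughly as follows (a sketch; the surrounding test method and model setup are not shown in this thread):

    model.to("TEST")
    # The device string should propagate from the wrapper to the decoder...
    self.assertEqual(model._device, model.decoder._device)
    self.assertEqual(model.decoder._device, "TEST")
    # ...and the compiled request should be cleared so the next inference
    # call recompiles on the new device.
    self.assertEqual(model.decoder.request, None)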

@echarlaix (Collaborator)

> @echarlaix the test fails with transformers 4.28 (unrelated to this PR). I restricted transformers to <4.28 on my fork and the tests pass there: https://github.com/helena-intel/optimum-intel/actions/runs/4699850769/jobs/8333817513

Yes, you're correct, it's unrelated to this PR. I fixed it in #285.

helena-intel and others added 3 commits April 17, 2023 11:21
Fixes `.to()` not working for OVModelForCausalLM
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>