[Intel GPU] enable use of dinov2 models for offload benchmark_low_bit_adam #3191
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3191
✅ No failures as of commit 24d094a with merge base b9e5780. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Force-pushed from 8a44016 to d5312a9.
@arlesniak this is the benchmark part; you also need to enable the related UTs.
The PR changes apply only to the benchmark script when it is run with dinov2 models. The root cause of the error was the hardcoded input size, which is now replaced with a more flexible solution. These changes don't affect TorchAO functionality and, IMHO, don't need unit tests.
@liangan1, @arlesniak, is this an XPU-specific issue? According to the error message, it should be a general issue, right?
@EikanWang It's a general issue; the fix is needed to run the benchmark with these models on Intel GPU and other devices. (https://github.com/pytorch/ao/blob/main/torchao/optim/README.md)
```python
def get_dloader(args, training: bool):
    transforms = [v2.ToImage()]
    input_size = (
```
Can you add an example benchmark command for XPU as a comment at the start of the file?
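Something along these lines could work as the requested header comment; the exact flags and model choice are illustrative assumptions, not taken from the PR, so they should be checked against the script's actual argument parser:

```
# Example benchmark command for Intel GPU (flags are illustrative; verify
# against the script's argparse options before committing):
# python benchmarks/benchmark_low_bit_adam.py \
#     --model timm/vit_giant_patch14_dinov2.lvd142m \
#     --optim AdamW8bit
```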
Force-pushed from 09612c9 to 24d094a.
Fix for the assertion `AssertionError: Input height (224) doesn't match model (518).` when running benchmark_low_bit_adam with, e.g., timm/vit_giant_patch14_dinov2.lvd142m (results for the offload benchmarks).