Add pinned memory support for Int8Tensor (#3489)
Conversation
Helpful links: artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3489
CI status as of commit 5bb5818 (merge base ff6d9e2): 1 new failure. (Reported automatically by Dr. CI.)
Force-pushed from ab9f34d to 4133727.
looks great, thanks!
cc @sayakpaul, please take a look as well and let us know whether the current test is enough to cover the pin_memory functionality.
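To make the review question concrete, here is a hedged sketch of what a pin_memory test for a quantized tensor typically checks; a plain int8 torch.Tensor stands in for torchao's Int8Tensor subclass, and the `pin_if_possible` helper is illustrative, not part of either library:

```python
import torch

def pin_if_possible(t: torch.Tensor) -> torch.Tensor:
    # Pinning (page-locking) host memory requires a CUDA-capable build,
    # so fall back gracefully on CPU-only machines.
    if torch.cuda.is_available():
        return t.pin_memory()
    return t

# Plain int8 tensor as a stand-in for Int8Tensor (assumption, see lead-in).
x = torch.randint(-128, 127, (4, 8), dtype=torch.int8)
pinned = pin_if_possible(x)
assert torch.equal(pinned, x)      # pinning must not change the values
if torch.cuda.is_available():
    assert pinned.is_pinned()      # host memory is now page-locked
```

For a tensor subclass like Int8Tensor, the interesting extra check is that `.pin_memory()` returns the same subclass type and pins all of its inner component tensors (qdata, scale), not just the outer wrapper.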
Force-pushed from 4133727 to 5bb5818.
sayakpaul left a comment:
This is perfect! Thanks!
I guess float already supports this?
Float8Tensor doesn't support this yet; I think we should follow up with that as well. @liangel-02
Will check and update our tests. Please give me a heads-up once this lands for FP8. cc @asomoza
as title
Test
In torchao:

python test/quantization/quantize_/workflows/int8/test_int8_tensor.py -k test_pin_memory

In diffusers:

python -m pytest tests/quantization/torchao/test_torchao.py -k test_torch_compile_with_group_offload_leaf -s

No longer seeing the earlier failure; however, still seeing the known Dynamo error.
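For context on why the diffusers group-offload test exercises this PR: offloading benefits from pinned host buffers because a non_blocking device-to-host copy from or to pinned memory can overlap with GPU compute, while copies involving pageable memory are synchronous. A minimal sketch of the pattern (the `offload_to_cpu` helper is illustrative, not the diffusers or torchao API):

```python
import torch

def offload_to_cpu(t: torch.Tensor) -> torch.Tensor:
    # Allocate a page-locked host buffer when CUDA is present so the
    # device-to-host copy can run asynchronously on the current stream.
    host = torch.empty(t.shape, dtype=t.dtype, device="cpu",
                       pin_memory=torch.cuda.is_available())
    host.copy_(t, non_blocking=True)
    return host

if torch.cuda.is_available():
    dev = torch.randn(16, 16, device="cuda")
    cpu_copy = offload_to_cpu(dev)
    torch.cuda.synchronize()  # wait for the async copy before reading
    assert cpu_copy.is_pinned()
```

This is exactly why Int8Tensor needs to support `.pin_memory()`: without it, group offloading falls back to synchronous copies for the quantized weights.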