Add 0dim Tensor overload for _foreach_div #113688
Conversation
```diff
@@ -48,7 +48,7 @@ def __init__(self, func):
         # Some foreach functions don't have in-place implementations.
         self.is_inplace = False if func is None else func.__name__.endswith('_')

-    def __call__(self, inputs, is_cuda, is_fastpath, **kwargs):
+    def __call__(self, inputs, is_cuda, expect_fastpath, **kwargs):
```
Renamed to `expect_fastpath`, which is what it really is and is less confusing.
This PR is almost just following the steps from #106677, except that we add one feature. Similar to `fused_adam(w)`, for the CUDA dispatches: when the scalar tensor is on CPU, we call `.item()` and redispatch to the normal Scalar overload. Otherwise, the CUDA kernel would complain about a device mismatch between the scalar and the tensors.

Why add this feature? Our optimizers want to allow `lr` as a tensor, and `lr` could be a CPU tensor. `lr` is used with `_foreach_div_` in Adam, so our CI would break otherwise.

After this PR, `_foreach_mul` and `_foreach_div` will accept either a CPU or a GPU tensor for the scalar tensor (vs. only a GPU tensor), joining the ranks of `fused_adam(w)` in this characteristic. I did not yet do the same for `_foreach_add` (the only other foreach op with a `.Tensor` overload) because there is no use case and it would be more involved.

cc @crcrpar
```python
@onlyCUDA
def test_0dim_tensor_overload_exception(self):
    # check exceptions of fast path
    tensors = [make_tensor((2, 2), dtype=torch.float, device="cuda") for _ in range(2)]
    with self.assertRaisesRegex(RuntimeError, "scalar tensor expected to be on"):
        torch._foreach_mul(tensors, torch.tensor(1.0, device="cpu"))
        torch._foreach_add(tensors, torch.tensor(1.0, device="cpu"), alpha=1.0)
```
This works now, but it made me realize I didn't add a case for `_foreach_add` when I added the overload. Adding that now.
Please confirm there is a test case that also ensures that a list of CPU tensors with a list of CUDA tensors properly raises.
test_parity checks that a for-loop over the ops and the foreach op return the same result or raise the same error (see lines 163 to 175 in 42b2b9e). Note that if the reference code errors at line 174, the test will fail.
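The parity pattern described above can be sketched roughly as follows. This is an illustrative re-implementation, not the actual test-suite code; the helper name `check_parity` and its signature are assumptions:

```python
import torch

def check_parity(foreach_op, ref_op, tensors, scalar):
    # Run a plain Python for-loop as the reference implementation.
    try:
        expected = [ref_op(t, scalar) for t in tensors]
    except Exception as ref_err:
        # The reference errored: the foreach op must also raise.
        try:
            foreach_op(tensors, scalar)
        except Exception:
            return  # both paths failed, which counts as parity
        raise AssertionError(f"foreach op succeeded where reference raised {ref_err!r}")
    # The reference succeeded: the foreach op must match it element-wise.
    actual = foreach_op(tensors, scalar)
    assert len(actual) == len(expected)
    for exp, act in zip(expected, actual):
        assert torch.equal(exp, act), "foreach result diverged from reference"
```

For example, `check_parity(torch._foreach_mul, torch.mul, [torch.ones(2)], 3.0)` passes because the fused op and the loop agree.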
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki.
@janeyx99 hmm, at what level will this error? I probably need to add the grouping at the dynamo level if this occurs during FakeTensor tracing, as we discussed.
Ah, what happens is that we'll dispatch into the CUDA impl.
Nah, this is good; this is exactly what I want. I will also only apply the special handling we discussed when there is a single tensor in the second arg.