[STABLE ABI] Add STABLE_DISPATCH_... CPP macros #163973
base: gh/pearu/118/base
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163973
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 9fc7b16 with merge base 84d673e. This comment was automatically generated by Dr. CI and updates every 15 minutes.
torch/csrc/stable/Dispatch.h
Outdated
```cpp
    return #name;

  switch (t) {
    AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_QINTS(DEFINE_CASE)
```
Where is AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_QINTS defined? Isn't it outside the stable API?
This macro is defined in torch/headeronly/core/ScalarType.h, which is included by a number of torch/csrc/stable/ files. Does this inclusion count as being stable?
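For context, macros like this are X-macro lists: the caller passes in a macro that gets applied to every (C type, name) pair. The snippet below is a minimal, hypothetical sketch of the pattern (the names `MY_FORALL_SCALAR_TYPES`, `ScalarType`, and `DEFINE_CASE` here are illustrative, not the real definitions from ScalarType.h):

```cpp
#include <cstdint>
#include <string>

// Hypothetical X-macro list in the spirit of AT_FORALL_SCALAR_TYPES_...;
// the real (much longer) list lives in torch/headeronly/core/ScalarType.h.
#define MY_FORALL_SCALAR_TYPES(_) \
  _(float, Float)                 \
  _(double, Double)               \
  _(int32_t, Int)

enum class ScalarType { Float, Double, Int };

// Expanding the list into switch cases, as the Dispatch.h snippet does:
inline std::string to_string(ScalarType t) {
#define DEFINE_CASE(ctype, name) \
  case ScalarType::name:         \
    return #name;
  switch (t) {
    MY_FORALL_SCALAR_TYPES(DEFINE_CASE)
  }
#undef DEFINE_CASE
  return "Unknown";
}
```

Each entry in the list expands to one `case` label, so adding a dtype to the list automatically updates every switch built from it.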
I think we cannot use AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_QINTS, as it contains dtypes that are unsupported in the stable API. For example, calling libtorch_agnostic.ops.my_empty_like on a qint8 tensor fails with:
```
[E927 21:27:46.276315525 shim_common.cpp:1666] Exception in aoti_torch: false INTERNAL ASSERT FAILED at "/home/pearu/git/pytorch/pytorch-linear/aten/src/ATen/quantized/Quantizer.cpp":441, please report a bug to PyTorch. cannot call qscheme on UnknownQuantizer
Exception raised from qscheme at /home/pearu/git/pytorch/pytorch-linear/aten/src/ATen/quantized/Quantizer.cpp:441 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x91 (0x7f9da216bfe1 in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x7f (0x7f9da20ed082 in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libc10.so)
frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, char const*) + 0x63 (0x7f9da2168353 in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libc10.so)
frame #3: <unknown function> + 0x318ceeb (0x7f9db0821eeb in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #4: at::native::qscheme_quant(at::Tensor const&) + 0x37 (0x7f9daf5238e7 in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #5: at::_ops::qscheme::call(at::Tensor const&) + 0xbb (0x7f9daf7d944b in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #6: at::native::empty_like_quantized(at::Tensor const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) + 0x29a (0x7f9daf2d652a in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x30f0140 (0x7f9db0785140 in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #8: at::_ops::empty_like::redispatch(c10::DispatchKeySet, at::Tensor const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) + 0xf7 (0x7f9dafbfedd7 in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x4ce144d (0x7f9db237644d in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #10: aoti_torch_call_dispatcher + 0x34c (0x7f9db2d210ac in /home/pearu/git/pytorch/pytorch-linear/torch/lib/libtorch_cpu.so)
frame #11: my_empty_like(torch::stable::Tensor) + 0x82 (0x7f9cec3a6582 in /home/pearu/git/pytorch/pytorch-linear/test/cpp_extensions/libtorch_agnostic_extension/install/home/pearu/miniconda3/envs/pytorch-cuda-dev/lib/python3.13/site-packages/libtorch_agnostic/_C.so)
<snip>
```
#161891 now defines STABLE_FORALL_SUPPORTED_SCALAR_TYPES, which is used here instead of AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_QINTS.
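The simplest way to build such a restricted list is to enumerate only the supported dtypes rather than filter the full list. A minimal sketch of the idea (the macro name `MY_STABLE_FORALL_SUPPORTED_SCALAR_TYPES` and the three-entry list are hypothetical, not the actual definition from #161891):

```cpp
#include <cstdint>

// Hypothetical restricted X-macro list: only dtypes the stable ABI
// supports are enumerated, so qint types can never leak into a switch.
#define MY_STABLE_FORALL_SUPPORTED_SCALAR_TYPES(_) \
  _(float, Float)                                  \
  _(double, Double)                                \
  _(int64_t, Long)

// The same list can be reused for compile-time bookkeeping, e.g. counting
// entries by expanding each (ctype, name) pair to "+1":
#define COUNT_ONE(ctype, name) +1
constexpr int kNumSupportedTypes =
    0 MY_STABLE_FORALL_SUPPORTED_SCALAR_TYPES(COUNT_ONE);
#undef COUNT_ONE

static_assert(kNumSupportedTypes == 3, "three dtypes enumerated above");
```

Keeping the restricted list as its own macro (instead of filtering) makes the supported set explicit and greppable.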
A more high-level question: previously, I believe we had decided that torchaudio doesn't actually need these headers. Has that changed? Are they needed now?
torchaudio code currently has 8 places where such dispatch macros are used. The alternative is to explicitly expand the macros at each use site. That said, we could define a subset of these macros within torchaudio, but that also has disadvantages; for instance, keeping them in PyTorch would let other similar porting efforts reuse these macros as well.
@pearu do you know why torchaudio even needs to use the macro, though?
In general, these macros are used in compute kernels to support tensor inputs whose dtype belongs to a set of supported dtypes. The torchaudio test suite typically uses float32 and float64 tensors, plus float16 in CUDA-related routines. Hence, for torchaudio, the set of supported dtypes is float32, float64, and float16.
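To illustrate what such kernel-side dispatch looks like, here is a minimal sketch in the spirit of the AT_DISPATCH_FLOATING_TYPES family: a runtime dtype selects the C++ element type, and the kernel body is a generic lambda instantiated for each supported type. All names here (`dispatch_floating`, `scale_first`, the toy `ScalarType`) are hypothetical, not PyTorch APIs:

```cpp
#include <stdexcept>

enum class ScalarType { Float, Double, Half };

// Hypothetical dispatch helper: maps a runtime dtype to a compile-time
// element type and invokes the kernel body with a value of that type.
template <typename F>
auto dispatch_floating(ScalarType t, F&& f) {
  switch (t) {
    case ScalarType::Float:
      return f(float{});
    case ScalarType::Double:
      return f(double{});
    default:
      // Unsupported dtypes are rejected up front, instead of failing
      // deep inside a kernel (as in the qint8 traceback above).
      throw std::runtime_error("unsupported dtype");
  }
}

// A kernel written once as a generic lambda, dispatched at runtime:
double scale_first(ScalarType t, const void* data, double factor) {
  return dispatch_floating(t, [&](auto tag) {
    using scalar_t = decltype(tag);
    return static_cast<double>(static_cast<const scalar_t*>(data)[0]) * factor;
  });
}
```

The dispatch macros generate essentially this switch for the caller, which is why expanding them by hand at 8 call sites would be repetitive and error-prone.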