# torch native functions cannot be used with `inspect.signature` #28233
Comments
This is a good idea. I would review a PR implementing this.
Summary: Fixes #33182

This adds private API functions that developers of types that implement `__torch_function__` can use to ensure full coverage of the subset of the PyTorch API that can be overridden. I've refactored some of the code in the tests into a new `torch._overrides.get_overridable_functions` function. I've also changed `TENSOR_LIKE_TORCH_OVERRIDES` into `torch._overrides.get_testing_overrides` and `IGNORED_TORCH_FUNCTIONS` into `torch._overrides.get_ignored_functions`. Making these two static global variables in the tests into functions should allow rewriting their implementation to construct their return values instead of just statically defining the return value as is done here. Currently that is blocked on not being able to inspect function signatures of compiled kernels in PyTorch (see #28233). See the docs I've added for usage examples of these new functions. I also refactored the existing override tests to make use of these new functions, which should be a good forcing function to make sure they're kept up to date.

Finally, while working on this I discovered that `TestTorchFunctionOverrides.test_mean` and `TestTorchFunctionOverrides.test_mm` were never being run because they were getting clobbered by the other dynamically generated override tests. I fixed that by renaming the tests and then fixing the actual test code. I've verified that the subclassing semantics are correct and that the updated test answers are correct. I'm happy to put the fixes to the existing tests in a separate pull request if that would be easier to review.

ping cpuhrsch since the feature request originally came from them.

Pull Request resolved: #33791
Differential Revision: D20195053
Pulled By: cpuhrsch
fbshipit-source-id: 1585f4e405f5223932b410eae03a288dc8eb627e
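The `get_testing_overrides` pattern described above can be illustrated with plain stdlib tools: the returned mapping pairs each (notionally native, uninspectable) function with a dummy lambda that carries an inspectable signature. This is only a sketch of the pattern — the entries below are illustrative stand-ins, not the real PyTorch API.

```python
import inspect

def get_testing_overrides():
    """Map (notionally native) functions to dummy lambdas that carry
    the same signature; a stand-in for the private PyTorch API."""
    return {
        # The builtin max stands in for a compiled torch function here:
        max: lambda input, other, *, out=None: -1,
    }

overrides = get_testing_overrides()
# The dummy lambda is introspectable even when the key function is not:
sig = inspect.signature(overrides[max])
print(list(sig.parameters))  # ['input', 'other', 'out']
```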
I am not sure if this is possible to do, though. `torch.add` alone has several overloads:

```python
@overload
def add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...
@overload
def add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...
@overload
def add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...
```

So there isn't a single signature we can generate for this op.
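A stdlib-only sketch of why overloads are a problem for introspection: `typing.overload` declarations exist only for type checkers, and at runtime `inspect.signature` only ever sees the single implementation (simplified untyped signatures used here):

```python
import inspect
from typing import overload

@overload
def add(input, other, *, alpha=1, out=None): ...
@overload
def add(self, alpha, other): ...
def add(*args, **kwargs):
    """The single implementation that actually exists at runtime."""
    raise NotImplementedError

# The overload declarations are invisible to runtime introspection:
print(inspect.signature(add))  # (*args, **kwargs)
```

(Since Python 3.11, `typing.get_overloads(add)` can recover the registered overloads at runtime, which might offer another route for stub-backed APIs.)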
I think you can build signatures that are built up from more than one signature. Looking at the code in https://github.com/python/cpython/blob/5c19050546e3e37a8889a0baa2954e1444e803d3/Lib/typing.py#L2550, it may be possible to build a `Union` from all the signatures collected by the override machinery.
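As a rough sketch of that idea, annotations from several signatures with matching parameter names could be folded into `Union`s. This ignores overloads that differ in arity or parameter names, and the helper and example functions are hypothetical:

```python
import inspect
from typing import Union

# Two hypothetical per-overload signatures with matching parameter names:
def sig_a(input: int, other: int) -> int: ...
def sig_b(input: float, other: float) -> float: ...

def union_signature(*funcs):
    """Merge same-named parameters' annotations into Unions."""
    sigs = [inspect.signature(f) for f in funcs]
    params = [
        inspect.Parameter(
            name,
            inspect.Parameter.POSITIONAL_OR_KEYWORD,
            annotation=Union[tuple(s.parameters[name].annotation for s in sigs)],
        )
        for name in sigs[0].parameters
    ]
    return inspect.Signature(params)

merged = union_signature(sig_a, sig_b)
print(merged.parameters['input'].annotation)  # Union[int, float] (repr varies by version)
```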
Would it at least be possible to include all those overloads in the documentation? Right now
@asmeurer that's possible, and worth adding as another issue.
Any update?
## 🚀 Feature

It would be nice if native functions were annotated with the necessary metadata to allow runtime introspection via the `inspect` module.

## Motivation

Right now, using e.g. `inspect.signature` with a natively defined function produces a `ValueError`.
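For comparison, a few CPython builtins with overloaded call forms (e.g. `max`) hit the same limitation, which makes the failure mode easy to see without PyTorch installed:

```python
import inspect

def safe_signature(func):
    """Return inspect.signature(func), or None when the object carries
    no introspection metadata (as with many compiled functions)."""
    try:
        return inspect.signature(func)
    except ValueError:
        return None

# max has overloaded call forms and no __text_signature__ on CPython,
# so it fails the same way torch natives do; len has the metadata:
print(safe_signature(max))  # None
print(safe_signature(len))  # (obj, /)
```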
## Pitch
It would be nice if the native-function generation facilities in the PyTorch build could create the function objects with the necessary metadata. Cython is able to do this, so it's definitely possible. Unfortunately, I don't know enough about how Cython does this, or about how Python builtins can be annotated with the necessary metadata, to point to further resources for doing this in PyTorch.
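One stdlib mechanism the generated bindings could target: `inspect.signature` honors an explicit `__signature__` attribute on the function object (and, for C functions, parses a `__text_signature__` string). A minimal sketch with a hypothetical wrapper standing in for a generated binding:

```python
import inspect

def native_like(*args, **kwargs):
    """Pretend this is an opaque wrapper around a compiled kernel."""

# A code generator could attach a Signature built from the metadata in
# native_functions.yaml; inspect.signature checks __signature__ first:
native_like.__signature__ = inspect.Signature([
    inspect.Parameter('input', inspect.Parameter.POSITIONAL_OR_KEYWORD),
    inspect.Parameter('other', inspect.Parameter.POSITIONAL_OR_KEYWORD),
    inspect.Parameter('out', inspect.Parameter.KEYWORD_ONLY, default=None),
])

print(inspect.signature(native_like))  # (input, other, *, out=None)
```

(For C extensions, Cython can embed call signatures in docstrings via its `embedsignature` directive, and CPython's own builtins get theirs from `__text_signature__` emitted by Argument Clinic.)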
## Alternatives
Another way to do this would be for PyTorch to provide a public API exposing this information, since it's available in `native_functions.yaml` at build time. I think this is less nice, since it should be possible to integrate with Python's native introspection facilities.
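A sketch of what such a public API could look like, using a hand-written table in place of anything actually generated from `native_functions.yaml` (the table contents and helper name are hypothetical):

```python
import inspect

# Hypothetical table a build step could emit from native_functions.yaml:
_NATIVE_SIGNATURES = {
    'add': '(input, other, *, alpha=1, out=None)',
}

def get_native_signature(name):
    """Turn a stored text signature into an inspect.Signature by
    compiling a throwaway stub with that parameter list."""
    ns = {}
    exec(f'def _stub{_NATIVE_SIGNATURES[name]}: pass', {}, ns)
    return inspect.signature(ns['_stub'])

print(get_native_signature('add'))  # (input, other, *, alpha=1, out=None)
```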