make clamp_bounding_box a kernel / dispatcher hybrid #7227
Conversation
simple_tensor_loader = TensorLoader(
    fn=lambda shape, dtype, device: bounding_box_loader.fn(shape, dtype, device).as_subclass(torch.Tensor),
This is quick and dirty. If we get more such cases in the future, we should add something like an unwrap method to get the plain tensor.
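A hedged sketch of what such a helper could look like; the name `unwrap` is hypothetical and only formalizes the `as_subclass(torch.Tensor)` trick already used in the loader above:

```python
import torch


def unwrap(datapoint: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper, not part of this PR: drop any tensor subclass
    # (e.g. a datapoint) and return the underlying data as a plain torch.Tensor.
    return datapoint.as_subclass(torch.Tensor)


# With it, the loader above could read:
# fn=lambda shape, dtype, device: unwrap(bounding_box_loader.fn(shape, dtype, device)),
```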
"`torch.Tensor`, but then the kernel (rightfully) complains that neither `format` nor " | ||
"`spatial_size` was passed" | ||
), | ||
condition=lambda arg_kwargs: isinstance(arg_kwargs.args[0], BoundingBoxLoader), |
Note that this only applies to `datapoints.BoundingBox` inputs. JIT works fine for simple tensor inputs.
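To make the distinction concrete, here is a minimal, self-contained sketch of the kernel/dispatcher hybrid pattern this PR is about; `clamp_bounding_box_hybrid` and `_clamp_kernel` are illustrative stand-ins, not torchvision's actual implementation:

```python
import torch


def _clamp_kernel(bounding_box, format, spatial_size):
    # Placeholder for the actual clamping logic, which is elided here.
    return bounding_box


def clamp_bounding_box_hybrid(inpt, format=None, spatial_size=None):
    if type(inpt) is torch.Tensor:
        # Kernel path: a simple tensor carries no metadata, so it has to be passed.
        if format is None or spatial_size is None:
            raise TypeError(
                "For simple tensor inputs, `format` and `spatial_size` have to be passed."
            )
        return _clamp_kernel(inpt, format=format, spatial_size=spatial_size)
    # Dispatcher path: metadata is read off the bounding box datapoint and the
    # result is re-wrapped into the same datapoint type.
    output = _clamp_kernel(
        inpt.as_subclass(torch.Tensor), format=inpt.format, spatial_size=inpt.spatial_size
    )
    return type(inpt).wrap_like(inpt, output)
```

Under `torch.jit.script` the datapoint subclass is erased, so a scripted call with a `datapoints.BoundingBox` input ends up on the plain-tensor path and the kernel complains that neither `format` nor `spatial_size` was passed, which is exactly the xfail reason quoted above; simple tensor inputs with explicit metadata are unaffected.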
if isinstance(batch, datapoints._datapoint.Datapoint):
    unbatched = type(batch).wrap_like(batch, unbatched)
Small fix, since this didn't respect datapoint types before. That was not an issue so far, because this helper is only called from kernel tests and all kernels operate on plain tensors, meaning any datapoint would have been unwrapped anyway.
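For context, a sketch of what the fixed re-wrapping achieves; the unbatching logic around it is assumed, and only the `wrap_like` call and the private `datapoints._datapoint.Datapoint` check mirror the diff:

```python
import torch
from torchvision import datapoints  # older versions: torchvision.prototype.datapoints


def unbatch(batch):
    # Split a batched tensor along dim 0. The slices come back as plain tensors,
    # so for datapoint inputs each one is re-wrapped to keep its datapoint type
    # and metadata instead of silently degrading to a plain torch.Tensor.
    unbatched = list(batch.unbind(0))
    if isinstance(batch, datapoints._datapoint.Datapoint):
        unbatched = [type(batch).wrap_like(batch, item) for item in unbatched]
    return unbatched
```

The real test helper may be structured differently (the diff re-wraps a single `unbatched` value, suggesting it recurses); the sketch only illustrates why the `wrap_like` re-wrap is needed.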
Thanks Philip, minor suggestion but LGTM
OK with me to improve the UX
Reviewed By: vmoens
Differential Revision: D44416267
fbshipit-source-id: b3688a4c9d767b5e91c71dac77f44dedde65261b
Addresses #7215 (comment). If that one is approved, I can send a similar PR for torchvision/prototype/transforms/functional/_meta.py (line 189 at 0aed832).

cc @vfdev-5 @bjuncek