
make clamp_bounding_box a kernel / dispatcher hybrid #7227

Merged (8 commits) into pytorch:main on Feb 13, 2023

Conversation

@pmeier pmeier commented Feb 13, 2023

Addresses #7215 (comment). If that one is approved, I can send a similar PR for `convert_format_bounding_box`.

cc @vfdev-5 @bjuncek
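For readers outside the review, the kernel / dispatcher hybrid under discussion can be sketched in plain Python. This is a minimal illustration only: `PlainTensor`, `BoundingBox`, and the clamping logic below are stand-ins for `torch.Tensor` and `datapoints.BoundingBox`, not the actual torchvision implementation. The point is that a single function can act as the kernel (metadata passed explicitly for plain inputs) and as the dispatcher (metadata read off the datapoint):

```python
class PlainTensor:
    """Stand-in for torch.Tensor: wraps a list of (x1, y1, x2, y2) boxes."""

    def __init__(self, data):
        self.data = data


class BoundingBox(PlainTensor):
    """Stand-in for datapoints.BoundingBox: carries spatial_size metadata."""

    def __init__(self, data, spatial_size):
        super().__init__(data)
        self.spatial_size = spatial_size


def _clamp_kernel(boxes, spatial_size):
    # Kernel: clamp XYXY boxes to the image bounds.
    height, width = spatial_size
    return [
        (min(max(x1, 0), width), min(max(y1, 0), height),
         min(max(x2, 0), width), min(max(y2, 0), height))
        for x1, y1, x2, y2 in boxes
    ]


def clamp_bounding_box(inpt, spatial_size=None):
    # Hybrid behavior: for a BoundingBox, the metadata comes from the input
    # and must not be passed again; for a plain tensor, it is required.
    if isinstance(inpt, BoundingBox):
        if spatial_size is not None:
            raise ValueError("spatial_size must not be passed for BoundingBox inputs")
        return BoundingBox(_clamp_kernel(inpt.data, inpt.spatial_size), inpt.spatial_size)
    if spatial_size is None:
        raise ValueError("spatial_size is required for plain tensor inputs")
    return PlainTensor(_clamp_kernel(inpt.data, spatial_size))
```

Calling `clamp_bounding_box(BoundingBox([(-5, 2, 15, 8)], spatial_size=(10, 10)))` clamps the box to `(0, 2, 10, 8)` and returns a `BoundingBox`, while the same boxes as a `PlainTensor` require `spatial_size=(10, 10)` to be passed explicitly.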

Comment on lines +1995 to +1996
simple_tensor_loader = TensorLoader(
fn=lambda shape, dtype, device: bounding_box_loader.fn(shape, dtype, device).as_subclass(torch.Tensor),
pmeier (author):

This is quick and dirty. If we get more such cases in the future, we should add something like an unwrap method to get the plain tensor.
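A hypothetical `unwrap` helper along those lines can be sketched in plain Python. In the actual code the plain tensor is obtained via `.as_subclass(torch.Tensor)` (as in the snippet above); the `Tensor`, `Datapoint`, and `unwrap` names below are illustrative, not torchvision API:

```python
class Tensor:
    """Stand-in for torch.Tensor."""

    def __init__(self, data):
        self.data = data


class Datapoint(Tensor):
    """Stand-in for a datapoints subclass that wraps a tensor."""


def unwrap(datapoint):
    # Return the underlying data rewrapped as the plain base type,
    # mirroring what datapoint.as_subclass(torch.Tensor) does in torch.
    return Tensor(datapoint.data)
```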

"`torch.Tensor`, but then the kernel (rightfully) complains that neither `format` nor "
"`spatial_size` was passed"
),
condition=lambda arg_kwargs: isinstance(arg_kwargs.args[0], BoundingBoxLoader),
pmeier (author):

Note that this only applies to `datapoints.BoundingBox` inputs. JIT works fine for simple tensor inputs.

Comment on lines +162 to +163
if isinstance(batch, datapoints._datapoint.Datapoint):
unbatched = type(batch).wrap_like(batch, unbatched)
pmeier (author):

Small fix, since this previously did not respect datapoint types. It was not an issue before: this helper is only called from kernel tests, and so far all kernels operated only on plain tensors, meaning any datapoint would have been unwrapped anyway.
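The fix can be illustrated with a pure-Python sketch of the `wrap_like` pattern. The class names and the `unbatch` helper below are illustrative stand-ins for `datapoints._datapoint.Datapoint` and the test utility, not the actual code: after unbatching, the result is rewrapped into the same datapoint type as the batch instead of being left as a plain value.

```python
class Datapoint:
    """Stand-in for datapoints._datapoint.Datapoint."""

    def __init__(self, data):
        self.data = data

    @classmethod
    def wrap_like(cls, other, data):
        # Re-create a datapoint of the same concrete type as `other`,
        # copying over any metadata it carries.
        wrapped = cls(data)
        wrapped.__dict__.update(
            {k: v for k, v in other.__dict__.items() if k != "data"}
        )
        return wrapped


class BoundingBox(Datapoint):
    def __init__(self, data, spatial_size=None):
        super().__init__(data)
        self.spatial_size = spatial_size


def unbatch(batch):
    unbatched = batch.data[0]  # take the first sample of the batch
    if isinstance(batch, Datapoint):
        # The fix: preserve the datapoint type and its metadata.
        unbatched = type(batch).wrap_like(batch, unbatched)
    return unbatched
```

Without the `isinstance` branch, `unbatch` would return a plain value and silently drop both the subclass and its `spatial_size` metadata.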

@NicolasHug (Member) left a comment:

Thanks Philip, minor suggestion but LGTM

torchvision/prototype/transforms/functional/_meta.py (suggestion outdated, resolved)
@vfdev-5 (Collaborator) left a comment:

OK with me to improve the UX

@pmeier pmeier merged commit 0316ed1 into pytorch:main Feb 13, 2023
@pmeier pmeier deleted the clamp-bbox-dipatcher branch February 13, 2023 12:33
facebook-github-bot pushed a commit that referenced this pull request Mar 28, 2023
Reviewed By: vmoens

Differential Revision: D44416267

fbshipit-source-id: b3688a4c9d767b5e91c71dac77f44dedde65261b