Implement narrow from a regular tensor to jagged tensor #112770
extreme nit: exclusive upper bound for interval
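For context, an exclusive upper bound means the window is the half-open interval `[start, start + length)`, matching Python's `range` semantics. A minimal sketch of a bounds check under that convention (the function name and messages are illustrative, not from the PR):

```python
def check_narrow_bounds(dim_size: int, start: int, length: int) -> None:
    """Validate a narrow window using the half-open interval [start, start + length)."""
    end = start + length  # exclusive upper bound
    if not (0 <= start <= dim_size):
        raise ValueError(f"start {start} out of range [0, {dim_size}]")
    if not (start <= end <= dim_size):
        raise ValueError(f"window [{start}, {end}) exceeds dimension size {dim_size}")

# With an exclusive upper bound, start == dim_size with length 0 is a valid empty window.
check_narrow_bounds(10, 10, 0)
check_narrow_bounds(10, 2, 5)  # selects indices 2..6, i.e. [2, 7)
```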
nit: a `None` default should indicate strided, consistent with the behavior in other places (e.g. `nested_tensor()` / `as_nested_tensor()`). Also, is `None` being handled? I didn't see it, but I may have just missed it.
It is being handled with a `RuntimeError`.
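Putting the exchange together, the two behaviors under discussion look roughly like this (a sketch using string stand-ins for the real `torch.strided` / jagged layout values; `resolve_layout` is a hypothetical name, not the PR's code):

```python
STRIDED, JAGGED = "strided", "jagged"  # stand-ins for real torch layout objects

def resolve_layout(layout):
    """Current behavior per the thread: an unspecified layout is rejected."""
    if layout is None:
        raise RuntimeError("layout must be specified")
    return layout

def resolve_layout_suggested(layout):
    """Suggested behavior: None defaults to strided, matching
    nested_tensor() / as_nested_tensor()."""
    return STRIDED if layout is None else layout
```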
note: this will go away when we introduce proper dense -> jagged views, which I'm working on. No action needed for this PR
(I believe we removed the need for the `x 1` in the logic.)
We should add the "final offset" to match the old jagged offsets format with shape `B + 1`.
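For illustration, the old jagged offsets format with the trailing "final offset" can be built from per-row lengths like this (a plain-Python sketch; the real implementation would operate on tensors, e.g. via a cumulative sum):

```python
def lengths_to_offsets(lengths):
    """Build jagged offsets of shape B + 1 from B per-row lengths.

    The extra trailing entry (the "final offset") equals the total number
    of elements, so row i spans values[offsets[i]:offsets[i + 1]].
    """
    offsets = [0]
    for n in lengths:
        offsets.append(offsets[-1] + n)
    return offsets

offsets = lengths_to_offsets([3, 1, 4])  # B = 3 rows
assert len(offsets) == 3 + 1             # shape B + 1
assert offsets == [0, 3, 4, 8]
```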
This new type won't be too useful today, as you won't actually be able to register two funcs to the same aten op overload, even if those ops have different schemas.
For now, what you probably want to do is just branch inside whatever op you are trying to implement.
Perhaps in the future we want to go the route of writing a general dispatching mechanism; I'm not sure. Or it's also entirely possible that we'd revert the t vs. jt distinction as well. We don't have many ops implemented, so I think it's too early to decide today.
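A hedged sketch of the "branch inside the op" suggestion, using plain lists and hypothetical helper names rather than the PR's actual kernel code:

```python
# Hypothetical kernel paths; stand-ins for the real contiguous /
# non-contiguous implementations.
def _narrow_contiguous(values, start, length):
    return values[start:start + length]

def _narrow_noncontiguous(values, start, length):
    # e.g. gather element-by-element instead of taking a plain slice
    return [values[i] for i in range(start, start + length)]

def narrow_impl(values, start, length, *, is_contiguous):
    """One function registered for the aten overload; the layout
    distinction is handled by branching inside the op."""
    if is_contiguous:
        return _narrow_contiguous(values, start, length)
    return _narrow_noncontiguous(values, start, length)
```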
ah... I was trying to get out of adding an if statement to every single function we implement. Instead, I will change the `register_func` call to accept a parameter indicating whether non-contiguous tensors are allowed (given that most functions won't accept them at all)
and then add the two paths for the ones that allow them
on second thought, "jt_nc" might not be the best name, but it is the one that allows both the contiguous and non-contiguous versions to pass through to the actual kernel code, vs. "jt", which only lets the contiguous ones pass, to maintain compatibility
yeah, I'm okay with your initial approach. I did something similar to that locally when I was playing around with this, and it minimized the changes needed to existing op impls