[mlir][tosa] Check for 0-ranked-tensors during fold #68512

Merged (1 commit, Oct 20, 2023)

Conversation

gptsarthak (Contributor)

Fixes #67761
Calling getDimSize() before checking for 0-ranked tensors triggers assertion failures. This PR ensures that the rank is checked first.
Or should we instead emit an error when a TOSA operation is given a 0-ranked tensor?
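
For illustration, a minimal repro (adapted from the test added later in this PR). Running -canonicalize on this IR previously tripped the "invalid index for shaped type" assertion inside getDimSize(), since a rank-0 tensor has no dimension 0 for the axis attribute to index:

%0 = tensor.empty() : tensor<i32>
%1 = tosa.reduce_min %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<1x10xi32>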

github-actions bot commented Oct 8, 2023

✅ With the latest revision this PR passed the C/C++ code formatter.

gptsarthak force-pushed the tosa-invalid-shape branch 2 times, most recently from 8079711 to 4531a8c on October 10, 2023.

GeorgeARM (Contributor) commented Oct 10, 2023

Looking again at the spec, it says that the reduce and reverse operators need to take as input a tensor of rank 1 or higher. So the most appropriate solution here really is to change the TOSA operator definitions in TOSAOps.td to accept AnyNon0RankedOrUnrankedTensor as an input. Thoughts @eric-k256?

gptsarthak (Author)

> Looking again at the spec, it says that the reduce and reverse operators need to take as input a tensor of rank 1 or higher.

Where can I find the spec?

GeorgeARM (Contributor) commented Oct 10, 2023

> Looking again at the spec, it says that the reduce and reverse operators need to take as input a tensor of rank 1 or higher.
>
> Where can I find the spec?

Apologies for not including it in my response. Here is the specification: https://www.mlplatform.org/tosa/tosa_spec.html

eric-k256 (Contributor) left a comment


I think this change is good. @GeorgeARM is right that the spec doesn't support rank 0 right now, but we're reviewing rank-0 support and might relax the restrictions on reductions to allow rank 0 with result == input, as this fold does.
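
At the IR level, that result == input behavior looks like this (a sketch, assuming matching input and result types so the fold can apply):

// Before canonicalization: a reduction over a rank-0 tensor.
%0 = tensor.empty() : tensor<i32>
%1 = tosa.reduce_min %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<i32>

// After the fold the reduce is an identity: uses of %1 are replaced with %0
// and the tosa.reduce_min op is removed.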

Just the one request for a better test name. Thanks!


// -----

// CHECK-LABEL: @rank_zero
eric-k256 (Contributor)

Can you make the name more descriptive? Maybe something like fold_reduce_rank_zero.

gptsarthak (Author)

Sure, I renamed it to fold_reduce_rank_zero.

// CHECK-LABEL: @rank_zero
func.func nested @rank_zero() {
%0 = tensor.empty() : tensor<i32>
%1 = tosa.reduce_min %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<1x10xi32>
Contributor

Also worth adding a CHECK-NOT label and ensure that the reduce_min and reverse ops are not present after the canonicalization

gptsarthak (Author)

Added the CHECK-NOT labels for both ops.
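
With the rename and the CHECK-NOT directives in place, the test reads roughly as follows (a sketch of the final form rather than a verbatim copy of the merged file; the tosa.reverse line mirrors the reduce_min one from the same repro):

// CHECK-LABEL: @fold_reduce_rank_zero
func.func nested @fold_reduce_rank_zero() {
  // CHECK-NOT: tosa.reduce_min
  // CHECK-NOT: tosa.reverse
  %0 = tensor.empty() : tensor<i32>
  %1 = tosa.reduce_min %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<1x10xi32>
  %2 = tosa.reverse %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<1x10xi32>
  return
}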

Commit message:
Trying getDimSize() before checking for 0-ranked tensors throws assertion errors.
Fixes llvm#67761
GeorgeARM (Contributor) left a comment

LGTM

eric-k256 (Contributor) left a comment

Thanks, looks good to me.

GeorgeARM (Contributor)

@gptsarthak I was trying to merge the commit, but I see the following author email: 62467938+gptsarthak@users.noreply.github.com. Is this intentional?

gptsarthak (Author)

@GeorgeARM I don't understand what the issue is. Where do you see it?

eric-k256 (Contributor)

That is what you get if you have told GitHub to keep your email address private. When someone merges the change, GitHub removes the email address from the commit and substitutes something like the above. As far as I'm aware, there is no existing LLVM policy on this.

gptsarthak (Author)

I have made my email public. It should no longer be an issue.

Successfully merging this pull request may close these issues:

[mlir] --convert-amdgpu-to-rocdl crashed with assertion failure "invalid index for shaped type" (#67761)