
[CUDA] [Codegen] Ensuring atleast one thread block to handle empty tensor #7273

Merged
merged 1 commit into from Jan 14, 2021

Conversation

anijain2305
Contributor

topk was failing on CUDA when k is a variable whose value is 0 at runtime. On closer inspection, I found that zero thread blocks were being launched at runtime. This PR ensures that there is at least one thread block.
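The core idea of the fix can be sketched in plain Python: when computing the grid size for a launch, clamp the block count to a minimum of one so an empty tensor does not produce an invalid zero-block CUDA launch. Note that `grid_dim` is a hypothetical illustrative helper, not the actual TVM codegen code.

```python
def grid_dim(num_elements: int, threads_per_block: int) -> int:
    """Number of thread blocks for a 1-D launch, never zero.

    Hypothetical helper illustrating the clamp in this PR; the real
    fix lives in TVM's CUDA codegen, not in a function like this.
    """
    # Ceil-divide the element count by the block size...
    blocks = -(-num_elements // threads_per_block)
    # ...then clamp: an empty tensor (num_elements == 0) would
    # otherwise yield a zero-block launch, which CUDA rejects.
    return max(1, blocks)
```

For example, `grid_dim(0, 256)` returns 1 rather than 0, while non-empty shapes are unaffected.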

@anijain2305
Contributor Author

@kevinthesun @masahi @mbrookhart @zhiics @trevor-m Please review.

@anijain2305 anijain2305 changed the title [CUDA] [Codegen] Ensuring atleast one thread block for dynamism [CUDA] [Codegen] Ensuring atleast one thread block to handle empty tensor Jan 13, 2021
@masahi
Member

masahi commented Jan 13, 2021

hmm, I think I've already added a fix for such cases, here:

if attr_key == "thread_extent":
    value = op.max(1, value)

Do you know why it is not working? cc @mbrookhart

@anijain2305
Contributor Author

anijain2305 commented Jan 13, 2021

hmm, I think I've already added a fix for such cases, here:

if attr_key == "thread_extent":
    value = op.max(1, value)

Do you know why it is not working? cc @mbrookhart

Is this because the lines you suggested are specific to the IR builder, while the failure I see comes from an injective schedule?

@mbrookhart
Contributor

Yeah, I think this change catches it at a lower level. We might not need the ir_builder change after this.

Contributor

@kevinthesun kevinthesun left a comment


LGTM

Contributor

@mbrookhart mbrookhart left a comment


LGTM

Member

@masahi masahi left a comment


I see, thanks.
