
test only smaller block_k for mm_plus_mm #96385

Closed: wants to merge 2 commits

Conversation

@ngimel (Collaborator) commented Mar 9, 2023
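
For context, a minimal PyTorch-level sketch of what the fused mm_plus_mm template computes (reference semantics only, not the template itself; the PR merely narrows which BLOCK_K autotuning configs are tried):

```python
import torch

def mm_plus_mm_ref(mat1, mat2, mat3, mat4):
    # Reference semantics of the fused kernel: two matmuls added into one output.
    return mat1 @ mat2 + mat3 @ mat4

# Shapes: mat1 (M, K) @ mat2 (K, N) and mat3 (M, K) @ mat4 (K, N); the fused
# template assumes a shared K. Hypothetical example sizes below.
m1, m2 = torch.randn(64, 32), torch.randn(32, 48)
m3, m4 = torch.randn(64, 32), torch.randn(32, 48)
out = mm_plus_mm_ref(m1, m2, m3, m4)  # shape (64, 48)
```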

@pytorch-bot (bot) commented Mar 9, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/96385

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Merge Blocking SEV

There is 1 active merge-blocking SEV. If you must merge, use @pytorchbot merge -f.

❌ 1 Failure

As of commit fe000f8:

NEW FAILURES - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@bertmaher (Contributor) left a comment


Nice, thank you for figuring this out!

Just a few comments inline:

Comment on lines 79 to 80:

    # Splitting this into two loops causes an internal triton LLVM error
    # https://github.com/openai/triton/issues/967
bertmaher (Contributor):

This comment is stale now, right?

ngimel (Collaborator, Author):

Yep, deleted
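
For reference, a minimal sketch of the fused single K-loop that stale comment referred to: both dot products accumulate in one loop, because splitting them into two loops used to trip the linked Triton LLVM bug. This is a simplified stand-in, not the actual template; it assumes row-major contiguous inputs, a shared K, and M, N, K divisible by the block sizes.

```python
import triton
import triton.language as tl

@triton.jit
def mm_plus_mm_kernel(A, B, C, D, O, M, N, K,
                      BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                      BLOCK_K: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)  # output rows of this tile
    rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)  # output cols of this tile
    rk = tl.arange(0, BLOCK_K)
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k0 in range(0, K, BLOCK_K):
        ka = k0 + rk
        a = tl.load(A + rm[:, None] * K + ka[None, :])
        b = tl.load(B + ka[:, None] * N + rn[None, :])
        c = tl.load(C + rm[:, None] * K + ka[None, :])
        d = tl.load(D + ka[:, None] * N + rn[None, :])
        # one shared K-loop: both products feed the same accumulator
        acc += tl.dot(a, b) + tl.dot(c, d)
    tl.store(O + rm[:, None] * N + rn[None, :], acc)
```

Launched with a 2D grid such as `(M // BLOCK_M, N // BLOCK_N)`.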

Comment on lines 94 to 96 (the diff comments the rematerialization out):

     # rematerialize rm and rn to save registers
    -rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    -rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    +#rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    +#rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
bertmaher (Contributor):

Is this rematerialization a bad idea now, or is it temporary? We should probably either delete it or drop in a comment describing why it's temporary.

ngimel (Collaborator, Author):

Didn't see any difference with or without, deleted
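
For illustration, a hedged sketch of the pattern those deleted lines implemented (a hypothetical epilogue, not the PR's kernel): recomputing rm and rn from the program IDs just before the store, rather than keeping them live across the main loop, frees their registers while pressure is highest.

```python
import triton
import triton.language as tl

@triton.jit
def epilogue_store(O, ACC, M, N, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    # ... the register-hungry K-loop would run here, with rm/rn not yet live ...
    # rematerialize rm and rn to save registers: two cheap index computations
    # instead of two index vectors pinned in registers for the whole loop
    rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    mask = (rm[:, None] < M) & (rn[None, :] < N)
    offs = rm[:, None] * N + rn[None, :]
    acc = tl.load(ACC + offs, mask=mask, other=0.0)
    tl.store(O + offs, acc, mask=mask)
```

As the author notes, it made no measurable difference here, so the lines were dropped.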

    if config.kwargs['BLOCK_K'] < k:
        ...
            (mat1, mat2, mat3, mat4),
            layout,
            **mm_options(config, k, layout),
bertmaher (Contributor):

Maybe add a comment with a pointer to the triton issue so we can revisit someday.

ngimel (Collaborator, Author):

Added
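
A runnable toy of the resulting filter (every name here is a hypothetical stand-in except the `config.kwargs['BLOCK_K'] < k` condition itself): configs whose BLOCK_K covers the whole reduction are skipped, since BLOCK_K == K hit the Triton bug the new comment points to.

```python
from types import SimpleNamespace

mm_configs = [SimpleNamespace(kwargs={"BLOCK_K": bk}) for bk in (16, 32, 64, 128)]
k = 128  # shared reduction size of the two matmuls
choices = []
for config in mm_configs:
    # BLOCK_K == K triggered the internal Triton/LLVM error tracked in
    # https://github.com/triton-lang/triton/issues/1298, so only test
    # configs that tile the reduction with a strictly smaller BLOCK_K.
    if config.kwargs["BLOCK_K"] < k:
        choices.append(config)  # the real code appends a template choice here
assert len(choices) == 3  # the BLOCK_K == 128 config is filtered out
```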

@ngimel (Collaborator, Author) commented Mar 9, 2023

@pytorchbot merge -f "dla102 test flaky"

@pytorchmergebot (Collaborator):

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

ydwu4 pushed a commit to ydwu4/pytorch that referenced this pull request Mar 10, 2023
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 12, 2023
ydwu4 added a commit to ydwu4/pytorch that referenced this pull request Mar 13, 2023
Trim number of tested mm_plus_mm configs to work around triton-lang/triton#1298

Pull Request resolved: pytorch#96385
Approved by: https://github.com/bertmaher, https://github.com/jansel
@ngimel ngimel deleted the ngimel/mm_plus_mm_config branch March 14, 2023 06:08