
Fix mkldnn_matmul error on AArch64 #110150

Closed
wants to merge 1 commit

Conversation

imzhuhl
Contributor

@imzhuhl imzhuhl commented Sep 27, 2023

Fixes #110149
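
For context, a minimal reproduction of the linked issue might look like the sketch below. This is a hypothetical example, not code taken from the issue; it assumes an AArch64 CPU build where bf16 GEMMs are routed through mkldnn_matmul/ACL, as discussed in the comments further down.

```python
import torch

# Hypothetical reproduction of #110149: a Linear layer with out_features=1
# turns the underlying GEMM into a matrix-vector product. On affected
# AArch64 builds this hit the incorrect mkldnn_matmul argument order and the
# forward pass raised an error; on fixed builds it simply succeeds.
x = torch.randn(8, 16, dtype=torch.bfloat16)
linear = torch.nn.Linear(16, 1).to(torch.bfloat16)
y = linear(x)
print(y.shape)  # torch.Size([8, 1])
```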

@pytorch-bot

pytorch-bot bot commented Sep 27, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110150

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 69f60f3 with merge base a51b8df:

UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: linalg_frontend label (release notes category) Sep 27, 2023
@imzhuhl
Contributor Author

imzhuhl commented Sep 27, 2023

The reason is that when the last dimension of c is 1, c will not be marked as transposed, so m1 and m2 will not be swapped. We should call mkldnn_matmul(a, b, ...) rather than mkldnn_matmul(b, a, ...).

If the last dimension of c is 1, there is no need to use ACL; the performance of BLAS (OpenBLAS) is good enough here. Another reason is that this mkldnn_matmul path was added to support bf16 GEMM on AArch64, but this case is a matrix multiplied by a vector, so it cannot take advantage of the performance gains from bfmmla (the bf16 matrix multiplication instructions).
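
To make the shape condition concrete, here is a small illustrative sketch (my own, not code from this PR; the helper function name is hypothetical). When the last dimension of the result c is 1, the matmul is effectively a matrix-vector product, which is exactly the case that should fall back to BLAS rather than the ACL/bfmmla path.

```python
import torch

# The degenerate case described above: b has a trailing dimension of 1, so
# c = a @ b is really a matrix-vector product.
a = torch.randn(128, 256)   # m1
b = torch.randn(256, 1)     # m2: trailing dimension of 1
c = a @ b
print(c.shape)              # torch.Size([128, 1]) -- effectively a matvec

# Hypothetical sketch of the dispatch rule argued for above (the real fix
# lives in ATen C++): skip the mkldnn_matmul/ACL path and fall back to
# BLAS (OpenBLAS) when the result's last dimension is 1, since bfmmla-based
# bf16 GEMM kernels give no benefit for a matrix-vector product.
def should_use_mkldnn_matmul(result_shape, bf16_gemm_supported):
    if result_shape[-1] == 1:
        return False
    return bf16_gemm_supported

print(should_use_mkldnn_matmul(c.shape, True))  # False
```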

@lezcano lezcano removed their request for review September 27, 2023 12:30
@drisspg drisspg added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Sep 27, 2023
@imzhuhl
Contributor Author

imzhuhl commented Oct 12, 2023

Hi @jgong5, do I need approval from other reviewers to merge this PR?

@jgong5
Collaborator

jgong5 commented Oct 13, 2023

> Hi @jgong5, do I need approval from other reviewers to merge this PR?

Let me see.

@jgong5
Collaborator

jgong5 commented Oct 13, 2023

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) Oct 13, 2023
@pytorchmergebot
Collaborator

Merge failed

Reason: Approval needed from one of the following:
nkaretnikov

Details for Dev Infra team: raised by workflow job

Failing merge rule: Core Maintainers

@imzhuhl
Contributor Author

imzhuhl commented Oct 13, 2023

Hi, @nkaretnikov, looks like it needs your approval.

@imzhuhl
Contributor Author

imzhuhl commented Oct 13, 2023

@pytorchbot merge

Thanks, but merge failed. @jgong5

Collaborator

@peterbell10 peterbell10 left a comment


This isn't ideal but seems okay as a hotfix.

@lezcano
Collaborator

lezcano commented Oct 13, 2023

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@luiscosio

Where is this merged, @imzhuhl?

Labels
ciflow/trunk (Trigger trunk jobs on your pull request), Merged, open source, release notes: linalg_frontend (release notes category), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

nn.Linear forward error on AArch64 if the out_features equals to 1
8 participants