
[RELEASE] [AMD] Additional AMD cherry-picks #4175

Merged: 7 commits, Jun 20, 2024

Conversation

@jataylo (Contributor) commented Jun 20, 2024

Cherry picks for release/3.0.x

General:

for RDNA:

Proton HIP PRs:

antiagainst and others added 7 commits June 19, 2024 09:52
…ang#4126)

We have identified a 20% perf regression in our downstream flash
attention perf kernel after switching to linear layout. Initial analysis
shows register pressure is increased to cause spills. Further analysis
is still ongoing.

So this commit introduces a minimal way to selectively disable linear
layout only on the AMD backend, avoiding any effect on the NVIDIA
backend while we continue bringing it up on the AMD side.

(cherry picked from commit e8bc45d)
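The commit above describes a backend-scoped opt-out. A minimal sketch of that pattern, assuming a hypothetical environment variable (the name and function below are illustrative, not the PR's actual mechanism):

```python
import os


def use_linear_layout(backend: str) -> bool:
    """Decide whether the linear-layout path is enabled for a backend.

    Hypothetical knob: linear layout stays on everywhere except when the
    AMD backend is explicitly opted out via an environment variable, so
    the NVIDIA code path is never affected by the switch.
    """
    if backend == "amd" and os.environ.get("AMD_DISABLE_LINEAR_LAYOUT", "0") == "1":
        return False
    return True
```

Gating on backend plus an explicit opt-out keeps the default behavior unchanged and makes the regression workaround easy to remove later.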
This guards us against unsupported cases without asserts. Along the way,
the MFMA/WMMA documentation was slightly improved.

(cherry picked from commit 9a0a7c2)
This PR enables support of 3d dot for RDNA GPUs.

(cherry picked from commit 100e2aa)
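For reference, a 3D dot is a batched matrix multiply: the leading dimension indexes independent 2D products. A minimal NumPy sketch of those semantics (illustrative only; the PR implements this inside the RDNA lowering, not in Python):

```python
import numpy as np


def dot3d(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Batched matmul: (B, M, K) x (B, K, N) -> (B, M, N).

    Each batch slice a[i] @ b[i] is computed independently, which is the
    semantics a 3D dot provides.
    """
    assert a.ndim == 3 and b.ndim == 3
    assert a.shape[0] == b.shape[0] and a.shape[2] == b.shape[1]
    return np.einsum("bmk,bkn->bmn", a, b)
```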
- Pack bf16 elements into int16 vectors;
- Add a lit test;
- The BF16 test cases from test_core.py::test_dot pass for now.
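Packing bf16 into 16-bit integer lanes works because bf16 is exactly the top 16 bits of a float32 encoding. A small NumPy sketch of that bit-level view (illustrative of the idea; the actual packing happens in the LLVM lowering):

```python
import numpy as np


def f32_to_bf16_bits(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to bf16 bit patterns stored as uint16.

    bf16 keeps the sign, the full 8-bit exponent, and the top 7 mantissa
    bits of float32, so the (truncating, non-rounding) conversion is a
    right shift of the raw 32-bit encoding.
    """
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)
```

The resulting uint16 values can be stored in int16 vectors and reinterpreted as bf16 by the hardware.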

The core Triton is a small number of people, and we receive many PRs
(thank you!). To help us review your code more quickly, **if you are a
new contributor (less than 3 PRs merged) we ask that you complete the
following tasks and include the filled-out checklist in your PR
description.**

Complete the following tasks before sending your PR, and replace `[ ]`
with `[x]` to indicate you have done them.

- [x] I am not making a trivial change, such as fixing a typo in a
  comment.

- [x] I have written a PR description following these
  [rules](https://cbea.ms/git-commit/#why-not-how).

- [x] I have run `pre-commit run --from-ref origin/main --to-ref HEAD`.

- Select one of the following.
  - [x] I have added tests.
    - `/test` for `lit` tests

- Select one of the following.
  - [x] The `lit` tests I have added follow these [best
    practices](https://mlir.llvm.org/getting_started/TestingGuide/#filecheck-best-practices),
    including the "tests should be minimal" section. (Usually running
    Python code and using the instructions it generates is not minimal.)

Signed-off-by: Ilya Veselov <iveselov.nn@gmail.com>
(cherry picked from commit 4a1ea8e)
1. Extract duplicated code into GPUProfiler.h
2. Track finished correlation ids for both the CUPTI and AMD profilers

(cherry picked from commit 328b86d)
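Correlation-id tracking is the bookkeeping that lets a profiler match an API-side launch with the activity record that arrives later. A minimal sketch of shared tracking usable by both backends, assuming hypothetical names (this is not Proton's actual API):

```python
import threading


class CorrelationTracker:
    """Illustrative shared correlation-id bookkeeping.

    A CUPTI- or ROCm-based profiler submits an id when an op is launched
    and retires it when the matching activity record arrives; a flush
    can poll outstanding() to know how much work is still in flight.
    """

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._pending: set[int] = set()

    def submit(self, corr_id: int) -> None:
        with self._lock:
            self._pending.add(corr_id)

    def complete(self, corr_id: int) -> None:
        with self._lock:
            self._pending.discard(corr_id)

    def outstanding(self) -> int:
        with self._lock:
            return len(self._pending)
```

Keeping this logic in one place is the kind of deduplication the commit describes: both profilers need identical tracking, so it belongs in a shared header.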
…ng#4090)

Roctracer reports (global) agent ids for the location of async ops, e.g.
kernels and copies. The profiler is better served by zero-based GPU
indexes.

Created a mapping function to apply to values stored in
KernelMetric::DeviceId.

Caveat: if devices are hidden using HIP_VISIBLE_DEVICES, then the HIP
device id (e.g. via hipGetDevice()/hipSetDevice()) will not match the
reported unfiltered id. Additional support in HIP will be needed to map
through the filtering correctly.
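The mapping itself is a simple enumeration of the reported agent ids. A sketch under those assumptions (function name and shape are illustrative; it also cannot account for the HIP_VISIBLE_DEVICES caveat above):

```python
def build_device_index_map(agent_ids: list[int]) -> dict[int, int]:
    """Map global agent ids (as reported by roctracer) to zero-based
    GPU indexes, in the order the agents are enumerated.

    Values stored in a field like KernelMetric::DeviceId would be
    rewritten through this map before being shown to the user.
    """
    return {agent_id: index for index, agent_id in enumerate(agent_ids)}
```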

---------

Co-authored-by: Keren Zhou <robinho364@gmail.com>
(cherry picked from commit 60613fb)
This PR adds Proton HIP GPU utilization metrics and an associated test.

(cherry picked from commit c1776fa)
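A utilization metric of this kind is typically the fraction of a sampling interval during which at least one kernel was resident on the GPU. An illustrative formula only; the actual Proton metric is derived from its own trace data:

```python
def gpu_utilization(busy_ns: int, interval_ns: int) -> float:
    """Fraction of a sampling interval spent busy, clamped to [0, 1].

    busy_ns: total nanoseconds with at least one kernel executing.
    interval_ns: length of the sampling interval in nanoseconds.
    """
    if interval_ns <= 0:
        return 0.0
    return min(busy_ns / interval_ns, 1.0)
```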
@antiagainst (Collaborator) left a comment:

Actually blocking for now to have some additional verification.

@antiagainst (Collaborator) left a comment:

We are fine now. I don't think I can merge; @ThomasRaoux or @ptillet, can you help merge? Probably use rebase-and-merge to keep each commit separate for a clean history.

@ptillet ptillet merged commit 21eae95 into triton-lang:release/3.0.x Jun 20, 2024

9 participants