
'-k' flag does not respect '--max-kernel-num' #187

Closed
skyreflectedinmirrors opened this issue Sep 19, 2023 · 2 comments
Labels: bug (Something isn't working)

@skyreflectedinmirrors (Contributor)

Describe the bug

I want to analyze the 11th most expensive kernel in my application.
At first I was confused that --list-kernels didn't go past 10, but then I realized there is a --max-kernel-num option to extend the list.
However, even with that option applied, I cannot actually select anything past 10 to analyze:

$ omniperf analyze -p workloads/<foo>/mi200/ --max-kernel-num 30 -k 16
--------
Analyze
--------

16 is an invalid kernel filter. Must be between 0-9.

Development Environment:

  • Linux Distribution: RHEL8
  • Omniperf Version: 1.10.0
  • GPU: Mi200
  • Thera

To Reproduce
Steps to reproduce the behavior:

  1. Profile any workload with > 10 kernels
  2. Run analyze on e.g., the 11th kernel

Expected behavior

IMO, we should:

  1. Make the --list-kernels option ignore the max kernel number. If a user is asking for the full list of kernels, they probably want the whole thing, and not to see only the top 10.
  2. If the user is asking for a specific kernel (e.g., -k 11), just give it to them (or report that it's out of bounds), rather than limiting it to --max-kernel-num

Essentially, --max-kernel-num should only expand the set of kernels considered when analyzing and displaying stats across many kernels.
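The proposed semantics could be sketched as follows. This is a minimal illustration with hypothetical helper names, not Omniperf's actual implementation: --list-kernels and -k operate on the full kernel table, and only the aggregate-stats path is capped by --max-kernel-num.

```python
def list_kernels(kernels):
    """--list-kernels: return the full list, ignoring max_kernel_num."""
    return list(kernels)


def select_kernel(kernels, index):
    """-k N: return the Nth kernel, or report that it is truly out of bounds."""
    if not 0 <= index < len(kernels):
        raise ValueError(
            f"{index} is an invalid kernel filter. "
            f"Must be between 0-{len(kernels) - 1}."
        )
    return kernels[index]


def top_kernels(kernels, max_kernel_num):
    """Aggregate stats views: only here does max_kernel_num limit the set."""
    return kernels[:max_kernel_num]
```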

@skyreflectedinmirrors skyreflectedinmirrors added the bug Something isn't working label Sep 19, 2023
@skyreflectedinmirrors (Contributor, Author)

skyreflectedinmirrors commented Sep 19, 2023

https://github.com/AMDResearch/omniperf/blob/cc5bba19f4ed60310371ca2cea0980f461614f9b/src/omniperf_analyze/omniperf_analyze.py#L303

The analyze check is currently hard-coded here^. I verified that changing the check to >= args.max_kernel_num at least lets me see more than 10 kernels, but that doesn't address the "Expected behavior" above.
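The interim workaround described above might look like this. Names are illustrative reconstructions, not the actual code at the linked line:

```python
def validate_kernel_filter(kernel_idx, max_kernel_num):
    """Bounds check for the -k flag.

    Original behavior (hard-coded, indices 0-9 only):
        if kernel_idx >= 10:
            raise ValueError(...)

    Interim fix: bound the check by --max-kernel-num instead.
    """
    if kernel_idx >= max_kernel_num:
        raise ValueError(
            f"{kernel_idx} is an invalid kernel filter. "
            f"Must be between 0-{max_kernel_num - 1}."
        )
    return kernel_idx
```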

coleramos425 added a commit that referenced this issue Oct 11, 2023
@coleramos425 (Collaborator)

Thanks for the suggestion. I agree 100%. I've made the relevant changes, and the functionality now mirrors what you outlined in "Expected behavior".

@coleramos425 coleramos425 self-assigned this Oct 11, 2023
coleramos425 added a commit to jasonray/omniperf that referenced this issue Mar 5, 2024
coleramos425 added a commit that referenced this issue Mar 5, 2024
* Lock Pandas to version 1.4.3 for tutorial

Signed-off-by: colramos-amd <colramos@amd.com>

* Establish connection between kernel related flags in Analyze Mode (#187)

Signed-off-by: colramos-amd <colramos@amd.com>

* Fixing bug in kernel/dispatch filter pytest

Signed-off-by: colramos-amd <colramos@amd.com>

* More CI logic errors fixed

Signed-off-by: colramos-amd <colramos@amd.com>

* Update CHANGES

Signed-off-by: Cole Ramos <colramos@amd.com>

* Update CHANGES

Signed-off-by: Cole Ramos <colramos@amd.com>

* Update README.md

Signed-off-by: Cole Ramos <colramos@amd.com>

* bump statman-stopwatch to 2.18

Signed-off-by: Jason Ray <jayray.net@gmail.com>

* Fix merge mistakes

Signed-off-by: colramos-amd <colramos@amd.com>

---------

Signed-off-by: colramos-amd <colramos@amd.com>
Signed-off-by: Cole Ramos <colramos@amd.com>
Signed-off-by: Jason Ray <jayray.net@gmail.com>
Co-authored-by: colramos-amd <colramos@amd.com>