
Add a new benchmark and document steps: Add a new unaligned matmul test that will exercise failsafes to avoid bad configurations #14052

Merged
merged 6 commits into iree-org:main on Jun 12, 2023

Conversation

@nicolasvasilache (Contributor) commented Jun 12, 2023

For future reference, here are the steps required to add a new matmul benchmark to IREE.
Most of this is described in `build_tools/python/e2e_test_framework/models/README.md`; this is a more hands-on version for adding specific matmul benchmarks the way IREE currently adds them:

Step 1. Add a new entry to `model_groups.py`.
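
For orientation, a Step 1 entry might look like the sketch below; the group name and import path are assumptions for illustration, so mirror an existing group in the real file rather than copying this:

```python
# build_tools/python/e2e_test_framework/models/model_groups.py (illustrative)
from e2e_test_framework.models import matmul

# Hypothetical group list; the real file defines its own groups and members.
MICRO_MATMUL = [
    matmul.MATMUL_UNALIGNED_FP32_MLIR,  # the new entry defined in Step 5
]
```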

Step 2. Generate a UUID with Python:
```python
import uuid

print(uuid.uuid4())
```
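
If you would rather not open an interpreter, the one-liner `python -c "import uuid; print(uuid.uuid4())"` prints the same thing.
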
Step 3. Add an entry and plug the UUID into `build_tools/python/e2e_test_framework/unique_ids.py`.
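
The entry is just a named string constant holding the UUID; the constant name below is hypothetical:

```python
# build_tools/python/e2e_test_framework/unique_ids.py (illustrative)
# Hypothetical constant name; paste the UUID generated in Step 2 as its value.
MICRO_MATMUL_UNALIGNED_FP32_MLIR = "00000000-0000-4000-8000-000000000000"
```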

Step 4. Run `echo "$(date +'%Y%m%d')_$(date +'%s')"` to get a date + timestamp.
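
If you prefer to stay in Python, this produces the same `<date>_<timestamp>` string:

```python
import time
from datetime import datetime

# Equivalent of: echo "$(date +'%Y%m%d')_$(date +'%s')"
print(f"{datetime.now():%Y%m%d}_{int(time.time())}")  # e.g. 20230612_1686556800
```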

Step 5. Add an entry to `build_tools/python/e2e_test_framework/models/matmul.py` using the date + timestamp directory name.
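
For reference, an entry in `matmul.py` looks roughly like the sketch below. The field names follow my reading of the framework's `common_definitions.Model` definition and may not match exactly, so treat everything here as an assumption and mirror an existing entry in the file instead:

```python
# build_tools/python/e2e_test_framework/models/matmul.py (illustrative sketch)
from e2e_test_framework import unique_ids
from e2e_test_framework.definitions import common_definitions

MATMUL_UNALIGNED_FP32_MLIR = common_definitions.Model(
    id=unique_ids.MICRO_MATMUL_UNALIGNED_FP32_MLIR,  # the UUID from Step 3
    name="matmul_unaligned_fp32",
    tags=["fp32", "matmul", "ubench"],
    source_type=common_definitions.ModelSourceType.EXPORTED_LINALG_MLIR,
    # "<date>_<timestamp>" is the directory name produced in Step 4.
    source_url="https://storage.googleapis.com/iree-model-artifacts/"
    "microbenchmarks/matmul/<date>_<timestamp>/matmul_unaligned_fp32.mlirbc",
    entry_function="matmul",
    input_types=["123x456xf32", "456x789xf32"],
)
```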

Step 6. Write the desired .mlir and generate a .mlirbc with `iree-opt --emit-bytecode`.
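
For example, scripted from Python (assuming `iree-opt` is on your `PATH`; the file names are illustrative, and `-o` is the standard output flag for MLIR opt-style tools):

```python
import subprocess

# Convert the textual MLIR into the bytecode form used by the benchmark suite.
subprocess.run(
    ["iree-opt", "--emit-bytecode", "matmul_unaligned_fp32.mlir",
     "-o", "matmul_unaligned_fp32.mlirbc"],
    check=True,
)
```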

Step 7. Upload the .mlirbc to the GCS directory `https://storage.googleapis.com/iree-model-artifacts/microbenchmarks/matmul/<date>_<timestamp>/`, using the date + timestamp generated in Step 4.
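
The upload itself is usually a `gsutil cp` (or `gcloud storage cp`) into that path, assuming you have write access to the bucket; if you would rather script it, the `google-cloud-storage` Python client does the same thing:

```python
# pip install google-cloud-storage; needs credentials with write access.
from google.cloud import storage

bucket = storage.Client().bucket("iree-model-artifacts")
# "<date>_<timestamp>" is the directory from Step 4; the file name is illustrative.
blob = bucket.blob("microbenchmarks/matmul/<date>_<timestamp>/matmul_unaligned_fp32.mlirbc")
blob.upload_from_filename("matmul_unaligned_fp32.mlirbc")
```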

Step 8. Run `build_tools/scripts/generate_cmake_files.sh`

Commit everything.

@nicolasvasilache nicolasvasilache marked this pull request as ready for review June 12, 2023 09:22
@nicolasvasilache nicolasvasilache changed the title Attempting to add a new benchmark and documenting steps: Add a new unaligned matmul test that will exercise failsafes to avoid bad configurations Add a new benchmark and document steps: Add a new unaligned matmul test that will exercise failsafes to avoid bad configurations Jun 12, 2023
@nicolasvasilache nicolasvasilache added the benchmarks:cuda Run default CUDA benchmarks label Jun 12, 2023
@nicolasvasilache nicolasvasilache enabled auto-merge (squash) June 12, 2023 10:50
@nicolasvasilache nicolasvasilache enabled auto-merge (squash) June 12, 2023 10:51
@mariecwhite (Contributor) left a comment

Thanks for documenting the steps!

@nicolasvasilache nicolasvasilache merged commit 23666a6 into iree-org:main Jun 12, 2023
53 checks passed
@github-actions

Abbreviated Benchmark Summary

@ commit 12d7e3ec56a8844cf9bea506c5eb23d7da21c9eb (no previous benchmark results to compare)

Raw Latencies

| Benchmark Name | Average Latency (ms) | Median Latency (ms) | Latency Standard Deviation (ms) |
| --- | --- | --- | --- |
| BertForMaskedLMTF(stablehlo) [cuda-sm\_80-linux\_gnu-cuda][default-flags] cuda(none)[full-inference,default-flags] with zeros @ a2-highgpu-1g[gpu] | 7.148 | 7.119 | 0.096 |
| BertLargeTF(stablehlo) [cuda-sm\_80-linux\_gnu-cuda][default-flags] cuda(none)[full-inference,default-flags] with zeros @ a2-highgpu-1g[gpu] | 10.611 | 10.611 | 0.004 |
| BertLargefp16PTBatch1(linalg) [cuda-sm\_80-linux\_gnu-cuda][default-flags] cuda(none)[full-inference,default-flags] with zeros @ a2-highgpu-1g[gpu] | 5.854 | 5.830 | 0.078 |

[Top 3 out of 19 results showed]

No improved or regressed compilation metrics 🏖️

For more information:

Source Workflow Run

nhasabni pushed a commit to plaidml/iree that referenced this pull request Aug 24, 2023