{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":455825838,"defaultBranch":"shark","name":"SRT","ownerLogin":"nod-ai","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2022-02-05T09:21:28.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/105611245?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1720865013.0","currentOid":""},"activityList":{"items":[{"before":"2ed3f92c6a451b0f17f94f1391c2ec7f3c1defe4","after":"9d6b425c81254d6fc8e18057188c26609320fab4","ref":"refs/heads/main","pushedAt":"2024-07-13T17:01:02.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"powderluv","name":null,"path":"/powderluv","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/74956?s=80&v=4"},"commit":{"message":"[spirv] Push GPU target conversion to before SPIR-V conversion (#17816)\n\nThis commit moves the `SPIRVConvertGPUTargetPass` to right before the\r\n`ConvertToSPIRVPass` in the pipeline. This makes sure we use the same\r\n`#iree_gpu.target` in the majority of the configuration and lowering\r\npasses in the CodeGen flow, and scopes the SPIR-V target environment to\r\nonly the final SPIR-V conversion. With this, we are able to unify and\r\nsimplify lots of SPIR-V tests.\r\n\r\nProgress towards https://github.com/iree-org/iree/issues/16341\r\n\r\nci-extra:\r\ntest_nvidia_gpu,test_nvidia_a100,test_amd_mi250,test_amd_w7900,build_test_all_macos_arm64,build_and_test_android\r\n\r\n---------\r\n\r\nSigned-off-by: Lei Zhang ","shortMessageHtmlLink":"[spirv] Push GPU target conversion to before SPIR-V conversion (iree-…"}},{"before":"be461bd0c17d9e607a316b8312bdc0f62298f581","after":"2ed3f92c6a451b0f17f94f1391c2ec7f3c1defe4","ref":"refs/heads/main","pushedAt":"2024-07-13T05:01:12.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"powderluv","name":null,"path":"/powderluv","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/74956?s=80&v=4"},"commit":{"message":"Add nop pass to different backend.\n\nSigned-off-by: Alan Li ","shortMessageHtmlLink":"Add nop pass to different backend."}},{"before":"44808e143533e694644da13b75a8f8358ee289cf","after":"be461bd0c17d9e607a316b8312bdc0f62298f581","ref":"refs/heads/main","pushedAt":"2024-07-13T00:03:05.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"powderluv","name":null,"path":"/powderluv","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/74956?s=80&v=4"},"commit":{"message":"[LLVMGPU] Support CastTypeToFitMMA on TransformDialect script. (#17884)\n\nPreviously CastTypeToFitMMA relies on the `mma_schedule` attribute on\r\nthe function's translationInfo to obtain information about\r\n`iree.amdgpu.mma`(intrnisic selected).\r\n\r\nWhile this is fine for C++ pipeline, the IR generated from\r\nTransformDialect script do not have such information. Instead IR\r\ngenerated in TD script typically annotate the\r\n`iree.amdgpu.mma`(intrnisic selected) directly on the\r\nvector.contractOps.\r\n\r\nThis is a crucial part of enabling performant the latest attention\r\ncompilation pipeline (with online attn + transpose fusion) which is\r\nbased on TD scripts.\r\n\r\n---------\r\n\r\nCo-authored-by: Kunwar Grover ","shortMessageHtmlLink":"[LLVMGPU] Support CastTypeToFitMMA on TransformDialect script. 
(iree-…"}},{"before":"02c2000795e157e4cf63fbac89d21a1ed886a7b0","after":"44808e143533e694644da13b75a8f8358ee289cf","ref":"refs/heads/main","pushedAt":"2024-07-12T23:01:21.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"powderluv","name":null,"path":"/powderluv","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/74956?s=80&v=4"},"commit":{"message":"Add in-tree special_models test suite using reworked iree-tooling. (#17883)\n\nWith this, we move away from using all the specialized json config files\r\nand complex workflows.\r\nInstead, we use python scripts which allow us to use custom flags,\r\ntolerances, and configurations based on the backend/model.\r\nRelated PR in TestSuite:\r\nhttps://github.com/nod-ai/SHARK-TestSuite/pull/271\r\n\r\nThis PR also removes all dependencies on SHARK-TestSuite tooling.\r\nReworked the tools here so that downloading, caching, testing, and\r\nbenchmarking occurs as intended with tools solely from this repo for\r\niree_special_models. Whenever we are adding test files here, the goal is\r\nfor an IREE user to be able to clone the repo and run the run tests\r\nknowing nothing about the SHARK-TestSuite .\r\n\r\nAlso didn't realize, but ireers here already has a process of stamping\r\nhere to check if a file is already produced. I think we have to remove\r\nthis because it will skip even if there is a newer version of the file\r\navailable and there's really no point when downloading to a cache\r\nbecause once it's there, it is never removed so not a valuable signal.\r\n\r\n(Third times the charm. Had to close the last two versions of this PR\r\nbecause couldn't get passed a pre-commit check that led me to rebase and\r\nadd a bunch of commits that weren't mine 🤦 )\r\n\r\nci-exactly: build_all, test_amd_mi300, build_packages, regression_test\r\n\r\n---------\r\n\r\nSigned-off-by: saienduri ","shortMessageHtmlLink":"Add in-tree special_models test suite using reworked iree-tooling. (i…"}},{"before":"10dfd9d337aef388cdaa725514acdcf0b7f4a3ee","after":"02c2000795e157e4cf63fbac89d21a1ed886a7b0","ref":"refs/heads/main","pushedAt":"2024-07-12T22:01:23.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"powderluv","name":null,"path":"/powderluv","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/74956?s=80&v=4"},"commit":{"message":"Revert \"[LLVMGPU][ROCm] Add MFMA_F32_16x16x4_F32 instruction\" (#17894)\n\nReverts iree-org/iree#17847\r\n\r\nThis broke SDXL rocm pipeline tests on mi300, see\r\nhttps://github.com/iree-org/iree/pull/17847#issuecomment-2226327936. 
2024-07-12 21:01 UTC · 2 commits (6df0372 → 10dfd9d)
[Flow] Improve dispatch name categorization around broadcast/transpose (#17890)
The dispatch names are largely there to tell us:
1) what kind of computation it is, and
2) what fusion came up with.
This patch changes the way broadcast and transpose are labeled to reflect what we want to know about each dispatch. Essentially, it tries to categorize dispatches as follows:
Elementwise: dispatches that are pure elementwise (identity) maps with potentially some minor transposed/broadcasted operands. This indicates that the core memory-bound operands are pure elementwise.
Transpose: same as elementwise, except either the input or output maps are permuted. This indicates that there is data movement happening.
Broadcast: cases where the input maps are all strict projections of the output maps. This should only ever appear if something in fusion went off the rails.

2024-07-12 17:01 UTC · 1 commit (f0d24cd → 6df0372)
Integrate llvm-project @56069ab1a35e74d0d8d632121e1891d41cb56a2d (#17862)
Drop reverted commits:
https://github.com/llvm/llvm-project/commit/fe82af3d2d9b0487e281b9349c61d2831594469f
https://github.com/llvm/llvm-project/commit/4a7695e8d2aefa57e2beb4013dad0300333f6d16
Signed-off-by: yzhang93

2024-07-12 16:01 UTC · 1 commit (05dfe0b → f0d24cd)
[Global opt] add flag to generalize matmul ops (#17877)
Helps when the producer is a broadcast op. After adding the flag to the SDXL scripts, I saw a decent decrease in the number of dispatches. Initially, I was trying to manually generalize and fuse broadcasts (branch here: https://github.com/IanWood1/iree/tree/broadcast_matmul), but Quinn saw good results with just this.
Signed-off-by: Ian Wood
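To make the dispatch-naming rules from the categorization entry above (#17890) concrete, here is a simplified, illustrative sketch with indexing maps modeled as tuples of dimension indices (identity = (0, 1, ..., n-1)); the actual Flow implementation works on mlir::AffineMap and handles more cases:

```python
# Illustrative sketch of the Elementwise / Transpose / Broadcast naming rules
# from iree-org/iree#17890. Indexing maps are modeled as tuples of dimension
# indices; this is a simplification of the real categorization logic.

def is_identity(m, rank):
    return m == tuple(range(rank))

def is_permutation(m, rank):
    return sorted(m) == list(range(rank))

def is_strict_projection(m, rank):
    # Drops at least one dimension but keeps the remaining ones in order.
    return len(m) < rank and list(m) == sorted(set(m))

def categorize(input_maps, output_map):
    rank = len(output_map)
    if all(is_strict_projection(m, rank) for m in input_maps):
        return "broadcast"   # fusion likely went off the rails
    maps = list(input_maps) + [output_map]
    if any(is_permutation(m, rank) and not is_identity(m, rank) for m in maps):
        return "transpose"   # data movement is happening
    return "elementwise"

# 2-D examples: plain copy, a transposed input, and a pure broadcast.
print(categorize([(0, 1)], (0, 1)))  # elementwise
print(categorize([(1, 0)], (0, 1)))  # transpose
print(categorize([(0,)], (0, 1)))    # broadcast
```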
2024-07-12 15:01 UTC · 1 commit (f07c96c → 05dfe0b)
Add a Flow specific canonicalizer pass (#17836)
Certain patterns that are borderline canonicalizations, or that are better suited as a canonical form for certain phases, benefit from having a phase-specific canonicalization pass. The only pattern added here for now is consecutive insert/extract slice folding, which is always beneficial in Flow, but not in Codegen.

2024-07-12 01:27 UTC · 1 commit (65a7bd0 → f07c96c)
Round up sdxl golden dispatch sizes and mi250 times by 10%. (#17879)
See this discussion on Discord: https://discord.com/channels/689900678990135345/689957613152239638/1261059802214305913
These are change-detector tests with _some_ use, but exact checking is too noisy and is causing significant churn across PRs and postsubmit CI runs. We could change the test script to allow a certain error range, but for now just increase the thresholds used in `<=` checks by about 10% (then drop a few digits to 0 so we don't give the appearance of high precision).
ci-exactly: build_packages,regression_test

2024-07-11 23:02 UTC · 1 commit (9d2d766 → 65a7bd0)
Bump goldensize values for sdxl benchmarks. (#17873)
ci-exactly: build_packages,regression_test

2024-07-11 20:01 UTC · 1 commit (20d8308 → 9d2d766)
[LinalgExt] Adding IndexingMaps to linalg_ext.attentionOp (#17864)
In order to make fusion with other generics (specifically transpose) easier, we introduce affineMaps/indexingMaps on linalg_ext.attentionOp. With that, we are also enforcing the number and types of dpsInputs. We are also removing the "transpose_V" attribute in favor of inferring it from the indexingMaps.
Co-authored-by: Kunwar Grover
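Returning to the Flow canonicalizer entry above (#17836): with unit strides, extracting a slice of a slice collapses into a single extract whose offsets are the element-wise sums of the two offset lists. A NumPy sketch of that identity, illustrating the arithmetic of the rewrite rather than IREE's pattern code:

```python
# Folding extract_slice(extract_slice(x, off1, sz1), off2, sz2) into
# extract_slice(x, off1 + off2, sz2), assuming unit strides. The real pattern
# operates on tensor insert/extract slice ops in MLIR.
import numpy as np

def extract_slice(x, offsets, sizes):
    return x[tuple(slice(o, o + s) for o, s in zip(offsets, sizes))]

x = np.arange(8 * 8).reshape(8, 8)

# Two consecutive extracts...
a = extract_slice(extract_slice(x, (1, 2), (5, 5)), (2, 1), (3, 3))
# ...fold into one extract with summed offsets.
b = extract_slice(x, (1 + 2, 2 + 1), (3, 3))

assert np.array_equal(a, b)
```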
2024-07-11 19:01 UTC · 1 commit (429aafd → 20d8308)
[GPU] Fix the propagation control function logic. (#17869)
This is a follow-up from the integrate comment: https://github.com/iree-org/iree/pull/17827#discussion_r1671197583
The comment was not addressed in the integrate PR because the CI was not stable. Some runners were off, and we decided to land the PR and send a follow-up later. See https://discord.com/channels/689900678990135345/1080178290188374049/1260330876651307090 for more details.

2024-07-11 17:01 UTC · 1 commit (85e0da6 → 429aafd)
[Codegen] Improve ROCm-specific LLVM translations (#17742)
Use upstream's translations for attributes like rocdl.kernel to reduce redundancy. Fix the parsing of chipset versions (the last two digits are in base 16).
Signed-off-by: Krzysztof Drewniak

2024-07-11 05:01 UTC · 1 commit (c1611cd → 85e0da6)
Adding some CTS variants for indirect command buffers. (#17846)
This is a simple start that tests a few commands to exercise the infrastructure for record/replay of indirect command buffers and the validation mechanism. We don't yet have a good way in the CTS to conditionally run tests based on whether validation is compiled into the build, but I'll be looking into that for testing failure cases in future PRs (for now I've just tested manually to verify errors are propagated).

2024-07-11 03:01 UTC · 1 commit (b67fef7 → c1611cd)
Deflake LoopTest.WaitAnyBlocking and WaitAllBlocking. (#17863)
Missed these in https://github.com/iree-org/iree/pull/17857 since I only saw `WaitOneBlocking` flake on CI. The other test cases can flake too.
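The chipset-parsing fix above (#17742) refers to AMDGPU `gfx` target names, where the trailing minor and stepping characters are hexadecimal (for example, the `a` in `gfx90a`). A small sketch of that parsing scheme, assuming a `gfx<major><minor><stepping>` layout with the last two characters in base 16; the real fix lives in the C++ chipset parser:

```python
# Parse AMDGPU chipset names of the form gfx<major><minor><stepping>, where
# the last two characters are base-16 digits, as described in
# iree-org/iree#17742. Illustrative sketch only.

def parse_chipset(name: str) -> tuple[int, int, int]:
    assert name.startswith("gfx") and len(name) >= 6
    body = name[3:]
    major = int(body[:-2], 10)    # decimal major version
    minor = int(body[-2], 16)     # hexadecimal
    stepping = int(body[-1], 16)  # hexadecimal
    return major, minor, stepping

print(parse_chipset("gfx90a"))   # (9, 0, 10)
print(parse_chipset("gfx1100"))  # (11, 0, 0)
```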
2024-07-11 01:27 UTC · 3 commits (3b2c85b → b67fef7)
Bump goldentime values for mi250 sdxl benchmarks. (#17860)
Seeing some variance causing CI failures with unrelated changes:
* https://github.com/iree-org/iree/actions/runs/9879301822/job/27286342282#step:15:46
  * `VAE Decode Benchmark Time: 295.0 ms (golden time 291.0 ms)`
  * `Prompt Encoder Benchmark Time: 17.2 ms (golden time 17.0 ms)`
* https://github.com/iree-org/iree/actions/runs/9877397187/job/27279465007
  * `VAE Decode Benchmark Time: 295.0 ms (golden time 291.0 ms)`
  * `Prompt Encoder Benchmark Time: 17.1 ms (golden time 17.0 ms)`
Aside: we might want to move the benchmarks to a `pkgci_benchmarks.yml` workflow instead of chaining them on to the end of `pkgci_regression_test.yml`.
ci-exactly: build_packages,regression_test

2024-07-11 00:03 UTC · 2 commits (e794ce8 → 3b2c85b)
Drop unmaintained transform dialect tests (#17858)
The tests in `tests/transform_dialect` have been unmaintained (and mostly disabled) for a while. Most of the tests were misplaced anyway, mixing lit tests in with e2e tests, and much of the relevant functionality is now tested upstream or has evolved into different lit tests in Codegen. This drops the entire `cuda` directory, and drops all `cpu` tests except one which is testing a transform dialect library call, which is still a relevant e2e test.

2024-07-10 23:01 UTC · 1 commit (d174e8b → e794ce8)
Simplify tests/e2e/linalg_ext_ops. (#17856)
Forked from https://github.com/iree-org/iree/pull/17766
* Share test srcs lists between:
  * Vulkan and Metal (both using SPIR-V codegen)
  * CUDA and ROCm (both using LLVMGPU codegen)
* Enable `winograd_input.mlir` and `winograd_output.mlir` tests on more backends
* Add Metal and ROCm/HIP tests
* Skip wasm tests using a label instead of CMake branching

2024-07-10 19:01 UTC · 2 commits (534928d → d174e8b)
Disable failing Pixel6/Vulkan stablehlo_ops tests again. (#17851)
Patching over the regressions reported here: https://github.com/iree-org/iree/pull/17843#issuecomment-2221070467
I'm not sure we should still be testing on Pixel 6 though. We might want to switch to a newer phone or drop those tests down a support level.
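The golden-value entries above (and the earlier 10% round-up, #17879) all deal with thresholds that are too tight for run-to-run variance. The 10% commit itself mentions the alternative of allowing an error range in the test script; a minimal sketch of what such a relative-tolerance check could look like, as a hypothetical helper rather than the actual benchmark_sdxl_rocm.py logic:

```python
# Hypothetical relative-tolerance check for benchmark golden times, sketching
# the "allow a certain error range" alternative mentioned in iree-org/iree#17879.

def check_benchmark(name: str, measured_ms: float, golden_ms: float,
                    rel_tol: float = 0.10) -> bool:
    limit = golden_ms * (1.0 + rel_tol)
    ok = measured_ms <= limit
    status = "OK" if ok else "FAIL"
    print(f"{status}: {name}: {measured_ms:.1f} ms "
          f"(golden {golden_ms:.1f} ms, limit {limit:.1f} ms)")
    return ok

# Numbers from the CI logs quoted above: with 10% headroom these pass even
# though they exceeded the exact golden values.
check_benchmark("VAE Decode", 295.0, 291.0)
check_benchmark("Prompt Encoder", 17.2, 17.0)
```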
2024-07-10 18:01 UTC · 1 commit (78c0051 → 534928d)
[LLVMGPU] Add debug print for contraction problem size. NFC. (#17845)
Signed-off-by: Jakub Kuderski

2024-07-10 17:01 UTC · 2 commits (9ac1015 → 78c0051)
Simplify tests/e2e/stablehlo_ops. (#17843)
Forking this from https://github.com/iree-org/iree/pull/17766 to just look at a single directory.
* Moved Metal and ROCm tests from being exclusively defined in CMake to being defined (but then no-op'd) in Bazel
* Taught the test function to insert `--iree-rocm-target-chip=${IREE_HIP_TEST_TARGET_CHIP}` (not happy that this is required though)
* Merged test srcs down to a single `ALL_SRCS` glob for test suites that work across all configurations
* Enabled previously disabled tests
  * Fixes https://github.com/iree-org/iree/issues/9583
  * Fixes https://github.com/iree-org/iree/issues/12415 (maybe; might have to disable those tests on Android/Vulkan again)

2024-07-10 06:01 UTC · 1 commit (6f25718 → 9ac1015)
Enable MI300 CI testing. (#17842)
This commit enables MI300 GPU and model testing.
ci-exactly: build_all, test_amd_mi300, build_packages, regression_test
Signed-off-by: saienduri
Co-authored-by: Scott Todd

2024-07-10 05:01 UTC · 1 commit (8513e5f → 6f25718)
Switching HAL CTS to use TEST_F. (#17844)
This allows tests to use parameters if they want to (and they do!).
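The HAL CTS entry above switches the tests to GoogleTest's `TEST_F`, which attaches each case to a fixture class so the cases can consume shared parameters. As a loose analogy only (in Python's unittest rather than GoogleTest, and not the CTS code itself), a class-based fixture lets every test method read parameters prepared by the fixture:

```python
# Loose Python-unittest analogy for why fixture-based tests (GoogleTest's
# TEST_F in the actual commit) make parameterization easy: every test method
# can read shared parameters prepared by the fixture class.
import unittest

class CommandBufferTest(unittest.TestCase):
    # Parameters the whole suite can consume; a CTS-style harness could swap
    # these per driver/configuration. Values here are illustrative.
    driver_name = "local-sync"
    buffer_size = 4096

    def setUp(self):
        # Per-test setup can also depend on the parameters.
        self.scratch = bytearray(self.buffer_size)

    def test_fill(self):
        self.scratch[:] = bytes([0xAB]) * self.buffer_size
        self.assertTrue(all(b == 0xAB for b in self.scratch))

if __name__ == "__main__":
    unittest.main()
```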
2024-07-09 22:01 UTC · 4 commits (0e5474b → 8513e5f)
Resolving binding references when applying deferred command buffers. (#17840)
This uses the binding table passed in during application to translate any indirect buffer references into direct ones before passing them on to the replay target. This allows the replay target to assume all incoming buffer refs are direct.

2024-07-09 21:01 UTC · 2 commits (94ecb8b → 0e5474b)
Add macOS runtime build using GitHub-hosted macos-14 runner. (#17835)
Hoping to get earlier signal on breaks in `hal/drivers/metal/` like https://github.com/iree-org/iree/pull/17730#issuecomment-2217594021 (see a sample CI run with that error reported on this PR: https://github.com/iree-org/iree/actions/runs/9860754926/job/27227679469?pr=17835#step:6:551).
Unfortunately, the GitHub-hosted runners don't support Metal, so we can compile but not actually run those tests: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#limitations-for-arm64-macos-runners
ci-exactly: build_test_runtime

2024-07-09 20:01 UTC · 1 commit (4d204ea → 94ecb8b)
[NFC] Modify method for characterizing bit-extension operations to handle characterization of bit-truncation as well. (#17833)
Currently the method for bit extension (a.k.a. dequantization ops) only returns a `bool`. Make this method return a richer handle, which can also allow classification of bit-truncation operations. Also rename the `isDequantization` method to `isBitExtend`.
Signed-off-by: MaheshRavishankar

2024-07-09 18:01 UTC · 1 commit (0c90e5e → 4d204ea)
[NFC] Rename `FusionOfTensorOps` to `FuseMultiUseElementwiseProducer`. (#17828)
This was a refactoring left out of the previous refactoring of elementwise operation fusion. The `FusionOfTensorOps` pass was only handling fusion of elementwise operations where the producer had multiple uses. Rename the pass to make this clearer. Also split/rename the tests to make the mapping of tests to passes/pipelines clearer.
Signed-off-by: MaheshRavishankar
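A small sketch of the binding-table idea from the deferred command buffer entry above (#17840): indirect references carry a slot index into a binding table supplied at apply time, and resolution rewrites them into direct buffer references before replay. This is an illustrative model with made-up types, not the IREE HAL C code:

```python
# Illustrative model of resolving indirect binding references against a
# binding table at command-buffer apply time (iree-org/iree#17840). The
# dataclasses here are stand-ins, not IREE's actual HAL types.
from dataclasses import dataclass

@dataclass(frozen=True)
class DirectRef:
    buffer: str          # direct reference to a concrete buffer
    offset: int = 0

@dataclass(frozen=True)
class IndirectRef:
    slot: int            # index into the binding table provided at apply time
    offset: int = 0

def resolve(refs, binding_table):
    """Translate indirect refs into direct ones so the replay target only
    ever sees direct buffer references."""
    resolved = []
    for ref in refs:
        if isinstance(ref, IndirectRef):
            base = binding_table[ref.slot]
            resolved.append(DirectRef(base.buffer, base.offset + ref.offset))
        else:
            resolved.append(ref)
    return resolved

table = [DirectRef("buffer_A"), DirectRef("buffer_B", offset=256)]
cmds = [IndirectRef(slot=1, offset=64), DirectRef("buffer_C")]
print(resolve(cmds, table))
# -> [DirectRef(buffer='buffer_B', offset=320), DirectRef(buffer='buffer_C', offset=0)]
```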
2024-07-09 15:01 UTC · 1 commit (e24ea82 → 0c90e5e)
Fixing Metal build break.

2024-07-09 04:01 UTC · 1 commit (3f6bf8c → e24ea82)
[Flow][Global Opt] Fold unit dims of `stream.parameter.named` (#17824)
Signed-off-by: Ian Wood

30 items shown per page; older activity continues on the next page.