[CI] Determine which gpu tests, if any, can be parallelized, and strategy to do so #8675
Labels: needs-triage (PRs or issues that need to be investigated by maintainers to find the right assignees to address it)
#8576 is enabling pytest-xdist for CPU-targeted TVM tests. This is a good first step toward parallelizing the TVM test suite, but the real benefits will come if we can find a way to do the same for tests that target GPUs. It's not clear whether this can be done.
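One strategy worth evaluating is pinning each pytest-xdist worker to its own device, so workers never contend for the same GPU's memory. A rough sketch, assuming a multi-GPU CI node; `PYTEST_XDIST_WORKER` is set by pytest-xdist for each worker process, while `NUM_TEST_GPUS` is a made-up variable the CI job would have to export (or we could discover the count via nvidia-smi):

```python
# conftest.py: pin each pytest-xdist worker process to a single GPU.
import os


def pytest_configure(config):
    # pytest-xdist names its workers "gw0", "gw1", ...; the controller
    # process does not set this variable, so it is left untouched.
    worker = os.environ.get("PYTEST_XDIST_WORKER", "")
    if worker.startswith("gw"):
        worker_id = int(worker[len("gw"):])
        # NUM_TEST_GPUS is hypothetical; the CI job would need to export it.
        num_gpus = int(os.environ.get("NUM_TEST_GPUS", "1"))
        # Restrict this worker to one device. This has to happen before the
        # CUDA runtime is initialized in the worker, which is why it lives in
        # pytest_configure rather than in a per-test fixture.
        os.environ["CUDA_VISIBLE_DEVICES"] = str(worker_id % num_gpus)
```

Whether this actually holds up under GPU memory pressure and driver contention is exactly the open question in this issue.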
I think:

- we could use `tvm.testing` decorators to formally identify these in code (a minimal sketch is included below)

Known unknowns that could break this approach:
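To make the decorator bullet above concrete: a minimal sketch, assuming `tvm.testing.requires_gpu` (which, as far as I can tell, tags a test with a pytest `gpu` marker and skips it when no GPU is present) is the mechanism for identifying GPU tests in code. The test names below are purely illustrative:

```python
import tvm.testing


@tvm.testing.requires_gpu
def test_needs_a_device():
    # Hypothetical test body: anything that compiles and runs on a GPU target.
    pass


def test_cpu_only_helper():
    # Undecorated: never touches a GPU, so it should be safe to run in
    # parallel with other CPU-only tests under pytest-xdist.
    pass
```

If the marker is reliable, CI could run something like `pytest -m "not gpu" -n auto` for the CPU-safe subset and keep the `gpu`-marked tests serialized (or pinned per device as sketched above) until parallel GPU runs are proven stable.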
Feel free to generate actionable child issues as a result of this. If things are not actionable and require deliberation, it would be great to post to discuss.tvm.ai instead; we like to keep GitHub issues to those with clear steps to resolve.
@mikepapadim to take a look at this
cc @Mousius @denise-k @driazati @gigiblender @jroesch @leandron