[Tracking][Contrib] Known failing unit tests #8901

Open
Lunderberg opened this issue Sep 1, 2021 · 2 comments
Labels
frontend:coreml python/tvm/relay/frontend/coreml.py

Comments


Lunderberg commented Sep 1, 2021

Summary

Some unit tests were unintentionally disabled in CI, so regressions weren't caught. These tests didn't run on the ci_cpu image because it lacks either the GPU hardware or the Python packages required to run them. They didn't run on the ci_gpu image because they weren't marked with `tvm.testing.uses_gpu`. PR #8902 allows the tests to run and marks the tests with regressions as expected failures. These expected failures should be resolved to restore full functionality.
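For reference, a test only gets picked up by the GPU-only CI scripts if it carries a GPU-related mark from `tvm.testing`. A minimal sketch, assuming the usual `tvm.testing.uses_gpu` / `tvm.testing.enabled_targets()` idiom (the test name is illustrative, not one of the tests tracked here):

```python
import tvm.testing


# Without a GPU-related mark such as this one, the GPU-only CI scripts
# (e.g. task_python_integration_gpuonly.sh) never collect the test.
@tvm.testing.uses_gpu
def test_example():
    for target, dev in tvm.testing.enabled_targets():
        pass  # exercise the code under test on each enabled target, including GPU
```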

Status

| File | Unit test | Status | Owner | PR |
|------|-----------|--------|-------|----|
| test_tensorrt.py | test_dynamic_reshape | DONE | | #11203 |
| test_tensorrt.py | test_alexnet | DONE | | #11203 |
| test_tensorrt.py | test_resnet18_v1 | DONE | | #11203 |
| test_tensorrt.py | test_resnet18_v2 | DONE | | #11203 |
| test_tensorrt.py | test_squeezenet | DONE | | #11203 |
| test_tensorrt.py | test_mobilenet | DONE | | #11203 |
| test_tensorrt.py | test_mobilenet_v2 | DONE | | #11203 |
| test_tensorrt.py | test_vgg11 | DONE | | #11203 |
| test_tensorrt.py | test_densenet121 | DONE | | #11203 |
| test_tensorrt.py | test_dynamic_offload | DONE | | #11203 |
| test_coreml_codegen.py | test_annotate | TODO | | |
Lunderberg added a commit to Lunderberg/tvm that referenced this issue Sep 1, 2021
[Pytest][TensorRT] Mark the TensorRT tests with tvm.testing.requires_cuda

Previously, the tests had an early bailout if tensorrt was disabled,
or if there was no cuda device present.  However, the tests were not
marked with `pytest.mark.gpu` and so they didn't run during
`task_python_integration_gpuonly.sh`.  This commit adds the
`requires_cuda` mark, and maintains the same behavior of testing the
tensorrt compilation steps if compilation is enabled, and running the
results if tensorrt is enabled.

In addition, some of the tests result in failures when run.  These
have been marked with `pytest.mark.xfail`, and are being tracked in
issue apache#8901.
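A minimal sketch of the marking pattern this commit message describes, with an illustrative test name; the actual tests in `test_tensorrt.py` carry more logic:

```python
import pytest
import tvm.testing


# The CUDA requirement is expressed as a pytest mark rather than an early
# return, so the GPU-only CI scripts collect the test; known regressions are
# annotated as expected failures and tracked in this issue.
@tvm.testing.requires_cuda
@pytest.mark.xfail(reason="Known failure, tracked in apache/tvm#8901")
def test_example_tensorrt_case():
    pass  # compile with the TensorRT BYOC target; run only if TensorRT is enabled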
leandron pushed a commit that referenced this issue Sep 2, 2021
* [UnitTests][CoreML] Marked test_annotate as a known failure.

The unit tests in `test_coreml_codegen.py` haven't run in the CI
lately, so this test wasn't caught before.  (See tracking issue #8901.)

- Added `pytest.mark.xfail` mark to `test_annotate`.

- Added `tvm.testing.requires_package` decorator, which can mark tests
  as requiring a specific python package to be available.  Switched
  from `pytest.importorskip('coremltools')` to
  `requires_package('coremltools')` in `test_coreml_codegen.py` so
  that all tests would explicitly show up as skipped in the report.

- Added `uses_gpu` tag to all tests in `test_coreml_codegen.py`, since
  only ci_gpu has coremltools installed.  In the future, if the ci_cpu
  image has coremltools installed, this mark can be removed.

* [Pytest][TensorRT] Mark the TensorRT tests with tvm.testing.requires_cuda

Previously, the tests had an early bailout if tensorrt was disabled,
or if there was no cuda device present.  However, the tests were not
marked with `pytest.mark.gpu` and so they didn't run during
`task_python_integration_gpuonly.sh`.  This commit adds the
`requires_cuda` mark, and maintains the same behavior of testing the
tensorrt compilation steps if compilation is enabled, and running the
results if tensorrt is enabled.

In addition, some of the tests result in failures when run.  These
have been marked with `pytest.mark.xfail`, and are being tracked in
issue #8901.
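For the CoreML half of the change, the `requires_package` decorator replaces the module-level `importorskip`. A rough sketch of the difference, assuming the helper added in #8902 is used as a decorator factory (the exact usage in `test_coreml_codegen.py` may differ):

```python
import tvm.testing

# Before: a module-level `pytest.importorskip("coremltools")` skips the whole
# file at collection time, so individual tests never appear in the report.

# After: each decorated test is reported as skipped when coremltools is missing.
requires_coremltools = tvm.testing.requires_package("coremltools")


@tvm.testing.uses_gpu
@requires_coremltools
def test_annotate():
    pass  # body elided; the real test lives in test_coreml_codegen.py
```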
ylc pushed a commit to ylc/tvm that referenced this issue Sep 29, 2021
ylc pushed a commit to ylc/tvm that referenced this issue Jan 13, 2022

mikepapadim commented Mar 8, 2022

@Lunderberg I am tracking down the TRT BYOC and I have figured out the issue with these tests: it is the MXNet importer plus a guard ICHECK on the TRT builder. I have a small patch fixing it, but it also switches to importing these models directly in ONNX format. The only issue is that downloading the models in ONNX format from GitHub during CI requires a Git LFS download instead.


mbs-octoml commented Jun 15, 2022

All the test_tensorrt.py tests are running except two, which are captured in #11765.

Note that CI never exercises `run_module=True`; that aspect is also captured in #11765.

@areusch added the needs-triage label Oct 19, 2022
@Lunderberg added the frontend:coreml label and removed the needs-triage label Nov 16, 2022