Description
Currently, most tests in test/test_transforms_tensor.py rely on `unittest.TestCase`. Now that we support pytest, we want to remove the use of the `unittest` module.
Instructions
There are many tests in this file, so I bundled them into multiple related groups below. If you're interested in working on this issue, please comment below with "I'm working on <group X>, <group Y>, etc..." so that others don't pick the same tests as you do. Feel free to pick as many groups as you wish, but please don't submit more than 2 groups per PR in order to keep the reviews manageable. Before picking a group, make sure it wasn't already picked by another contributor. Thanks!!
How to port a test to pytest
Porting a test from `unittest` to pytest is usually fairly straightforward. For a typical example, see https://github.com/pytorch/vision/pull/3907/files:
- Take the test method out of the `Tester(unittest.TestCase)` class and just declare it as a function.
- Replace `@unittest.skipIf` with `@pytest.mark.skipif(cond, reason=...)`.
- Remove any use of `self.assertXYZ`:
  - Typically `assertEqual(a, b)` can be replaced by `assert a == b` when a and b are pure Python objects (scalars, tuples, lists), and otherwise we can rely on `assert_equal`, which is already used in the file.
  - `self.assertRaises` should be replaced with the `pytest.raises(Exp, match=...)` context manager, as done in https://github.com/pytorch/vision/pull/3907/files. Same for warnings, with `pytest.warns`.
  - `self.assertTrue` should be replaced with a plain `assert`.
- When a function uses for loops to test multiple parameter values, use `pytest.mark.parametrize` instead, as done e.g. in https://github.com/pytorch/vision/pull/3907/files.
- It may make sense to keep related tests within a single class. Not all groups need a dedicated class though; it's a case-by-case decision.
- Important: a lot of these tests rely on `self.device` because they need to be run on both CPU and GPU. For these, use `cpu_and_gpu()` from `common_utils` instead, e.g. (from vision/test/test_functional_tensor.py, lines 845 to 846 at f7b4cb0):

  ```python
  @pytest.mark.parametrize('device', cpu_and_gpu())
  def test_resize_asserts(device):
  ```

  and you can then just replace `self.device` by `device` in the test.
- The tests that only need a CPU should use the `cpu_only` decorator from `common_utils`, and the tests that need a CUDA device should use the `needs_cuda` decorator (unless they already use `cpu_and_gpu()`).
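To make these steps concrete, here is a minimal before/after sketch. The test body is illustrative and not copied from test_transforms_tensor.py; it just exercises the pieces described above (`cpu_and_gpu`, plain `assert`, `pytest.raises`):

```python
import pytest
import torch
import torchvision.transforms.functional as F

from common_utils import cpu_and_gpu

# Before (unittest style):
#
#     class Tester(unittest.TestCase):
#         def test_flip(self):
#             tensor = torch.rand(3, 16, 16, device=self.device)
#             self.assertTrue(torch.equal(F.hflip(F.hflip(tensor)), tensor))
#             with self.assertRaises(TypeError):
#                 F.hflip("not an image")


# After (pytest style): a plain function, parametrized over the device.
@pytest.mark.parametrize('device', cpu_and_gpu())
def test_flip(device):
    tensor = torch.rand(3, 16, 16, device=device)
    # plain assert instead of self.assertTrue / self.assertEqual
    assert torch.equal(F.hflip(F.hflip(tensor)), tensor)
    # pytest.raises instead of self.assertRaises (same idea for pytest.warns)
    with pytest.raises(TypeError, match="PIL Image"):
        F.hflip("not an image")
```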
- group A (Porting to pytest #3996)
  These tests could be bundled into a single `test_random()` function, parametrized over `func`, `method`, `fn_kwargs` and `match_kwargs` (a sketch is given after the group list):
  - `test_random_horizontal_flip`
  - `test_random_vertical_flip`
  - `test_random_invert`
  - `test_random_posterize`
  - `test_random_solarize`
  - `test_random_adjust_sharpness`
  - `test_random_autocontrast`
  - `test_random_equalize`
- group B ([pytest port] color_jitter, pad, crop, center_crop in test_transforms_tensor #4008)
  - `test_color_jitter` -- make 5 new test functions and parametrize 4 of them (over brightness, contrast, saturation, hue)
  - `test_pad` -- parametrize over m and mul (a sketch is given after the group list)
  - `test_crop` -- can be split and parametrized over different things, or not
  - `test_center_crop` -- same
- group C ([pytest port] test{crop_five, crop_ten, resize, resized_crop} in test_transforms_tensor #4010)
  - `test_five_crop` and `test_ten_crop` -- these two can probably be merged into `_test_op_list_output` (which should be renamed to e.g. `test_x_crop`) by parametrizing over func, method, out_length and fn_kwargs (a sketch is given after the group list)
  - `test_resize` -- parametrize over all for-loop variables
  - `test_resized_crop` -- same
- group D (Port random affine, rotate, perspective and to_grayscale to pytest #4000)
  - `test_random_affine` -- split this into different parametrized functions (one for shear, one for scale, etc.; a sketch is given after the group list)
  - `test_random_rotate` -- parametrize over all loop variables
  - `test_random_perspective` -- parametrize over all loop variables
  - `test_to_grayscale` -- parametrize over meth_kwargs
- group E (Port normalize, linear_transformation, compose, random_apply, gaussian_blur to pytest #4023)
  - `test_normalize`
  - `test_linear_transformation`
  - `test_compose`
  - `test_random_apply`
  - `test_gaussian_blur` -- parametrize over meth_kwargs
- group F (Porting to pytest #3996)
  - `test_random_erasing` -- maybe split this one into different functions, and parametrize over test_configs
  - `test_convert_image_dtype` -- parametrize over all loop variables, and convert the `continue` and the assertRaises into a `pytest.xfail` (a sketch is given after the group list)
  - `test_autoaugment` -- parametrize over policy and fill