[CodeStyle][F811] fix some test cases shadowed by the same name #48745

Merged Dec 8, 2022 · 14 commits
6 changes: 0 additions & 6 deletions .flake8
@@ -37,9 +37,3 @@ per-file-ignores =
     .cmake-format.py: F821
     python/paddle/fluid/tests/unittests/dygraph_to_static/test_loop.py: F821
     python/paddle/fluid/tests/unittests/dygraph_to_static/test_closure_analysis.py: F821
-    # These files will be fixed in the future
-    python/paddle/fluid/tests/unittests/fft/test_fft_with_static_graph.py: F811
-    python/paddle/fluid/tests/unittests/test_activation_nn_grad.py: F811
-    python/paddle/fluid/tests/unittests/test_lstm_cudnn_op.py: F811
-    python/paddle/fluid/tests/unittests/test_matmul_v2_op.py: F811
-    python/paddle/fluid/tests/unittests/test_rrelu_op.py: F811
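For context on what F811 means here: flake8 flags a "redefinition of unused name", which in a unittest suite means an earlier test is shadowed by a later definition of the same name and silently never runs. A minimal hypothetical sketch of the pattern this PR fixes (class and method names invented):

```python
import unittest

class TestGroupShardedLike(unittest.TestCase):
    def test_stage(self):
        # Never executed: the rebinding below replaces this method, so
        # unittest collects only one test named test_stage.
        raise AssertionError("unreachable")

    def test_stage(self):
        # flake8 reports: F811 redefinition of unused 'test_stage' from line 4
        self.assertEqual(2 + 2, 4)

if __name__ == "__main__":
    unittest.main()  # 1 test, OK -- the broken body above never ran
```

Once the real duplicates are renamed or removed, the per-file ignores above are no longer needed, which is why they are deleted.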
@@ -28,7 +28,7 @@ def test_dygraph_group_sharded(self):
        self.run_mnist_2gpu('dygraph_group_sharded_api_eager.py')

    # check stage3 for some functions.
-    def test_dygraph_group_sharded(self):
+    def test_dygraph_group_sharded_stage3(self):
        self.run_mnist_2gpu('dygraph_group_sharded_stage3_eager.py')

python/paddle/fluid/tests/unittests/fft/test_fft_with_static_graph.py

@@ -266,14 +266,6 @@ def test_static_fftn(self):
@parameterize(
    (TEST_CASE_NAME, 'x', 'n', 'axis', 'norm', 'expect_exception'),
    [
-        (
-            'test_x_complex',
-            rand_x(4, complex=True),
-            None,
-            None,
-            'backward',
-            TypeError,
-        ),
        (
            'test_n_nagative',
            rand_x(4),
@@ -295,11 +287,11 @@ def test_static_fftn(self):
        ('test_norm_not_in_enum', rand_x(2), None, -1, 'random', ValueError),
    ],
)
-class TestRfftnException(unittest.TestCase):
-    def test_static_rfftn(self):
+class TestFftnException(unittest.TestCase):
+    def test_static_fftn(self):
Member Author:

The unit test at line 1522 below is identical to this one (both the code and the test cases), so judging by its position this one is presumably meant to test fftn; hence the rename to TestFftnException. Also, fftn differs from rfftn in that fftn accepts both real and complex input while rfftn accepts real input only, so the first exception case from the rfftn suite (TypeError on complex x) would never be raised by fftn, and it is removed.
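A quick sketch of the behavioral difference this relies on (assuming a local Paddle install; the tensor shape and values are arbitrary):

```python
import numpy as np
import paddle

# Build a complex128 tensor from a complex numpy array.
x = paddle.to_tensor(np.random.randn(4, 4) + 1j * np.random.randn(4, 4))

paddle.fft.fftn(x)  # fine: fftn accepts real and complex input

try:
    paddle.fft.rfftn(x)  # rfftn accepts real input only
except TypeError as err:
    print("rfftn rejects complex input:", err)
```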

        with self.assertRaises(self.expect_exception):
            with stgraph(
-                paddle.fft.rfftn,
+                paddle.fft.fftn,
                self.place,
                self.x,
                self.n,
30 changes: 0 additions & 30 deletions python/paddle/fluid/tests/unittests/test_activation_nn_grad.py
@@ -407,36 +407,6 @@ def test_grad(self):
            self.func(p)


-class TestAbsDoubleGradCheck(unittest.TestCase):
Member Author:

There is an identical unit test at line 148. The two were not written as separate test cases; they were added independently by two different PRs, so only one needs to be kept, and the newer copy at line 148 is the one retained.
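The same shadowing happens at module level, as with the duplicated class here: the later class statement rebinds the name, so the earlier class's tests are never collected. A hypothetical reduction (names invented):

```python
import unittest

class TestAbsGradLike(unittest.TestCase):
    def test_added_by_first_pr(self):
        raise AssertionError("never run: the class name is rebound below")

class TestAbsGradLike(unittest.TestCase):  # noqa: F811 -- the redefinition flake8 would flag
    def test_added_by_second_pr(self):
        self.assertEqual(abs(-1.0), 1.0)

if __name__ == "__main__":
    unittest.main()  # collects only test_added_by_second_pr
```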

-    @prog_scope()
-    def func(self, place):
-        # the shape of input variable should be clearly specified, not inlcude -1.
-        shape = [2, 3, 7, 9]
-        eps = 1e-6
-        dtype = np.float64
-
-        x = layers.data('x', shape, False, dtype)
-        x.persistable = True
-        y = paddle.abs(x)
-        x_arr = np.random.uniform(-1, 1, shape).astype(dtype)
-        # Because we set delta = 0.005 in calculating numeric gradient,
-        # if x is too small, the numeric gradient is inaccurate.
-        # we should avoid this
-        x_arr[np.abs(x_arr) < 0.005] = 0.02
-
-        gradient_checker.double_grad_check(
-            [x], y, x_init=x_arr, place=place, eps=eps
-        )
-
-    def test_grad(self):
-        paddle.enable_static()
-        places = [fluid.CPUPlace()]
-        if core.is_compiled_with_cuda():
-            places.append(fluid.CUDAPlace(0))
-        for p in places:
-            self.func(p)
-
-
class TestLogDoubleGradCheck(unittest.TestCase):
    def log_wrapper(self, x):
        return paddle.log(x[0])
2 changes: 1 addition & 1 deletion python/paddle/fluid/tests/unittests/test_lstm_cudnn_op.py
@@ -584,7 +584,7 @@ def test_lstm(self):
@unittest.skipIf(
    not core.is_compiled_with_cuda(), "core is not compiled with CUDA"
)
-class TestCUDNNlstmAPI(unittest.TestCase):
+class TestCUDNNlstmAPI(unittest.TestCase):  # noqa: F811
Member Author:

This class and the one at line 545 above are two distinct test cases, for is_test=True and is_test=False respectively. After renaming them apart, though, the test above turned out to be broken. Since this is an API test (not an OP test) for fluid.layers.lstm, which is slated for removal, the F811 warning is simply suppressed for now; it should get cleaned up together with the rest of fluid later.

    def test_lstm(self):
        seq_len = 20
        batch_size = 5
2 changes: 1 addition & 1 deletion python/paddle/fluid/tests/unittests/test_matmul_v2_op.py
@@ -732,7 +732,7 @@ def func_dygraph_matmul(self):

        paddle.enable_static()

-    def func_dygraph_matmul(self):
+    def func_dygraph_matmul(self):  # noqa: F811
        with _test_eager_guard():
            self.func_dygraph_matmul()
19 changes: 5 additions & 14 deletions python/paddle/fluid/tests/unittests/test_rrelu_op.py
@@ -317,9 +317,9 @@ def setUp(self):
        self.lower = 0.1
        self.upper = 0.3
        self.is_test = True
-        self.init_prams()
+        self.init_params()

-    def init_prams(self):
+    def init_params(self):
        self.dtype = "float64"
        self.x_shape = [2, 3, 4, 5]
@@ -343,22 +343,13 @@ def test_check_grad(self):
        self.check_grad(['X'], 'Out')


-class RReluTrainingTest(OpTest):
+class RReluTrainingTest(RReluTest):
    def setUp(self):
        self.op_type = "rrelu"
        self.lower = 0.3
-        self.upper = 0.3000009
+        self.upper = 0.300000009
        self.is_test = False
-        self.init_prams()
Member Author:

This class is an exact duplicate of the one below, so only one copy is kept. Moreover, because it inherited only OpTest rather than RReluTest, none of its tests were ever executed, so it now inherits from RReluTest instead.

After that change the test could not pass because the gap between 0.3000009 and 0.3 is too large, so two extra zeros were added. (Presumably lower and upper were set to nearly equal values to damp the random behavior of training mode, i.e. is_test=False, but the gap was still slightly too large, so the test failed.)

Member Author:

@thunder95, could you review this unit test? In case I've misunderstood it, or the identical copy deleted here was actually intended as a different test case.

Contributor:

@SigureMo Your understanding is correct. It was hard to control the effect of randomness at the time, so two very close bounds were used for the test.

Contributor:

LGTM @SigureMo
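A hypothetical reduction of the inheritance point discussed above (all names invented): unittest runs only test_* methods, and those are inherited, so a class that subclasses just the bare harness contributes nothing, while subclassing the real test class reruns its checks with the new setUp values.

```python
import unittest

class OpTestStub(unittest.TestCase):  # stand-in for OpTest: defines no test_* methods
    pass

class RReluLikeTest(OpTestStub):
    def setUp(self):
        self.lower, self.upper = 0.1, 0.3

    def test_bounds(self):  # inherited by subclasses, which rerun it
        self.assertLess(self.lower, self.upper)

class TrainingVariant(RReluLikeTest):  # inherits test_bounds -> it runs again
    def setUp(self):
        self.lower, self.upper = 0.3, 0.300000009

class SilentVariant(OpTestStub):  # no test_* methods -> nothing executes
    def setUp(self):
        self.lower, self.upper = 0.3, 0.300000009

if __name__ == "__main__":
    unittest.main()  # collects test_bounds from RReluLikeTest and TrainingVariant only
```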



-class RReluTrainingTest(OpTest):
-    def setUp(self):
-        self.op_type = "rrelu"
-        self.lower = 0.3
-        self.upper = 0.3000009
-        self.is_test = False
-        self.init_prams()
+        self.init_params()


if __name__ == "__main__":
1 change: 0 additions & 1 deletion setup.py
@@ -30,7 +30,6 @@
from setuptools.command.egg_info import egg_info
from setuptools.command.install import install as InstallCommandBase
from setuptools.command.install_lib import install_lib
-from setuptools.dist import Distribution

if sys.version_info < (3, 7):
    raise RuntimeError(