[CodeStyle][F811] fix some test cases shadowed by the same name #48745
Changes from all commits
82a12c2
74eaa9f
06454df
ddd707a
5c1c8c3
767f01f
265596e
927c03a
8bfd20c
354d6c2
9f88132
ba7dc7e
6853bad
eb981d8
```diff
@@ -407,36 +407,6 @@ def test_grad(self):
             self.func(p)
 
 
-class TestAbsDoubleGradCheck(unittest.TestCase):
```
Line 148 has an identical unit test. The two were not written as different test cases; they were added by two separate PRs, so only one needs to be kept. Here we keep the newer one at line 148.
```diff
-    @prog_scope()
-    def func(self, place):
-        # the shape of input variable should be clearly specified, not inlcude -1.
-        shape = [2, 3, 7, 9]
-        eps = 1e-6
-        dtype = np.float64
-
-        x = layers.data('x', shape, False, dtype)
-        x.persistable = True
-        y = paddle.abs(x)
-        x_arr = np.random.uniform(-1, 1, shape).astype(dtype)
-        # Because we set delta = 0.005 in calculating numeric gradient,
-        # if x is too small, the numeric gradient is inaccurate.
-        # we should avoid this
-        x_arr[np.abs(x_arr) < 0.005] = 0.02
-
-        gradient_checker.double_grad_check(
-            [x], y, x_init=x_arr, place=place, eps=eps
-        )
-
-    def test_grad(self):
-        paddle.enable_static()
-        places = [fluid.CPUPlace()]
-        if core.is_compiled_with_cuda():
-            places.append(fluid.CUDAPlace(0))
-        for p in places:
-            self.func(p)
-
-
 class TestLogDoubleGradCheck(unittest.TestCase):
     def log_wrapper(self, x):
         return paddle.log(x[0])
```
```diff
@@ -584,7 +584,7 @@ def test_lstm(self):
 @unittest.skipIf(
     not core.is_compiled_with_cuda(), "core is not compiled with CUDA"
 )
-class TestCUDNNlstmAPI(unittest.TestCase):
+class TestCUDNNlstmAPI(unittest.TestCase):  # noqa: F811
```
This and the test at line 545 above are aimed at …
```diff
     def test_lstm(self):
         seq_len = 20
         batch_size = 5
```
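When a redefinition is intentional, `# noqa: F811` (as added in the hunk above) tells flake8 to accept it. A hypothetical pattern where such a redefinition is deliberate, e.g. a platform-specific override (class names and the condition are illustrative, not taken from this PR):

```python
import sys
import unittest

class TestSep(unittest.TestCase):
    def test_sep(self):
        self.assertEqual(len("/"), 1)

if sys.platform == "win32":
    class TestSep(unittest.TestCase):  # noqa: F811 - deliberate platform override
        def test_sep(self):
            self.assertEqual(len("\\"), 1)

# Whichever branch wins, exactly one TestSep is collected and run.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestSep).run(result)
print(result.testsRun, result.wasSuccessful())
```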
```diff
@@ -317,9 +317,9 @@ def setUp(self):
         self.lower = 0.1
         self.upper = 0.3
         self.is_test = True
-        self.init_prams()
+        self.init_params()
 
-    def init_prams(self):
+    def init_params(self):
         self.dtype = "float64"
         self.x_shape = [2, 3, 4, 5]
```
```diff
@@ -343,22 +343,13 @@ def test_check_grad(self):
         self.check_grad(['X'], 'Out')
 
 
-class RReluTrainingTest(OpTest):
+class RReluTrainingTest(RReluTest):
     def setUp(self):
         self.op_type = "rrelu"
         self.lower = 0.3
-        self.upper = 0.3000009
+        self.upper = 0.300000009
         self.is_test = False
-        self.init_prams()
```
This is an exact duplicate of the class below, so only one is kept. Also, since it merely inherits … After the change, because …

Could @thunder95 please review this unit test? In case I have misunderstood, or the identical one deleted here is actually a different test case?

@SigureMo Your understanding is correct. At the time it was hard to control the effect of randomness, so two very close bounds were used for the test.

LGTM @SigureMo
```diff
-class RReluTrainingTest(OpTest):
-    def setUp(self):
-        self.op_type = "rrelu"
-        self.lower = 0.3
-        self.upper = 0.3000009
-        self.is_test = False
-        self.init_prams()
+        self.init_params()
 
 
 if __name__ == "__main__":
```
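To illustrate the reviewer's point about the near-equal bounds: in RRelu's training mode the negative-side slope is drawn uniformly from `[lower, upper]`, so making the two bounds almost equal collapses the random slope to a fixed value and makes the output checkable. A NumPy sketch of the idea (not Paddle's actual `rrelu` implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def rrelu_train(x, lower, upper):
    # training mode: the negative-side slope is sampled per element
    alpha = rng.uniform(lower, upper, size=x.shape)
    return np.where(x >= 0.0, x, alpha * x)

x = np.array([-2.0, -0.5, 1.0])
out = rrelu_train(x, lower=0.3, upper=0.300000009)
# With lower ~= upper the sampled slope is ~0.3 everywhere, so the
# otherwise-random output can be compared against a fixed expectation.
print(np.allclose(out, np.where(x >= 0.0, x, 0.3 * x), atol=1e-6))  # True
```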
The unit test at line 1522 below is identical to this one (both the code and the test cases). Judging from its position, this one is meant to test `fftn`, so it is renamed to `TestFftnException`. Also, since the difference between `fftn` and `rfftn` is that `fftn` supports both real and complex inputs while `rfftn` supports only real ones, the first exception test case in the `rfftn` tests (`TypeError`) would never be raised for `fftn`, so it is removed.
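The distinction described above can be sketched as follows; `rfftn_like` is a hypothetical stand-in for the real-input check the comment attributes to `paddle.fft.rfftn` (not Paddle's actual code), while NumPy's `fftn` stands in for the complex-capable transform:

```python
import numpy as np

def rfftn_like(x):
    # hypothetical guard mirroring the real-input contract the review
    # comment describes for rfftn: complex input is rejected up front
    x = np.asarray(x)
    if np.iscomplexobj(x):
        raise TypeError("rfftn expects a real-valued input")
    return np.fft.fftn(x)

real_in = np.ones((2, 3))
complex_in = real_in.astype(np.complex128)

_ = np.fft.fftn(complex_in)  # fine: fftn supports real and complex alike
try:
    rfftn_like(complex_in)
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```

This is why the `TypeError` test case makes sense for the `rfftn` suite but has no counterpart to trigger in the `fftn` one.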