
Slight fix on entropy test #4424

Closed · wants to merge 2 commits
```diff
@@ -94,7 +94,7 @@ def test_entropy_raises_value_error(self):
     })
 ))
 @testing.gpu
-@testing.with_requires('scipy')
+@testing.with_requires('scipy>=1.4.0')
 class TestEntropy(unittest.TestCase):

     def _entropy(self, xp, scp, dtype, shape, use_qk, base, axis, normalize):
```
```diff
@@ -124,11 +124,12 @@ def _entropy(self, xp, scp, dtype, shape, use_qk, base, axis, normalize):
         return res

     @testing.for_all_dtypes(no_complex=True)
-    @testing.numpy_cupy_allclose(rtol={cupy.float16: 1e-3,
-                                       cupy.float32: 1e-6,
+    @testing.numpy_cupy_allclose(rtol={cupy.float32: 1e-6,
                                        'default': 1e-15},
                                  scipy_name='scp')
     def test_entropy(self, xp, scp, dtype):
+        # `entropy` returns float32 value for float16 input so float32
+        # tolerance is used for it.
```
Member commented:
Are the intermediate computations done in float32? If that is the case, this is fine. But if all the steps are done in float16, there will be precision loss even though the output is float32, so the original tolerance should be kept.

Contributor replied:

In SciPy, float16 inputs do not seem to be explicitly cast to float32, so the normalization step that divides by a sum over the array runs in float16. The normalization here should thus be more accurate than in SciPy. The later calls into the scipy.special functions do result in things becoming float32, though (I don't think any functions in scipy.special support half precision).
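The precision loss being discussed can be illustrated without CuPy or SciPy. The sketch below (a hypothetical stdlib-only illustration, assuming only `struct`'s half-precision `'e'` format) rounds each intermediate result to float16 the way an all-float16 normalization would, and measures the drift from the exact ratios:

```python
import struct

def to_f16(x):
    # Round a double to the nearest IEEE-754 half-precision value via
    # struct's 'e' format, mimicking a float16 intermediate result.
    return struct.unpack('e', struct.pack('e', x))[0]

# Normalizing a distribution in half precision: each division result is
# rounded to float16, so the normalized probabilities drift from the
# exact ratios by up to roughly half a float16 ulp.
p = [1.0, 2.0, 3.0, 4.0]
total = sum(p)
exact = [x / total for x in p]
half = [to_f16(to_f16(x) / to_f16(total)) for x in p]
err = max(abs(a - b) for a, b in zip(exact, half))
print(err)  # far above float32 rounding error (~1e-7)
```

An error on this scale explains why a looser rtol (the removed `cupy.float16: 1e-3` entry) is needed whenever the normalization itself runs in half precision.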

```diff
         return self._entropy(xp, scp, dtype, self.shape, self.use_qk,
                              self.base, self.axis, self.normalize)
```
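For reference, the quantity the test compares between CuPy and SciPy can be sketched in pure Python. This is a hedged illustration of the standard entropy / relative-entropy formulas, not SciPy's or CuPy's actual implementation; the `entropy` helper name and its signature are hypothetical:

```python
import math

def entropy(p, q=None, base=None):
    # Normalize p (and q, when given) to sum to 1, then compute
    #   H(p)      = -sum(p_i * log(p_i))        if q is None
    #   D(p || q) =  sum(p_i * log(p_i / q_i))  otherwise
    # optionally rescaled into units of `base` by dividing by log(base).
    total = sum(p)
    p = [x / total for x in p]
    if q is None:
        h = -sum(x * math.log(x) for x in p if x > 0)
    else:
        qtotal = sum(q)
        q = [x / qtotal for x in q]
        h = sum(x * math.log(x / y) for x, y in zip(p, q) if x > 0)
    if base is not None:
        h /= math.log(base)
    return h

# Entropy of a uniform distribution over 4 outcomes, in bits.
print(entropy([1, 1, 1, 1], base=2))
```

Every step here is a sum, a division, or a `log`, which is why the dtype of the intermediates (float16 vs. float32) directly determines how tight the test tolerance can be.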
