numpy.tanh gives different results on Windows and Linux #9187


Closed
tadeu opened this issue May 30, 2017 · 5 comments

Comments

@tadeu

tadeu commented May 30, 2017

Here is one example: the result of tanh(-0.35898351519709742) is -0.34431837261747228 on Linux and -0.34431837261747222 on Windows. Windows has the more accurate result.
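For context, the two reported results differ only in their last bit. A quick way to measure this (a sketch using only the standard library; `ulps_apart` is a helper name introduced here, not a NumPy function) is to compare the raw bit patterns of the two doubles:

```python
import struct

def ulps_apart(a, b):
    # Reinterpret each 64-bit float's bit pattern as a signed integer; for
    # two floats of the same sign, the integer difference equals the number
    # of representable doubles (ULP steps) between them.
    ia = struct.unpack('<q', struct.pack('<d', a))[0]
    ib = struct.unpack('<q', struct.pack('<d', b))[0]
    return abs(ia - ib)

print(ulps_apart(-0.34431837261747228, -0.34431837261747222))  # → 1
```

So the Linux and Windows results are adjacent doubles: a 1 ULP discrepancy.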

@eric-wieser
Member

eric-wieser commented May 30, 2017

What does np.float64(math.tanh(-0.35898351519709742)) give on your linux machine?
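On CPython, `math.tanh` calls the C library's `tanh` directly, so a side-by-side comparison hints at whether NumPy and the platform libm agree on a given machine (the printed values are platform-dependent, so none are shown here):

```python
import math
import numpy as np

x = -0.35898351519709742
print(np.float64(math.tanh(x)))   # the C library's tanh, via the math module
print(np.tanh(np.float64(x)))     # NumPy's tanh (backend varies by platform)
```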

@njsmith
Member

njsmith commented May 30, 2017

In general, numpy doesn't attempt to provide exactly the same results on different platforms. It's essentially impossible for floating point code. Even two different builds with the same compiler can give slightly different results depending on optimization level. In this case we're using the platform-provided tanh, so it's not even the same code doing the calculation.

One thing we can do if some platform's tanh is particularly inaccurate is to blacklist the platform-provided version in favor of our own code. I can't tell if that's the case here, though – just how far off are the Linux and Windows results from the true answer? Which one is more accurate? What versions of Python, Windows, and Linux are you using?
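One way to answer "how far off from the true answer" is to compute a high-precision reference value with the standard library's `decimal` module (a sketch; the two literals below are the Linux and Windows results reported above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # far more digits than a 64-bit double carries

def tanh_hp(x):
    # tanh(x) = (e^(2x) - 1) / (e^(2x) + 1), evaluated in 50-digit decimal.
    # Decimal(x) converts the float exactly, so no input error is introduced.
    e2x = (2 * Decimal(x)).exp()
    return (e2x - 1) / (e2x + 1)

ref = tanh_hp(-0.35898351519709742)
for label, v in [("linux", -0.34431837261747228),
                 ("windows", -0.34431837261747222)]:
    print(label, abs(Decimal(v) - ref))
```

Whichever printed error is smaller identifies the more accurate platform result.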

@tadeu
Author

tadeu commented May 31, 2017

@eric-wieser

>>> np.float64(math.tanh(-0.35898351519709742))
-0.34431837261747228

@njsmith Ok, got it that it's caused by the platform-provided functions. The results are usually off by one ULP. I got some differences in the np.exp function too, e.g., for np.exp(99.06494938359764).

I'm solving this by using numexpr, which uses MKL functions (VML), and MKL seems to have the same implementation for all platforms.

Just letting you know that it would perhaps be nice if numpy used MKL for these functions too.

@tadeu
Author

tadeu commented May 31, 2017

Just a curiosity: in these examples, for tanh, the Windows version is more accurate, and for exp, the Linux version is more accurate.

@njsmith
Member

njsmith commented Jun 1, 2017

1 ulp differences are definitely expected and ordinary. Using MKL is not a general solution either; e.g., even for trivial floating point calculations like a + b + c you can get different results depending on platforms (Windows uses different rounding rules than Linux/MacOS in some cases) and compiler settings. If you really, really need floating point determinism then it's possible, but it generally requires that every piece of code you use is written with this in mind, and that you carefully control your entire technology stack, including exactly which compilers and target platforms you care about. The only folks I know who routinely do this are game developers who need networked engines to remain in lockstep and otherwise can meet those criteria (ref1, ref2).
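The a + b + c point can be seen without NumPy at all: floating-point addition is not associative, so the evaluation order the compiler or interpreter picks already changes the result.

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # sum left-to-right
right = a + (b + c)  # sum right-to-left
print(left == right)  # → False
```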

Usually the best (and certainly most common) solution is to write your code so that it doesn't care about 1 ulp differences.
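In practice that usually means comparing with a tolerance rather than exact equality; a minimal sketch using the issue's two tanh results:

```python
import math

# A relative tolerance of 1e-12 absorbs a few ULPs of platform variation
# while still catching genuinely different results.
linux = -0.34431837261747228
windows = -0.34431837261747222
print(linux == windows)                             # → False (exact comparison)
print(math.isclose(linux, windows, rel_tol=1e-12))  # → True (tolerant comparison)
```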
