Math.Cos() precision #8528

Closed
FrancoisBeaune opened this issue Jul 11, 2017 · 5 comments

Comments

@FrancoisBeaune

The following C# program computes the cosine of 4203708359 (which can be exactly represented in double precision):

class Program
{
    static void Main()
    {
        var x = (double)4203708359;
        var c = System.Math.Cos(x);
        System.Console.WriteLine(c);
    }
}

Regardless of the platform target (Any CPU, x86, x64), in both Debug and Release, this program outputs the following value:

-0.579777545194404

Here is the equivalent C program compiled with Visual Studio 2017:

#include <math.h>
#include <stdio.h>

int main()
{
    double x = (double)4203708359ULL;
    double c = cos(x);
    printf("%.15f\n", c);
    return 0;
}

and its output value (x64, Debug or Release):

-0.579777545198813

Finally, here is the value computed by Mathematica to 50 decimals:

In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993

Questions:

  • Why is the C# version accurate to only 11 decimals?
  • Where is Math.Cos() actually implemented? All I could find so far is its declaration in mscorlib.
@mattwarren
Contributor

Where is Math.Cos() actually implemented? All I could find so far is its declaration in mscorlib.

Ah, the joys of following FCall and Intrinsic method calls/implementations through the CoreCLR source, welcome to the club!!

I think it's wired up via this code to the implementation here. Certainly this comment supports that:

// Sin, Cos, and Tan on AMD64 Windows were previously implemented in vm\amd64\JitHelpers_Fast.asm
// by calling x87 floating point code (fsin, fcos, fptan) because the CRT helpers were too slow. This
// is no longer the case and the CRT call is used on all platforms.

However, this might only apply to AMD64; it's an 'intrinsic', so the JIT might do something different on other platforms.

@mikedn
Contributor

mikedn commented Jul 11, 2017

The following C# program computes the cosine of 4203708359 (which can be exactly represented in double precision):

When running on which runtime? The current .NET Core build produces -0.579777545198813 on x64 and -0.579777545194404 on x86.

and its output value (x64, Debug or Release):

and VC++ 2017's output value on x86 is -0.579777545194404.

It all comes down to whether the generated code is using a "proper" cosine calculation function or the x87 fcos instruction which is not very accurate.
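
One rough way to quantify this on a given runtime is to compare Math.Cos against the Mathematica reference quoted earlier, rounded to double precision; a minimal sketch (class and variable names are illustrative):

using System;

class CosErrorCheck
{
    static void Main()
    {
        // Input from the report; exactly representable as a double.
        double x = 4203708359.0;

        // Mathematica reference value from above, rounded to double precision.
        double reference = -0.57977754519881338;

        double c = Math.Cos(x);
        Console.WriteLine("Math.Cos(x) = " + c.ToString("R"));
        Console.WriteLine("abs. error  = " + Math.Abs(c - reference).ToString("E2"));
    }
}

Running this under different runtimes and platform targets shows directly which cosine path (x87 fcos or CRT cos) is in use.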

@FrancoisBeaune
Author

When running on which runtime?

Sorry I forgot to mention, I'm using .NET Framework 4.6.2.

VC++ 2017's output value on x86 is -0.579777545194404.

Good point, I somehow missed that.

It all comes down to whether the generated code is using a "proper" cosine calculation function or the x87 fcos instruction which is not very accurate.

That was indeed my suspicion.

Alright, I think the case is closed: I've got answers to all my questions. Thanks for the quick replies!

@mikedn
Contributor

mikedn commented Jul 11, 2017

Sorry I forgot to mention, I'm using .NET Framework 4.6.2.

Note that .NET Framework "insists" on using fcos, even on x64 (where x87 instructions are not normally used). .NET Core just calls C's cos function and so it inherits its (good or bad) behavior.
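
For a side-by-side comparison, one rough sketch is to call the C runtime's cos export directly via P/Invoke (this assumes msvcrt.dll is available, i.e. desktop Windows) and print both results:

using System;
using System.Runtime.InteropServices;

class CrtCosComparison
{
    // Call the C runtime's cos export directly (Windows-only assumption: msvcrt.dll).
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern double cos(double x);

    static void Main()
    {
        double x = 4203708359.0;
        Console.WriteLine("Math.Cos: " + Math.Cos(x).ToString("R"));
        Console.WriteLine("CRT cos : " + cos(x).ToString("R"));
    }
}

On .NET Framework x64 the two lines should differ, matching the fcos and CRT values reported above.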

@tannergooding
Member

Sorry I forgot to mention, I'm using .NET Framework 4.6.2.

As @mikedn said, netfx insists on using fcos (which has its own issues). x64 uses it because of a long-since-closed bug in MSVCRT that caused the CRT implementation to be significantly slower. x86 uses it because the legacy JIT didn't require SSE/SSE2 instructions, so they were not necessarily available for use in the CRT functions. The System.Math (and now System.MathF) functions on .NET Core all use the CRT implementations, which are generally faster and more precise than the x87 intrinsics.

In the vast majority of applications, speed is preferred over precision for the math functions, and that is the default the System.Math functions were configured for. Indeed, there have been several bugs tracking the poor performance of System.Math on Linux, which (to my knowledge) only provides a precise mode for these functions (although GCC/Clang have several issues tracking the addition of a fast mode for these intrinsics). One bug that tracks this is https://github.com/dotnet/coreclr/issues/9373, where some functions show as much as a 200% regression.

If you always want or need more accurate results, you will currently need to write your own implementation; one possible approach is sketched below.
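
As an illustration only (not the runtime's algorithm): for inputs whose magnitude fits comfortably within System.Decimal's roughly 28 significant digits, one can perform the argument reduction modulo 2*pi in decimal, where the large-argument cancellation is harmless, and only then call Math.Cos on the small reduced value:

using System;

class ReducedCos
{
    // 2*pi rounded to 29 significant digits (roughly decimal's full precision).
    const decimal TwoPi = 6.2831853071795864769252867666m;

    // Sketch: reduce |x| modulo 2*pi in decimal, then evaluate Math.Cos on the
    // now-small argument. Only valid while (decimal)x does not overflow.
    static double Cos(double x)
    {
        decimal dx = (decimal)Math.Abs(x);      // cos(-x) == cos(x)
        decimal n = decimal.Floor(dx / TwoPi);  // whole periods to subtract
        double r = (double)(dx - n * TwoPi);    // reduced argument in [0, 2*pi)
        return Math.Cos(r);
    }

    static void Main()
    {
        double x = 4203708359.0;
        Console.WriteLine(Math.Cos(x).ToString("R"));  // runtime's result
        Console.WriteLine(Cos(x).ToString("R"));       // reduced-argument result
    }
}

On .NET Framework this should recover a result close to the CRT value for the example above, since fcos is accurate once the argument is already small.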

That being said, it might be worth logging a bug to see whether a JIT configuration option can be added so users can opt in to precise vs. fast mode for the Math intrinsics (the CRT has this, and it seems a reasonable request for managed code as well). However, because of the way FCALLs work, I don't think this type of feature will be easy to implement in any case.

@msftgits msftgits transferred this issue from dotnet/coreclr Jan 31, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Dec 21, 2020