
Improved Asin and ACos #16484

Merged: 4 commits merged into dotnet:master on Mar 15, 2017

Conversation

@dcwuser (Contributor) commented Feb 26, 2017

Previous implementations of Asin and Acos were based on log(i z + sqrt(1 - z^2)), which is subject to overflow and cancellation in large fractions of the space of representable z. The new implementation is based on an article by Hull and Tang that correctly handles essentially the entire representable space. This solves bug #15455, which can be closed after this pull.

Fixes #15455
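To make the failure mode concrete, here is a minimal sketch (the cutoff, near sqrt(double.MaxValue) ≈ 1.34E154, is approximate): the naive formula squares z before anything else, so an intermediate overflows long before the true result does.

using System;
using System.Numerics;

class OverflowDemo
{
    static void Main()
    {
        Complex z = new Complex(1e155, 0.0);

        // z * z overflows to (Infinity, 0), so 1 - z^2 is already (-Infinity, 0)
        // before Sqrt and Log ever run, even though Asin(1e155) itself is a
        // modest, perfectly representable complex number.
        Complex oneMinusZSquared = Complex.One - z * z;
        Console.WriteLine(oneMinusZSquared);
    }
}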

@karelz (Member) commented Feb 27, 2017

@dcwuser FYI: if you add the text "Fixes #15455" (or "Closes", etc.) to your description, GitHub will automatically close the issue upon merge.

@mellinoe (Contributor):

Hey @dcwuser, thanks again for the contribution. I will most likely not have time to review this in depth until tomorrow. I suspect it will take me some time to refresh my knowledge, analyze the changes, and convince myself everything is still correct, as with the previous changes.

@tannergooding (Member):

I think it is also worth perf checking this. I had attempted to implement several of the Math functions in C# as a workaround for https://github.com/dotnet/coreclr/issues/9373 and the perf wasn't very good (I'll see if I can find the numbers I had somewhere).

@dcwuser (Contributor, Author) commented Feb 27, 2017

@tannergooding: Agreed, a perf comparison is valuable. Here are timings on my machine for three runs of computing ASin and ACos of 10^6 random complex numbers with real and imaginary parts between 10^-4 and 10^4 from all quadrants, with release builds.

  • old code: 1563 +/- 94 ms
  • new code: 684 +/- 8 ms

So about a factor of two improvement.
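For reference, a harness along the lines described might look like this (the seed, the log-uniform sampling, and the checksum accumulation are illustrative assumptions; the original benchmark was not posted):

using System;
using System.Diagnostics;
using System.Numerics;

static class AsinAcosBenchmark
{
    static void Main()
    {
        var rng = new Random(1);
        var inputs = new Complex[1000000];
        for (int i = 0; i < inputs.Length; i++)
        {
            // Log-uniform magnitudes in [1e-4, 1e4], random signs for all four quadrants.
            double re = Math.Pow(10.0, 8.0 * rng.NextDouble() - 4.0) * (rng.Next(2) == 0 ? 1.0 : -1.0);
            double im = Math.Pow(10.0, 8.0 * rng.NextDouble() - 4.0) * (rng.Next(2) == 0 ? 1.0 : -1.0);
            inputs[i] = new Complex(re, im);
        }

        Stopwatch sw = Stopwatch.StartNew();
        Complex sum = Complex.Zero;
        for (int i = 0; i < inputs.Length; i++)
        {
            sum += Complex.Asin(inputs[i]) + Complex.Acos(inputs[i]);
        }
        sw.Stop();

        // Print the checksum so the JIT cannot eliminate the loop.
        Console.WriteLine(sw.ElapsedMilliseconds + " ms (checksum: " + sum + ")");
    }
}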

@tannergooding (Member):

So comparing this implementation to the Windows CRT implementation I can make the following comments:

  • You need to properly handle NaN and Infinities. Currently we are doing a whole lot of computations for both cases. This line (https://github.com/dotnet/corefx/pull/16484/files) will fail for NaN and PositiveInfinity. We then run at least 8 math calls (calls to some System.Math function) and a number of simple operations (addition, subtraction, multiplication, division, etc.) to compute an output that is the same as the input (and, for back compat, needs to continue returning the same NaN value as was passed in).
  • You are not handling denormal and small arguments, so the results may be off for these cases.

I'm also a bit confused as to how you are getting a factor of two improvement for your perf numbers. The new implementation has significantly more calls, comparisons, and operations.

Previously, the worst case was 3 comparisons, 6 simple operations, and 2 math calls. The best case was 2 comparisons, 2 simple operations, and 1 math call.

In the new implementation, the best case is 6 comparisons, 8 simple operations, and 4 math calls. That doesn't even account for the fact that the new implementation has more expensive simple operations (such as division) and that it makes more expensive math calls (Log is around twice as expensive as Sqrt on a range of machines).

I am going to try running this on a range of machines later tonight and will share the numbers and benchmark code.

@dcwuser (Contributor, Author) commented Feb 28, 2017

@tannergooding:

Regarding perf: You are miscounting operations, because complex operations are significantly more costly than real operations. The expression
i * Log(i * z + Sqrt(1 - z * z))
which appeared in the old code, requires 3 * 6 + 2 * 2 = 22 flops just for the arithmetic (three complex multiplications at 6 flops each, plus two complex additions/subtractions at 2 flops each), and that's before the calls to complex Log and Sqrt, which are also significantly more expensive than calls to their real counterparts in the new code. This is why careful implementations of complex functions are nearly always constructed from real operations on the real and imaginary components, rather than from complex operations. By all means, though, please do run and report your own perf comparison.
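For reference, the 6-flop cost of a complex multiplication falls out of its expansion into real arithmetic (a sketch; Complex's actual operator may differ in detail):

using System.Numerics;

static class ComplexFlops
{
    // (a + bi)(c + di) = (ac - bd) + (ad + bc)i:
    // 4 real multiplications + 2 real additions/subtractions = 6 flops.
    // A complex addition or subtraction is 2 flops, one per component.
    static Complex Multiply(Complex u, Complex w)
    {
        double a = u.Real, b = u.Imaginary;
        double c = w.Real, d = w.Imaginary;
        return new Complex(a * c - b * d, a * d + b * c);
    }
}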

Regarding NaNs and Infinities: These inputs are very unusual; I disagree that it is worthwhile to (even slightly) slow down much more common cases by testing for them. For example, if an upfront test for a particular input slows down the overall algorithm by 1%, then it should only be added if that input occurs in more than 1 case in 100. My guess is that NaN and infinity occur in more like 1 input in 100,000. Even that analysis is a bit over-generous, since once a calculation produces a NaN, it is most likely useless anyway, so it matters much less how long it takes.

What I like to do is write an algorithm totally ignoring NaNs and infinities. If that algorithm then produces correct results for NaNs and infinities, I leave it alone. If it doesn't, I put the required tests in the most obscure and unlikely branch I can get away with. Basically, I think you should never worry about fast processing for NaNs, and almost never for infinities. Of course, if you see a place where a NaN or infinity input gives the wrong output, as opposed to just taking longer than strictly necessary, then by all means let me know.

Regarding small inputs: The new code handles very small real parts, even denormal real parts, just fine. It's true that, for imaginary parts smaller than about 1.0E-160, it effectively sets them to zero by squaring them. For example, it returns 0.0 for Asin(I * 1.0E-200) instead of I * 1.0E-200. But the old code effectively set to zero any imaginary part smaller than about 1.0E-16, by adding it to 1.0; so this is still a vast improvement. Still, I will look into handling the case of very small z.Im.
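Both mechanisms fit in a two-line sketch (thresholds approximate):

using System;

class TinyInputDemo
{
    static void Main()
    {
        double y = 1.0e-200;
        Console.WriteLine(y * y);                 // 0: the square underflows (smallest subnormal is ~4.9e-324)
        Console.WriteLine(1.0 + 1.0e-17 == 1.0);  // True: adding to 1.0 discards anything below ~1.1e-16
    }
}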

@tannergooding (Member):

@dcwuser, you're right, I was comparing the operation counts incorrectly. I am seeing a 37% improvement for the new implementation of cacos.

As for the handling of NaN and Infinity, I agree that the inputs are unusual, but there is existing handling/behavior for these input values and it needs to be preserved for back-compat. That is to say, the bit-value of the NaN passed in needs to be preserved.

The effective handling is currently:

if value.Real is NaN
  result.Real = value.Real
  if value.Imaginary is NaN
    result.Imaginary = value.Imaginary
  else if value.Imaginary is Infinity
    result.Imaginary = -value.Imaginary
  else
    result.Imaginary = value.Real
else if value.Imaginary is NaN
  if value.Real is Infinity
    result.Real = value.Imaginary
    result.Imaginary = PositiveInfinity
  else if value.Real is Zero
    result.Real = PI / 2
    result.Imaginary = value.Imaginary
  else
    result.Real = value.Imaginary
    result.Imaginary = value.Imaginary
else if value.Real is Infinity
  if value.Imaginary is Infinity
    if value.Real is NegativeInfinity
      result.Real = PI * 3 / 4
    else
      result.Real = PI / 4
  else if value.Real is NegativeInfinity
    result.Real = PI
  else
    result.Real = PositiveZero
  if value.Imaginary < Zero
    result.Imaginary = PositiveInfinity
  else
    result.Imaginary = NegativeInfinity
else if value.Imaginary is Infinity
  result.Real = PI / 2
  result.Imaginary = -value.Imaginary
else
  compute proper result

This can all be shoved behind a check of if (IsNaNOrInfinity(value.Real) || IsNaNOrInfinity(value.Imaginary)) where the check is implemented as:

unsafe bool IsNaNOrInfinity(double value) {
    // NaN or an infinity iff the exponent bits are all ones.
    return (*(long*)(&value) & 0x7FFFFFFFFFFFFFFF) >= 0x7FF0000000000000;
}
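For what it's worth, the same classification can be written without an unsafe context (a sketch, not part of this PR): a double is NaN or infinite exactly when its exponent bits are all ones.

using System;

static class DoubleClassification
{
    static bool IsNaNOrInfinity(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        return (bits & 0x7FF0000000000000L) == 0x7FF0000000000000L;
    }
}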

@dcwuser (Contributor, Author) commented Mar 1, 2017

@tannergooding, you bring up a few separate issues; let me try to address each of them in turn.

Back-Compat for General Inputs

If your back-compat requirement is that the new code produce the same outputs for the same inputs in all cases, then you needn't consider this PR any further. The main point of this change, more than perf improvement, is to correct the many cases where the old code gave bad results (e.g. all inputs larger than ~1.0E155 returned overflows and/or NaNs).

Back-Compat for Special Inputs

Perhaps you are willing to accept changes that represent clear mathematical improvements, but want to preserve outputs that are purely conventional for special inputs. I considered the 7 special values NaN, -0, +0, -1, +1, -inf, +inf. Since these can appear in either the real or imaginary part of the input value, that is 7 * 7 = 49 cases. Comparing the new outputs to the old in all 49 cases, I found 16 discrepancies, all of them cases where infinite components previously produced NaNs and now produce results that represent the mathematically accurate limits. I therefore believe these changes are justified under the "clear mathematical improvement" criterion.

For reference, the changes are listed below, in the format "{input} -> {oldOutput} vs {newOutput}":
(-0,-Infinity) -> (NaN, NaN) vs (0,-Infinity)
(-0,Infinity) -> (NaN, NaN) vs (0,Infinity)
(0,-Infinity) -> (NaN, NaN) vs (0,-Infinity)
(0,Infinity) -> (NaN, NaN) vs (0,Infinity)
(-1,-Infinity) -> (NaN, NaN) vs (0,-Infinity)
(-1,Infinity) -> (NaN, NaN) vs (0,Infinity)
(1,-Infinity) -> (NaN, NaN) vs (0,-Infinity)
(1,Infinity) -> (NaN, NaN) vs (0,Infinity)
(-Infinity,-0) -> (NaN, NaN) vs (-1.5707963267949,Infinity)
(-Infinity,0) -> (NaN, NaN) vs (-1.5707963267949,Infinity)
(-Infinity,-1) -> (NaN, NaN) vs (-1.5707963267949,-Infinity)
(-Infinity,1) -> (NaN, NaN) vs (-1.5707963267949,Infinity)
(Infinity,-0) -> (NaN, NaN) vs (1.5707963267949,Infinity)
(Infinity,0) -> (NaN, NaN) vs (1.5707963267949,Infinity)
(Infinity,-1) -> (NaN, NaN) vs (1.5707963267949,-Infinity)
(Infinity,1) -> (NaN, NaN) vs (1.5707963267949,Infinity)
By the way, what I found for the old output is not consistent with your pseudo-code for the current effective handling. I notice your pseudo-code has some mixed infinities being mapped to π/4 and 3π/4; is it possible this got mixed up with the atan2 discussion (issue #9806)? That's the only function I know of where I would expect those sorts of outputs to appear.

NaN Payload Preservation

You also say

the bit-value of the NaN passed in needs to be preserved

which I take to be an additional requirement. The basic idea here is that the 52 stored mantissa bits of a NaN can take on any values (other than all zeros); in the context of a NaN these bits are called the "payload". When a NaN is passed into a function, IEEE 754 says that the NaN that comes out should have the same payload. This works best for functions which (unlike this one) have exactly one double input and exactly one double output. It's the reason you sometimes see elementary functions implemented with a clause that effectively says "if input is NaN, return input".

Let me say, as an aside, that in 20+ years of numerical programming in a dozen languages and multiple dozens of frameworks and libraries, I have never once seen code that attempted to move information around via NaN payloads. The .NET framework does not make it at all easy to get at NaN payloads. We do our best, rightly in my view, to steer users away from using this "feature".

Nevertheless, I created a NaN with a non-default payload and verified that it was preserved in the NaNs that came back from my Asin and ACos code when it appeared as a component of the input. The reason this works without any special handling of NaNs is that, as long as the elementary functions I call preserve payloads, my code will too, and they do.
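A sketch of that kind of check (the payload bits chosen are arbitrary, and whether it prints True depends on the payload behavior of the underlying Math calls, as described above):

using System;
using System.Numerics;

class NaNPayloadCheck
{
    static void Main()
    {
        // Build a quiet NaN with some extra payload bits set (0x1234 is arbitrary).
        long nanBits = BitConverter.DoubleToInt64Bits(double.NaN) | 0x1234;
        double payloadNaN = BitConverter.Int64BitsToDouble(nanBits);

        Complex result = Complex.Asin(new Complex(payloadNaN, 0.0));

        // Compare the 52 stored mantissa bits of the input and output NaNs.
        const long mantissaMask = 0xFFFFFFFFFFFFF;
        long outBits = BitConverter.DoubleToInt64Bits(result.Real);
        Console.WriteLine(double.IsNaN(result.Real) &&
                          (outBits & mantissaMask) == (nanBits & mantissaMask));
    }
}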

@danmoseley added this to the Future milestone on Mar 2, 2017
@mellinoe (Contributor) left a review:

I left some code style comments. I've run some really basic microbenchmarks and the new version is indeed faster. My tests showed that it was significantly faster on .NET Framework (which this code won't affect), but only about 15% faster on .NET Core. Still, it is an improvement, and the main benefit here is better handling of edge cases, anyway.

private static double Log1P(double x)
{
// Compute log(1 + x) without loss of accuracy when x is small.
// Our only use case so far is for positive values, so this isn't coded to handle negative values.
Contributor:

Can we add a Debug.Assert(xp1 >= 0) here?
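For context, the standard construction for such a Log1P uses the identity log(1 + x) = x * log(xp1) / (xp1 - 1), where xp1 = 1 + x is the rounded sum; the rounding error in xp1 largely cancels between numerator and denominator. A sketch along those lines (not necessarily the exact code in this PR):

using System;
using System.Diagnostics;

static class MathSketches
{
    static double Log1P(double x)
    {
        Debug.Assert(x >= 0.0 || double.IsNaN(x));  // only non-negative inputs are used here

        double xp1 = 1.0 + x;
        if (xp1 == 1.0)
        {
            return x;  // x vanished in the rounded sum, so log(1 + x) ~= x to full precision
        }
        // The factor log(xp1) / (xp1 - 1.0) is ~1 for small x and is computed
        // with small relative error, restoring the low bits lost in forming xp1.
        return x * (Math.Log(xp1) / (xp1 - 1.0));
    }
}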

@@ -320,6 +372,117 @@ public static Complex Atan(Complex value)
return (ImaginaryOne / two) * (Log(One - ImaginaryOne * value) - Log(One + ImaginaryOne * value));
}

private static void Asin_Internal (double x, double y, out double b, out double bPrime, out double v) {

// This method for the inverse complex sine (and cosine) is described in
Contributor:

Nit: Can you remove the backslashes in front of the names in here (\alpha, \beta, etc.)?

// Because of the test above, we can be sure that a * a will not overflow.
v = Math.Log(a + Math.Sqrt((a - 1.0) * (a + 1.0)));
}

Contributor:

nit: Remove extra empty lines

}

}

Contributor:

nit: extra empty line


// Invalid values
Contributor:

We should preserve this old behavior in a "Legacy" test case, like we did for the Tan and Tanh test data. It should only run on .NET Framework.

@@ -394,19 +399,15 @@ public static IEnumerable<object[]> ASin_Advanced_TestData()
yield return new object[] { 0.0, double.MaxValue, 0.0, Math.Log(2.0) + Math.Log(double.MaxValue) };
yield return new object[] { double.MaxValue, double.MaxValue, Math.PI / 4.0, Math.Log(2.0 * Math.Sqrt(2.0)) + Math.Log(double.MaxValue) };

// Invalid values
Contributor:

Ditto for this: preserve in a new "Legacy" test case which only runs on .NET Framework.

// For x or y large enough to overflow \alpha^2, we can simplify our formulas and avoid overflow.
if ((x > s_asinOverflowThreshold) || (y > s_asinOverflowThreshold))
{

Contributor:

nit: Extra line

}
else
{

Contributor:

nit: Extra line

// instead. Hull and Tang derive formulas for (\alpha - 1) and \beta' = \tan(u) that do not suffer
// cancelation for these cases.

// For simplicity, we assume all positive inputs and return all positive outputs. The caller should
Contributor:

Can you add Debug assertions for this (that all inputs are positive)?

}
double ratio = small / big;
v = Math.Log(2.0) + Math.Log(big) + 0.5 * Log1P(ratio * ratio);

Contributor:

nit: Line

@mellinoe (Contributor) commented Mar 7, 2017

@tannergooding Do you have any more input regarding this?

@tannergooding (Member):

@mellinoe, just the last input I gave, which is that we should be attempting to match the handling of NaNs and infinities in a manner users will be familiar with.

I don't think we have a formal spec for the handling of these cases (as we do with normal math operations in IEEE 754), but the C Language Spec (and by extension the C++ Language Spec) are probably as close as we can get and they do specify (very explicitly) the correct handling of these values in Annex G: IEC 60559-compatible complex arithmetic (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf).

I think we should be attempting to maintain this compatibility in our implementation (if for no other reason than ECMA-335 indicates that we are IEC 60559 compliant and should handle all floating-point calculations in compliance with the spec).

@@ -23,6 +23,9 @@ public struct Complex : IEquatable<Complex>, IFormattable
// This is the largest x for which (Hypot(x,x) + x) will not overflow. It is used for branching inside Sqrt.
private static readonly double s_sqrtRescaleThreshold = double.MaxValue / (Math.Sqrt(2.0) + 1.0);

// This is the largest x for which 2 x^2 will not overflow. It is used for branching inside Asin and Acos.
private static readonly double s_asinOverflowThreshold = Math.Sqrt(double.MaxValue) / 2.0;
Member:

nit: I think this should be a constant, with a comment indicating the operation used to compute the result.

However, that does not follow the existing convention (@mellinoe, do you have a preference here?)

Contributor:

Let's follow the existing convention.

@@ -246,11 +274,23 @@ public static Complex Sinh(Complex value)

public static Complex Asin(Complex value)
{
if ((value._imaginary == 0 && value._real < 0) || value._imaginary > 0)
double b, bPrime, v;
Asin_Internal(Math.Abs(value.Real), Math.Abs(value.Imaginary), out b, out bPrime, out v);
Member:

Which version of the C# compiler are we on (I can't remember)? If we are on 2.0, then we should probably make use of out var. This function is also a good candidate to be implemented as a local function.

big = x;
}
double ratio = small / big;
v = Math.Log(2.0) + Math.Log(big) + 0.5 * Log1P(ratio * ratio);
Member:

nit: Math.Log(2.0) could be a constant with a comment.

Contributor (Author):

C compilers compute the most basic math functions at compile time. The C# compiler doesn't, but I bet the JIT does it at JIT time, in which case it's only done once per run.

At some point we should create an internal static Constants class containing mathematical constants to be used in algorithms throughout the library.

For this PR, though, I'm putting this in a static readonly field as you requested.

@mellinoe (Contributor) commented Mar 7, 2017

I wrote a small test app which compared the results of the new and old versions of Acos with the expected values described in Annex G of the linked document (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf). The current version of Complex is not very compatible with the edge cases described there. The proposed version here is better in some of the edge cases, and the same in others. EXP is the expected value defined in that standard:

Acos( 0, 0 )
  OLD:   (1.5707963267949, 0)
  NEW:   (1.5707963267949, 0)
  EXP:   (1.5707963267949, 0)
Acos( 0, NaN )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (1.5707963267949, NaN)
Acos( 10, ∞ )
  OLD:   (NaN, NaN)
  NEW:   (1.5707963267949, -∞)
  EXP:   (1.5707963267949, -∞)
Acos( 10, NaN )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (NaN, NaN)
Acos( -∞, 10 )
  OLD:   (NaN, NaN)
  NEW:   (3.14159265358979, -∞)
  EXP:   (3.14159265358979, -∞)
Acos( ∞, 10 )
  OLD:   (NaN, NaN)
  NEW:   (0, -∞)
  EXP:   (0, -∞)
Acos( -∞, ∞ )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (2.35619449019234, -∞)
Acos( ∞, ∞ )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (0.785398163397448, -∞)
Acos( ∞, NaN )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (NaN, ∞)
Acos( NaN, 10 )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (NaN, NaN)
Acos( NaN, ∞ )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (NaN, -∞)
Acos( NaN, NaN )
  OLD:   (NaN, NaN)
  NEW:   (NaN, NaN)
  EXP:   (NaN, NaN)

https://gist.github.com/mellinoe/86dafbbcdf4ba6a5e8bd46d727adbcb4

@dcwuser (Contributor, Author) commented Mar 7, 2017

@mellinoe, I will make the requested changes. I also have a simple fix for the |y| < 1.0E-160 case, which caused very tiny imaginary parts to be ignored and which @tannergooding pointed out in his first comment. Look for an updated PR in the next couple of days.

@mellinoe (Contributor) left a review:

Latest version looks good to me, but there are a couple of minor style issues.

@@ -410,13 +440,40 @@ public static IEnumerable<object[]> ASin_Advanced_TestData()

[Theory, MemberData("ASin_Advanced_TestData")]
[SkipOnTargetFramework(TargetFrameworkMonikers.NetFramework)]
public static void ASin_Advanced(double real, double imaginary, double expectedReal, double expectedImaginary)
public static void ASin_Advanced (double real, double imaginary, double expectedReal, double expectedImaginary) {
Contributor:

Brace on next line

Member:

There also shouldn't be a space between the method name and the opening parenthesis (we don't follow that convention elsewhere in the document).

var complex = new Complex(real, imaginary);
Complex result = Complex.Asin(complex);
VerifyRealImaginaryProperties(result, expectedReal, expectedImaginary);
}


Contributor:

nit: extra line

Contributor:

There are still a couple of extra lines here; please remove them.

yield return new object[] { double.MinValue, 0, double.NaN, double.NaN };

// Invalid values
foreach (double invalidReal in s_invalidDoubleValues) {
Contributor:

braces on their own lines in this section, please

Contributor (Author):

I'm really sorry that you guys have to find and call out this stuff. VS's formatting defaults are set for other projects that I work on, so on my side I have to remember to go and adjust the formatting of every line when I work on your project, and sometimes I miss some. I don't know of any easy way to change VS's formatting defaults on a per-project basis; do you?

Member:

VS2017 now has support for specifying much of this via the editorconfig file (powered by Roslyn for the C#/VB code style options). It just isn't used in many places yet.

Member:

The documentation covering the options is also incomplete (we only cover a small subset of them currently here: https://docs.microsoft.com/en-us/visualstudio/ide/editorconfig-code-style-settings-reference).

However, the following Roslyn PRs do give a decent summary of (what I believe to be) all the currently available options:

Member:

I did open an issue (https://github.com/dotnet/corefx/issues/16972) to track adding the appropriate style options to our editorconfig file (and, for CoreCLR, https://github.com/dotnet/coreclr/issues/10089).


[Theory, MemberData("ASin_Legacy_TestData")]
[SkipOnTargetFramework(~TargetFrameworkMonikers.NetFramework)]
public static void ASin_Legacy (double real, double imaginary, double expectedReal, double expectedImaginary) {
Contributor:

brace on its own line

@mellinoe (Contributor) commented Mar 9, 2017

@tannergooding Do the new "extremely small value" test cases cover your concerns?

// Our only use case so far is for positive values, so this isn't coded to handle negative values.
Debug.Assert(!(x < 0.0));
Member:

nit: Debug.Assert(x >= 0.0) is easier to read

Contributor (Author):

Agreed. There is a reason it's written like this, but you won't like it: it's so that NaNs won't violate the assert. One way to handle this is to add special NaN handling before we get to the assert, but I really don't want to add release code just to make debug asserts prettier. So for now I am changing these to Debug.Assert((x >= 0.0) || Double.IsNaN(x)).
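The comparison semantics behind this, in a two-line sketch:

using System;

class NaNComparisonDemo
{
    static void Main()
    {
        double nan = double.NaN;
        Console.WriteLine(nan >= 0.0);    // False: every ordered comparison involving NaN is false
        Console.WriteLine(!(nan < 0.0));  // True: so the negated form lets NaN pass the assert
    }
}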

Member:

Ah, yes (it is so easy to let NaN slip your mind when reading code). In either case, I definitely think explicitly calling out double.IsNaN is useful here and will help prevent future refactorings from breaking things.

@@ -26,6 +27,9 @@ public struct Complex : IEquatable<Complex>, IFormattable
// This is the largest x for which 2 x^2 will not overflow. It is used for branching inside Asin and Acos.
private static readonly double s_asinOverflowThreshold = Math.Sqrt(double.MaxValue) / 2.0;

// This constant is used inside Asin and Acos.
private static readonly double s_log2 = Math.Log(2.0);
Member:

nit: This is an initonly field, not a constant.


// For x or y large enough to overflow \alpha^2, we can simplify our formulas and avoid overflow.
Debug.Assert(!(x < 0.0));
Member:

nit: Same as above, Debug.Assert(x >= 0.0) is easier to read


// For x or y large enough to overflow \alpha^2, we can simplify our formulas and avoid overflow.
Debug.Assert(!(x < 0.0));
Debug.Assert(!(y < 0.0));
Member:

nit: Same as above, Debug.Assert(y >= 0.0) is easier to read

@tannergooding (Member):

Yes, the new tests cover subnormal values, so that addresses my concern 😄

@danmoseley (Member):

@mellinoe, good to merge now?

}
}


Contributor:

nit: Remove extra line

@mellinoe (Contributor):

The latest version looks good to me, minus one or two remaining style problems. However, it turns out we haven't been running the System.Runtime.Numerics tests outside of Windows for some time. Until we turn that back on, we need to hold off on merging this, lest we introduce more regressions from lack of test coverage.

https://github.com/dotnet/corefx/issues/17047

@mellinoe (Contributor):

We've re-enabled these tests outside of Windows. @dcwuser could you rebase this on top of master and then push again? That will trigger the tests to run again, on all platforms.

@dcwuser (Contributor, Author) commented Mar 14, 2017

@mellinoe: Okay, clearly my attempt to rebase off master as per your request screwed up majorly -- this PR is now trying to change 1307 files instead of 2. I'm going to try to figure out how to get back.

@dcwuser (Contributor, Author) commented Mar 14, 2017

@mellinoe: It looks like this did bring in your test-enabling changes, so maybe this is what you want. I hope you have an easy way to just merge the two files with my changes. I will await further instructions before messing any more with the PR.

@mellinoe (Contributor):

I would suggest moving your branch back to before this giant merge, and then running git rebase --onto origin/master (or whatever your github.com/dotnet/corefx remote is set to).

@dcwuser (Contributor, Author) commented Mar 14, 2017

This PR patch attempt is not going right...

@tannergooding (Member) commented Mar 14, 2017

@dcwuser, it looks like the last 'good' commit was 3b9a4c8a2fd1b4bd81a674faf1908991cf7f8b0d.

This comes from going to https://ci.dot.net/job/dotnet_corefx/job/master/job/windows_nt_debug_prtest/ and searching the build history for 16484, which gives four jobs (4640, 4616, 4544, and 4487). The last job that didn't contain an inordinate number of changes was 4544.

You could also confirm this by running 'git reflog' and viewing the commit from before your rebase command.

After you have reset back to that commit (git reset --hard <commit>), you can run git rebase upstream/master (assuming 'upstream' is set to https://github.com/dotnet/corefx; you can validate which remote to use by running git remote -v). That should take just your changes and replay them on top of upstream/master (the result should still just be your four commits being merged).

@tannergooding (Member):

You may want to run git fetch --all before running your rebase command, but after resetting to the correct state.

@mellinoe (Contributor):

I was about to post the same thing that @tannergooding did. The only merge conflicts you should encounter will be related to this PR: #17102, but those should be very easy to resolve.

@dcwuser (Contributor, Author) commented Mar 14, 2017

Okay, I have followed your strategy. (I had been trying to avoid reset/hard/force commands, but if using them won't screw up the PR, that's great.) Things look good in my local branch, but git tells me that (as should be expected) my local and origin (not upstream) have diverged. I assume my strategy here is just to force my origin to match my local using git push --force, but before I screw up again I wanted to run that by you.

@mellinoe (Contributor) commented Mar 14, 2017

@dcwuser Yeah, it's probably telling you that because you have already pushed the "weirdly merged" version to GitHub here, which is what we see in the PR right now. To override it, you'll need to force-push.

@dcwuser (Contributor, Author) commented Mar 15, 2017

@mellinoe: I believe that last push put the PR in the state you wanted.

@mellinoe (Contributor):

Thanks again for all of the work you've put into these improvements, @dcwuser. Merging now.

@mellinoe merged commit 42f0b5f into dotnet:master on Mar 15, 2017
@karelz modified the milestones: Future → 2.0.0 on Mar 31, 2017