
Absolutely, positively perfect ranges. Never wrong. No way. #23194

Merged · 4 commits merged into master on Aug 22, 2017

Conversation

@timholy (Sponsor, Member) commented Aug 9, 2017

This high-precision arithmetic stuff stinks. But fortunately no one will ever, ever report a bug in this code again. And if they do, someone (probably me) is going to be swimming with the fishes. Got it?

@tkelman (Contributor) commented Aug 10, 2017

@nanosoldier runbenchmarks(ALL, vs=":master")

@nanosoldier (Collaborator) commented:

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@test m == length(r)
# FIXME: these fail some small portion of the time
@test_skip start == first(r)
@test_skip stop == last(r)
Contributor commented on the diff:

guess this was over-ambitious?

Author (timholy) replied:

#20373 (comment)

However, I think (not sure, actually) that we should be able to insist on

Δ > 0 ? last(r) <= stop : last(r) >= stop

But we're not quite there yet. Needs more investigation.
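As a sketch (using a hypothetical helper name, not code from the PR), the invariant timholy proposes could be expressed like this:

```julia
# Hypothetical property check, not from the PR: the computed endpoint of a
# range should never overshoot the requested stop, whatever the sign of the step.
function endpoint_ok(start, Δ, stop)
    r = start:Δ:stop
    isempty(r) || (Δ > 0 ? last(r) <= stop : last(r) >= stop)
end

endpoint_ok(0.1, 0.1, 0.3)    # should hold for an upward range
endpoint_ok(1.0, -0.25, 0.0)  # and for a downward one
```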

@timholy (Author) commented Aug 10, 2017

I'm going to be traveling for a couple of days, so let me say please don't merge yet. I think another formal property we need to insist on is that

r = start:Δ:stop
range(start, Δ, length(r)) === r

and this does not satisfy that. I don't think any of the changes here are wrong, so reviews are welcome, but I think there's a little more fixing/cleanup that needs doing.
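For illustration, the round-trip property can be checked with the keyword form of `range` in current Julia (a sketch; the call quoted above is the positional 0.6-era `range(start, Δ, n)`):

```julia
# Sketch of the round-trip property: rebuilding a range from its own
# first/step/length should reproduce the same values.
r  = 0.1:0.1:0.3
r2 = range(first(r), step=step(r), length=length(r))
collect(r2) == collect(r)   # the reconstructed range hits the same points
```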

@timholy (Author) commented Aug 16, 2017

OK, this turned into a bigger deal than I wanted, but I think it's a pretty big improvement. Perhaps the most important part of this change, aside from the two bugs I fixed, was adding a whole bunch of carefully-crafted precision tests for our TwicePrecision routines. This led down a long rabbit hole, but I made a number of improvements and learned some interesting things along the way.

The most important of these is that the well-known Dekker TwicePrecision (aka, "double-double") algorithms fail miserably when one of the intermediate computations produces a subnormal. This led to a number of changes:

  • a compensated division routine that ensures that div12 won't produce intermediate subnormals
  • the realization that TwicePrecision basically makes no sense for Float16: TwicePrecision{Float16} arithmetic routines produce subnormals at the drop of a hat (there just aren't enough exponent bits in Float16), and so it's far, far better to just use Float32 or Float64 if you want something that produces higher-precision intermediate results.
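A quick illustration of the Float16 point (hypothetical numbers, not from the PR's test suite):

```julia
# The low word of a hi/lo (double-double) pair sits roughly 11 bits below the
# high word, but Float16 normals already bottom out at 2^-14 ≈ 6.1e-5, so the
# correction term goes subnormal almost immediately and the extra precision
# silently degrades.
hi = Float16(0.001)
lo = eps(hi) / 2        # typical magnitude of a correction term
issubnormal(lo)         # true: the low word is already subnormal
```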

As a consequence, this switches our StepRangeLen types to (by default) only use TwicePrecision for Float64 ranges; for Float32 and Float16, the ref and step fields will be Float64.

I've punted on making any of the changes discussed starting at #20373 (comment). True perfection will have to wait for a future pull request.

I removed the "backport" label, since this is by now far too intrusive of a change to backport; I can create a separate patch for fixing #23178 on 0.6 after this merges.

@timholy (Author) commented Aug 16, 2017

Travis and AppVeyor failures seem unrelated.

@StefanKarpinski (Sponsor, Member) commented Aug 16, 2017

Funny that, I just cooked up a compensated multiplication procedure the other night when I was thinking about this during a bout of insomnia:

minabs(x, y) = ifelse(abs(x) ≤ abs(y), x, y)
maxabs(x, y) = ifelse(abs(x) > abs(y), x, y)

function compensated_addition(x::T, y::T) where T<:AbstractFloat
    z = x + y
    z, minabs(x, y) - (z - maxabs(x, y))
end

function compensated_multiplication(x::Float64, y::Float64)
    m, p = Base.decompose(x)
    n, q = Base.decompose(y)
    x*y, (UInt64(m)*UInt64(n) & 0x000fffffffffffff)*2.0^(p + q)
end

Chaining compensated values is a real pain, it's almost as if one wants a type for that 😸

@timholy (Author) commented Aug 16, 2017

Your compensated_addition is exactly add12 in this PR (and what's mistakenly called add2 in current master).

I'm not sure I quite understand what's happening in the multiplication routine, since I think the second isn't a correction of the first. If that's your intention, I think it's well-accepted that at this point the best way to achieve that with floating-point is to use the fact that fma has sufficient precision and is incredibly fast (on supported hardware), see this PR. (The ifelse is necessary only to get annoying things like Inf, NaN, and signed zeros correct.)
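The fma-based exact product alluded to here is standard; a minimal sketch (hypothetical helper name, not the PR's code):

```julia
# Exact two-product via fma: hi is the rounded product and lo the exact
# rounding error, so hi + lo equals x*y exactly (barring intermediate
# over/underflow).  fma(x, y, -hi) computes x*y - hi with a single rounding.
function two_prod(x::Float64, y::Float64)
    hi = x * y
    lo = fma(x, y, -hi)
    return hi, lo
end

two_prod(0.1, 0.3)   # reproduces the correction shown in the REPL transcript below
```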

@StefanKarpinski replied:

> I'm not sure I quite understand what's happening in the multiplication routine, since I think the second isn't a correction of the first.

It does work though:

julia> x, y = 0.1, 0.3
(0.1, 0.3)

julia> compensated_multiplication(x, y)
(0.03, 1.6653345369377347e-18)

julia> big(x)*big(y) - x*y
1.665334536937734749005689281744683170958705837282325806780747257107577752321959e-18

How it works: x == m*2.0^p and y == n*2.0^q so x*y == (m*n)*2.0^(p + q). Floating-point multiplication gives you the first 52 bits of m*n, you just need the last 52 bits, which you can get by doing integer multiplication, which gives the right answer mod 2^64, and you can then use & 0x000fffffffffffff to do a fast mod 2^52 operation and get those trailing bits. Then just multiply by the appropriate power of two and you've got floating-point correction to the product.

I guess using fma would work too, but I wasn't familiar with that so this is what I came up with :)

@simonbyrne (Contributor) left a review comment:

I'm still a bit confused why we need Veltkamp splitting. What was wrong with the old truncbits method?

Arithmetic", §4.4.1.
"""
function splitprec(x::T) where {T<:AbstractFloat}
xs, xe = frexp(x)
Contributor commented on the diff:

frexp/ldexp is probably not the most efficient way to do this. It might be simpler to just check if its magnitude is above some threshold, and scale it down if necessary

base/float.jl Outdated
@@ -825,28 +829,7 @@ fpinttype(::Type{Float64}) = UInt64
fpinttype(::Type{Float32}) = UInt32
Contributor commented:

I'm still not a huge fan of this name: I keep reading this as "f pint type". It seems like there should be a more general name for this (i.e. give me the unsigned integer type with the same size as this type).

We could do typeof(reinterpret(Unsigned, zero(F))) but that seems a bit silly.

Author (timholy) replied:

I changed it to uinttype.

@@ -1,5 +1,7 @@
# This file is a part of Julia. License is MIT: https://julialang.org/license

const IEEEFloat = Union{Float16, Float32, Float64}
Contributor commented:

I realise that it is beyond the scope of this PR, but it would be nice if this were an abstract type so I could reuse all this for Float128 once #22649 is fixed.

Author (timholy) replied:

I just moved this from Base.Math. I'll leave that kind of change to you 😄. Should be pretty simple, though.

@simonbyrne (Contributor) commented:

@StefanKarpinski I think that only works if your calculation rounds down (or if you've changed the rounding mode). e.g. try

julia> h,l = compensated_multiplication(0.11,0.3)
(0.033, 8.604228440844963e-19)

julia> big(0.11)*big(0.3) - h
-2.609024107869117876158510242052582464154129416271767419321925274289242224767804e-18

@StefanKarpinski replied:

Yeah, might need some sign fiddling in some cases.

@timholy (Author) commented Aug 17, 2017

Clever, thanks for explaining.

@timholy (Author) commented Aug 17, 2017

> I'm still a bit confused why we need Veltkamp splitting. What was wrong with the old truncbits method?

We don't really; I think it was left over from an older version that still used some splitting. (With the old truncbits method, the products hi1*hi2, hi1*lo2, and lo1*hi2 were all exact, but lo1*lo2 was not, so Veltkamp splitting is better.) I just deleted the method, though. The only actual use of splitprec was the convert-an-integer-exactly variant.

Thanks for the review!
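For reference, the Veltkamp splitting under discussion (Dekker, 1971) can be sketched as follows; the PR's `splitprec` additionally goes through `frexp`, apparently to sidestep scaling problems in the intermediate:

```julia
# Veltkamp splitting: break a Float64 into hi + lo where each half carries at
# most ~26 significant bits, so products of halves (hi*hi, hi*lo, lo*lo) are
# exact in Float64.
const SPLITTER = 1.34217729e8   # 2^27 + 1

function veltkamp_split(x::Float64)
    c  = SPLITTER * x
    hi = c - (c - x)   # x rounded to ~26 significant bits
    lo = x - hi        # exact remainder
    return hi, lo
end
```

The split is exact (`hi + lo == x`), and because `hi` fits in 26 bits, `hi*hi` fits in 52 bits and incurs no rounding at all.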

base/float.jl Outdated
@@ -711,12 +713,12 @@ end

Test whether a floating point number is subnormal.
"""
function issubnormal end
function issubnormal(x::T) where {T<:IEEEFloat}
y = reinterpret(uinttype(T), x)
Contributor commented on the diff:

can use reinterpret(Unsigned, x)

@timholy (Author) commented Aug 18, 2017

The last commit revealed a strange bug on 32-bit in -5.0:1.0:5.0 === -5.0:1.0:5.0. Not sure I understand why deserialization/hashing is being invoked. @vtjnash? (ref. d584142 which added a Int32 type annotation in deserialize_type)

@timholy (Author) commented Aug 20, 2017

This seems good to go. I'll merge soon barring concerns.

@timholy timholy merged commit e6c8b6b into master Aug 22, 2017
@timholy timholy deleted the teh/fix_23178 branch August 22, 2017 11:59
@test cmp_sn2(Tw(xw+yw), astuple(x+y)..., slopbits)
@test cmp_sn2(Tw(xw-yw), astuple(x-y)..., slopbits)
@test cmp_sn2(Tw(xw*yw), astuple(x*y)..., slopbits)
@test cmp_sn2(Tw(xw/yw), astuple(x/y)..., slopbits)
A Member commented on the diff:

Never, never any bugs, right? ;) Sorry to be the bearer of bad news, but I imagine you'll be interested in #23497.

@timholy (Author) replied Aug 29, 2017:

By definition, if you think you've seen a bug in this code, you're just imagining things. Really. Honest. No bugs.

Ahhh! (sploosh)
