
Faster implementation of Float64(::BigInt), Float32(::BigInt) and Float16(::BigInt) #31502

Merged: 6 commits, Apr 22, 2019

Conversation

narendrakpatel
Contributor

@narendrakpatel narendrakpatel commented Mar 27, 2019

Closes #31293
Implements a faster version of Float64(::BigInt), Float32(::BigInt) and Float16(::BigInt).
Rounding Behaviour: RoundNearest
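
The general idea can be sketched as follows. This is a hypothetical illustration (the function name `tofloat` and the use of `BigInt` shifts are mine), not the merged code, which reads GMP limbs directly for speed:

```julia
# Hypothetical sketch of BigInt -> float conversion with RoundNearest
# (ties to even). The real PR operates on GMP limbs; this version uses
# BigInt shifts for clarity.
function tofloat(::Type{T}, x::BigInt) where {T<:Union{Float16,Float32,Float64}}
    iszero(x) && return zero(T)
    neg = x < 0
    a = neg ? -x : x
    p = precision(T)                        # 11, 24, or 53 significand bits
    nb = ndigits(a, base=2)                 # bit length of |x|
    if nb <= p
        f = T(UInt64(a))                    # exactly representable
    else
        sh = nb - (p + 1)                   # keep p bits plus one rounding bit
        top = UInt64(a >> sh)               # the top p+1 bits
        sticky = (a & ((big(1) << sh) - 1)) != 0   # any bits shifted out?
        q, r = top >> 1, top & 1            # p-bit significand, rounding bit
        if r == 1 && (sticky || isodd(q))   # round to nearest, ties to even
            q += 1                          # q may reach 2^p; still exact in T
        end
        f = ldexp(T(q), sh + 1)             # overflows to Inf when too large
    end
    return neg ? -f : f
end
```

For instance, `tofloat(Float64, big"2"^100 + 1)` should agree with `Float64(big"2"^100)`, since the trailing 1 lies below the rounding point.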

(inline review comment on base/gmp.jl; outdated, resolved)
@simonbyrne
Contributor

In general this looks pretty good: it would be nice if we could make this more generic, as we will need to repeat it for Float32 and Float16.

We also need to make sure we cover good test cases.

@narendrakpatel
Contributor Author

narendrakpatel commented Mar 29, 2019

Currently, I am refactoring the code to remove unnecessary computations and make it cleaner. For test cases, I am considering checking for double rounding.

For generalization, I am trying to write the code in terms of a variable prec, which would take the value 24 for Float32 and 11 for Float16. I believe a shared method can be written for Float32 and Float16, but maybe not for Float64, as it can consume 3 limbs. Any suggestions?

Also, should the work related to Float32 and Float16 be done in this PR itself, or should a separate issue be opened?

@simonbyrne
Contributor

Let's keep it all in the same pull request.

To help make it generic, there are various utility functions that can be of use, e.g.:

julia/base/float.jl

Lines 571 to 582 in 91151ab

"""
precision(num::AbstractFloat)
Get the precision of a floating point number, as defined by the effective number of bits in
the mantissa.
"""
function precision end
precision(::Type{Float16}) = 11
precision(::Type{Float32}) = 24
precision(::Type{Float64}) = 53
precision(::T) where {T<:AbstractFloat} = precision(T)

julia/base/float.jl

Lines 846 to 873 in 91151ab

# bit patterns
reinterpret(::Type{Unsigned}, x::Float64) = reinterpret(UInt64, x)
reinterpret(::Type{Unsigned}, x::Float32) = reinterpret(UInt32, x)
reinterpret(::Type{Signed}, x::Float64) = reinterpret(Int64, x)
reinterpret(::Type{Signed}, x::Float32) = reinterpret(Int32, x)
sign_mask(::Type{Float64}) = 0x8000_0000_0000_0000
exponent_mask(::Type{Float64}) = 0x7ff0_0000_0000_0000
exponent_one(::Type{Float64}) = 0x3ff0_0000_0000_0000
exponent_half(::Type{Float64}) = 0x3fe0_0000_0000_0000
significand_mask(::Type{Float64}) = 0x000f_ffff_ffff_ffff
sign_mask(::Type{Float32}) = 0x8000_0000
exponent_mask(::Type{Float32}) = 0x7f80_0000
exponent_one(::Type{Float32}) = 0x3f80_0000
exponent_half(::Type{Float32}) = 0x3f00_0000
significand_mask(::Type{Float32}) = 0x007f_ffff
sign_mask(::Type{Float16}) = 0x8000
exponent_mask(::Type{Float16}) = 0x7c00
exponent_one(::Type{Float16}) = 0x3c00
exponent_half(::Type{Float16}) = 0x3800
significand_mask(::Type{Float16}) = 0x03ff
# integer size of float
uinttype(::Type{Float64}) = UInt64
uinttype(::Type{Float32}) = UInt32
uinttype(::Type{Float16}) = UInt16
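
As a quick illustration of how these helpers fit together (they are internal to Base and not exported, so qualified names are used; this is a sanity check of my own, not code from the PR):

```julia
for T in (Float16, Float32, Float64)
    U = Base.uinttype(T)
    # one(T) has sign bit 0, the biased exponent for 2^0, and all-zero
    # significand bits, so its bit pattern is exactly exponent_one(T)
    @assert reinterpret(U, one(T)) === Base.exponent_one(T)
    @assert reinterpret(U, T(0.5)) === Base.exponent_half(T)
    # the significand mask has precision(T) - 1 explicit bits set
    @assert Base.significand_mask(T) >> (precision(T) - 1) === zero(U)
end
```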

@narendrakpatel
Contributor Author

I will make sure to use them as much as possible.

@narendrakpatel narendrakpatel changed the title WIP: Faster implementation of Float64(::BigInt) Faster implementation of Float64(::BigInt), Float32(::BigInt) and Float16(::BigInt) Mar 31, 2019
@narendrakpatel narendrakpatel marked this pull request as ready for review March 31, 2019 21:03
@narendrakpatel
Contributor Author

@simonbyrne I have implemented the conversion methods with RoundNearest rounding behavior.
I had a doubt regarding tests: there is currently no test/gmp.jl file, so shall I create a new file and write the tests there, or shall I write them in float.jl?

@rfourquet
Member

there is no test/gmp.jl

But there is "test/bigint.jl".

@narendrakpatel
Contributor Author

Yes, there is. 😅
I will just append tests in it. Thanks, @rfourquet

@narendrakpatel
Contributor Author

@simonbyrne is this okay?

@simonbyrne
Contributor

In general, it looks pretty good. For each type `T` it would be a good idea to check:

n = exponent(floatmax(T))
@test T(big"2"^(n+1)) === T(Inf)
@test T(big"2"^(n+1) - big"2"^(n-precision(T))) === T(Inf)
@test T(big"2"^(n+1) - big"2"^(n-precision(T))-1) === floatmax(T)
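
Concretely, for `T = Float64` we have `n = 1023` and `precision(T) = 53`, so these three tests pin down the overflow threshold: `floatmax(Float64) = 2^1024 - 2^971`, the midpoint between it and `2^1024` is `2^1024 - 2^970`, and under round-to-nearest (ties to even) the midpoint rounds up to `Inf`. Spelled out (my own instantiation of the tests above, assuming a correctly rounded conversion):

```julia
n = exponent(floatmax(Float64))
@assert n == 1023
@assert Float64(big"2"^1024) === Inf                    # above the range
@assert Float64(big"2"^1024 - big"2"^970) === Inf       # midpoint rounds up
@assert Float64(big"2"^1024 - big"2"^970 - 1) === floatmax(Float64)
```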

@narendrakpatel narendrakpatel force-pushed the np/bigint-to-float branch 2 times, most recently from a84d770 to 0aa22e9 Compare April 3, 2019 18:11
@narendrakpatel
Contributor Author

@simonbyrne I have updated the tests. Is there any way I can generalize the other tests too? The idea I used is to test conversion of BigInts that use different numbers of limbs.
If the current format of the tests is fine, then this PR might be ready to merge.

@simonbyrne simonbyrne closed this Apr 3, 2019
@simonbyrne simonbyrne reopened this Apr 3, 2019
(inline review comment on base/gmp.jl; outdated, resolved)
Contributor

@simonbyrne simonbyrne left a comment


We can remove the existing definition of round-to-nearest conversion, and make it this one.

After this, I think it's good to go.

(3 inline review comments on base/gmp.jl; outdated, resolved)
@narendrakpatel
Contributor Author

narendrakpatel commented Apr 3, 2019

@simonbyrne The PR has been updated accordingly.

@narendrakpatel
Contributor Author

narendrakpatel commented Apr 4, 2019

If this is okay, can we merge this? (before the 1.2 freeze 😄 ) @simonbyrne

(3 inline review comments on base/gmp.jl; outdated, resolved)
@simonbyrne
Contributor

Looks like there is a test failure on 32-bit Linux:

Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/bigint.jl:447
  Expression: Float64(-x) == -(Float64(x))
   Evaluated: -1.1579208923884626e77 == -1.1579208923731622e77
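
The thread does not record the root cause, but a plausible culprit for a failure that appears only on 32-bit Linux is limb width: GMP limbs are 32 bits on 32-bit platforms, so the same BigInt occupies twice as many limbs, and limb indexing tuned for 64-bit limbs breaks. A quick diagnostic sketch (`Base.GMP.Limb` and the `size` field of `BigInt` are internals, shown here only for illustration):

```julia
using Base.GMP: Limb   # Limb is UInt: 32 bits on 32-bit platforms, 64 otherwise
x = big"2"^255
println(sizeof(Limb) * 8)   # 64 on a 64-bit machine, 32 on a 32-bit one
println(abs(x.size))        # limbs in use: 4 with 64-bit limbs, 8 with 32-bit
```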

@narendrakpatel
Contributor Author

@simonbyrne All the tests are now passing except the following:

  • package_freebsd64: error during make
  • appveyor: error while running the Pkg/pkg test
  • travis-ci: failed due to a connection timeout

@fredrikekre fredrikekre added domain:bignums BigInt and BigFloat performance Must go faster labels Apr 11, 2019
@narendrakpatel
Contributor Author

@simonbyrne If this is okay, can this be merged?

@simonbyrne simonbyrne merged commit 85daeab into JuliaLang:master Apr 22, 2019
Development

Successfully merging this pull request may close these issues.

BigInt to Float conversions are unnecessarily slow