Fix #5 by using as many BigInt in-place ops as possible to avoid allocations #20
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #20      +/-   ##
==========================================
+ Coverage   75.29%   84.02%   +8.72%
==========================================
  Files           5        5
  Lines         757      726      -31
==========================================
+ Hits          570      610      +40
+ Misses        187      116      -71
```

Continue to review full report at Codecov.
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #20      +/-   ##
==========================================
+ Coverage   75.29%   75.45%   +0.15%
==========================================
  Files           5        5
  Lines         757      774      +17
==========================================
+ Hits          570      584      +14
- Misses        187      190       +3
```

Continue to review full report at Codecov.
@simonbyrne, just pinging you here if you wouldn't mind taking a look; I know we've talked strategy here a bit, and I think this is looking pretty good. Here's a taste of the performance improvement from pre-allocating a few BigInts and doing in-place operations.

Before:

```julia
julia> run(@benchmarkable Parsers.defaultparser(io, r) setup=(io = IOBuffer("0.0017138347201173243"); r = Parsers.Result($T)))
BenchmarkTools.Trial:
  memory estimate:  408 bytes
  allocs estimate:  24
  --------------
  minimum time:     1.098 μs (0.00% GC)
  median time:      1.277 μs (0.00% GC)
  mean time:        1.363 μs (0.00% GC)
  maximum time:     29.360 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1
```

After:

```julia
julia> run(@benchmarkable Parsers.defaultparser(io, r) setup=(io = IOBuffer("0.0017138347201173243"); r = Parsers.Result($T)))
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     216.000 ns (0.00% GC)
  median time:      232.000 ns (0.00% GC)
  mean time:        244.993 ns (0.00% GC)
  maximum time:     36.176 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1
```

And compared to Base.parse:

```julia
julia> @benchmark parse(Float64, "0.0017138347201173243")
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     299.788 ns (0.00% GC)
  median time:      310.359 ns (0.00% GC)
  mean time:        329.445 ns (0.00% GC)
  maximum time:     928.122 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     245
```
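As a hedged illustration of the pattern described above (not the PR's actual code; `SCRATCH` and `mulpow5!` are made-up names), the in-place `Base.GMP.MPZ` wrappers let a hot path reuse one buffer instead of allocating a fresh BigInt per operation:

```julia
using Base.GMP: MPZ  # Julia's internal wrappers over GMP's in-place functions

const SCRATCH = BigInt()  # allocated once, reused on every call

# Multiply a parsed mantissa by 5^k without allocating: copy into the scratch
# buffer, then multiply in place using the bipows5 table from the diff below.
function mulpow5!(mant::BigInt, k::Int)
    MPZ.set!(SCRATCH, mant)            # SCRATCH = mant
    MPZ.mul!(SCRATCH, bipows5[k + 1])  # SCRATCH *= 5^k, in place
    return SCRATCH
end
```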
```julia
# Preallocated scratch BigInts, reused by the in-place GMP operations below
const QUO = BigInt()
const REM = BigInt()
const SCL = BigInt()

const BIG_E = UInt8('E')
const LITTLE_E = UInt8('e')

# Precomputed powers of five, used when scaling decimal mantissas
const bipows5 = [big(5)^x for x = 0:325]

function roundQuotient(num, den)
```
I assume this is doing something like `round(num/den)`, but round-to-nearest?

Nice. It generally looks okay. The main problem is handling multithreading. For worst-case test values, see https://www.icir.org/vern/papers/testbase-report.pdf
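For readers following along, a round-to-nearest quotient with ties going to even can be written like the sketch below. This is an illustration of the semantics being asked about, not necessarily the PR's implementation, and it assumes nonnegative operands:

```julia
# Hypothetical sketch: compute round(num/den), ties to even,
# assuming num >= 0 and den > 0.
function round_quotient_sketch(num::BigInt, den::BigInt)
    q, r = divrem(num, den)
    c = cmp(r << 1, den)  # compare 2r against den to locate the midpoint
    if c > 0 || (c == 0 && isodd(q))
        q += 1            # remainder is past half, or exactly half with q odd
    end
    return q
end
```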
src/floats.jl (outdated)

```diff
  bex = bitlength(num) - significantbits(T)
  bex <= 0 && return ldexp(T(num), exp)
- quo = roundQuotient(num, big(1) << bex)
+ MPZ.mul_2exp!(MPZ.set_si!(ONE, 1), bex)
+ quo = roundQuotient(num, ONE)
```
Just so I understand: this is essentially implementing `Float64(x::BigInt)`?

Man, we need to improve the performance of the one in Base.
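For context, a hedged sketch of that idea (illustrative only, with made-up names; it rounds ties upward for brevity, where a real implementation would round to even): shave off the bits beyond the 53-bit significand, round the quotient, then scale back up with `ldexp`.

```julia
# Illustrative sketch: convert a positive BigInt to Float64.
function big_to_float_sketch(num::BigInt)
    bex = ndigits(num, base=2) - 53  # excess bits beyond Float64's significand
    bex <= 0 && return Float64(num)  # small enough to convert exactly
    q, r = divrem(num, big(1) << bex)
    (r << 1) >= (big(1) << bex) && (q += 1)  # round to nearest, ties up
    return ldexp(Float64(q), bex)            # scale back by 2^bex
end
```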
@simonbyrne thanks for taking a look; good call on the multithreaded case, we should be good now. I also added a bunch of tests from the paper you linked; great stuff!
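One common way to make preallocated buffers like these safe under threading is one scratch value per thread; a hedged sketch follows (the PR's actual fix may differ, and the names are hypothetical):

```julia
# Hypothetical sketch: per-thread scratch BigInts so concurrent parsers never
# share a buffer. Built at module load time in __init__ so the array length
# matches the runtime thread count rather than the precompile-time one.
const SCRATCHES = Vector{BigInt}()

function __init__()
    append!(SCRATCHES, [BigInt() for _ in 1:Threads.nthreads()])
end

scratch() = SCRATCHES[Threads.threadid()]  # each thread gets its own buffer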