
Performance of searchsorted worse for the O(1) method #54336

Closed
Moelf opened this issue May 2, 2024 · 7 comments
Labels
domain:sorting Put things in order performance Must go faster

Comments

@Moelf
Sponsor Contributor

Moelf commented May 2, 2024

1.10.3

julia> using BenchmarkTools

julia> e1 = 0:0.1:1;

julia> @benchmark searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 998 evaluations.
 Range (min … max):  14.165 ns … 22.398 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     14.586 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   14.672 ns ±  0.432 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

         ██
  ▂▂▂▂▂▃▄██▅▅▃▆▆▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▁▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ ▂
  14.2 ns         Histogram: frequency by time        17.1 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> e2 = collect(0:0.1:1);

julia> @benchmark searchsortedlast($e2, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  4.549 ns … 13.986 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     6.222 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   5.920 ns ±  0.717 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

       ▄▁                           ▁▃▇▆▅▂██▄▄
  ▁▁▁▆▇██▆▃▇▆▆▃▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂▂▄▄██████████▇▅▁▁▁▁▁▁▁▁▁▁▁▁ ▃
  4.55 ns        Histogram: frequency by time        7.17 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

This is interesting considering we have a special code path for this:

function searchsortedlast(a::AbstractRange{<:Real}, x::Real, o::FastRangeOrderings)::keytype(a)
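For context, the O(1) strategy behind that code path is roughly: estimate the index arithmetically from `first` and `step`, then verify against a materialized element to repair rounding near bin edges. A simplified sketch of the idea (this is not Base's actual implementation; it assumes forward ordering, a positive step, and one-based indexing, and like any such sketch it may mishandle exact ties):

```julia
# Sketch only, not Base's code: arithmetic index estimate, then one
# element check to correct an off-by-one caused by float rounding.
function sketch_searchsortedlast(a::AbstractRange{<:Real}, x::Real)
    x < first(a) && return 0
    x >= last(a) && return length(a)
    # Arithmetic O(1) estimate; can be off by one near bin boundaries.
    n = clamp(floor(Int, (x - first(a)) / step(a)) + 1, 1, length(a) - 1)
    if x < a[n]          # estimate overshot
        n - 1
    elseif x >= a[n + 1] # estimate undershot
        n + 1
    else
        n
    end
end
```

The element accesses `a[n]` / `a[n+1]` are exactly the `TwicePrecision` computations discussed below, which is where the time goes.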

@Moelf Moelf changed the title Performance of searchsorted worse for the O(1) Performance of searchsorted worse for the O(1) method May 2, 2024
@jishnub
Contributor

jishnub commented May 3, 2024

Is a few ns really a concern?

@Moelf
Sponsor Contributor Author

Moelf commented May 3, 2024

Yes to some degree, we routinely fill thousands of histograms with hundreds of millions of entries.

@giordano giordano added performance Must go faster domain:sorting Put things in order labels May 3, 2024
@mikmoore
Contributor

mikmoore commented May 3, 2024

Fiddling with the source code, I can reduce the runtime by half by replacing a[n] access with another variable (like h). So it appears a significant amount of runtime is devoted to materializing a[n] with its TwicePrecision step.

Note that this issue is a tad misleading. The slow part is accurately computing a[n] (which any accurate method must sometimes do, for tie-breaking purposes), and the collect offloads that cost outside the benchmark here. So the range method can't be faster than computing a[n]; the Vector version is faster only because for it a[n] is merely a value lookup rather than a calculation.

Which is to say that any significant improvement will require either that a[n] be faster or that we do it less often. I doubt it can be much faster (and still be right) as it's unlikely it was written poorly in the first place. It could be done less often if we used a finer-grained rounding so that we can avoid the check in non-borderline cases.

Here is a toy concept of checking a[n] less often. There may be off-by-one style errors in this implementation -- I was looking at the run-speed concept rather than ensuring it was definitely always correct. Also, for very long arrays (when nc > maxintfloat(T)), the original (essentially the c = 1 case) may risk roundoff error (I'd have to think harder), and this implementation (c > 1) definitely does. So larger c widens the regime where roundoff error is possible.

function dev_searchsortedlast(a::AbstractRange{<:Real}, x::Real)::keytype(a)
    o = Base.Order.Forward # should be an input
    Base.require_one_based_indexing(a)
    f, h, l = first(a), step(a), last(a)
    if Base.Order.lt(o, x, f)
        0
    elseif !Base.Order.lt(o, x, l) || h == 0
        length(a)
    else
        c = 2^3
        nc = round(Int, (x - f) / h * c, RoundNearest)
    n, r = fldmod(nc, c)
        # if r==0, we are on the border between bins and n+1 might be too big
        iszero(r) && Base.Order.lt(o, x, a[n+1]) ? n : n+1
    end
end

But it only increases throughput by about 20% for me. Finer discretization (larger c) didn't improve things noticeably more. Personally, I'm not completely sold that this approach is worth it.

julia> using BenchmarkTools

julia> e1 = 0:0.1:1;

julia> @benchmark searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 997 evaluations.
 Range (min … max):  21.264 ns … 77.232 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     21.364 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   22.000 ns ±  3.790 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▆                                                          ▁
  ███▇▆▇▆▆▄▃▃▃▃▁▃▄▃▁▃▁▃▃▃▁▃▁▁▄▁▁▃▁▃▃▁▁▁▁▄▅▇▅▅▅▄▄▄▅▃▃▁▁▃▃▃▃▆▆▅ █
  21.3 ns      Histogram: log(frequency) by time      42.2 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark dev_searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 998 evaluations.
 Range (min … max):  16.633 ns … 85.872 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     16.834 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   18.441 ns ±  4.441 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █
  █▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▃▂▂ ▂
  16.6 ns         Histogram: frequency by time          29 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> e3 = StepRangeLen(0.0, 0.1, 11); # less precise step

julia> @benchmark searchsortedlast($e3, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 998 evaluations.
 Range (min … max):  14.128 ns … 66.032 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     15.030 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   15.812 ns ±  4.555 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▇▇█▁                                                        ▂
  ████▆▄▃▃▅▆▆▇▇▇██▆▅▄▄▁▃▁▁▁▃▃▁▁▁▁▁▄▇███▇▆▆▅▅▁▄▃▄▄▇▇▇▇▇▅▅▅▃▄▅▅ █
  14.1 ns      Histogram: log(frequency) by time        39 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark dev_searchsortedlast($e3, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):   9.309 ns … 71.071 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):      9.610 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   10.699 ns ±  2.950 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

   ▆█ ▄
  ▅██▆█▆▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▂▁▁▁▁▁▁▁▂▁▁▁▁▁▂▂▅▅ ▃
  9.31 ns         Histogram: frequency by time        16.9 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Note the tests with e3, which show the benefit of a range with less step precision (compare e1.step to e3.step).

@Moelf
Sponsor Contributor Author

Moelf commented May 3, 2024

wait why do we need to look up a[n]? my understanding is for uniform binning you just need:
https://github.com/Moelf/FHist.jl/blob/160d675455a9e40a909e3f97d15a3f9a6c5e0659/src/polybinedges.jl#L45

where the inv_step is just a pre-computed inv(step(range))
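(The linked helper amounts to roughly the following sketch. This is a hypothetical reconstruction for illustration, not the actual FHist.jl source; the `UniformEdges` struct and `binindex` name are invented here.)

```julia
# Hypothetical FHist.jl-style O(1) uniform-bin lookup (not the real source).
# inv_step is precomputed once as inv(step(edges)), so each query is just
# one subtraction, one multiplication, and a floor.
struct UniformEdges
    first_edge::Float64
    inv_step::Float64
end

binindex(e::UniformEdges, x) = floor(Int, (x - e.first_edge) * e.inv_step) + 1

edges = UniformEdges(0.0, inv(0.1))  # edges 0.0, 0.1, ..., 1.0
binindex(edges, 0.25)                # → 3, i.e. the bin [0.2, 0.3)
```

As the reply below explains, this pure-arithmetic approach trades accuracy for speed: values that land exactly on a bin edge can be assigned to the wrong bin.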

@mikmoore
Contributor

mikmoore commented May 3, 2024

wait why do we need to look up a[n]?

In infinite precision you can use simple arithmetic like FHist.jl attempts to do. But these are floating point numbers with finite precision. Let's apply the algorithm linked from FHist.jl to the following case:

julia> v1 = 0.0:0.2:1
0.0:0.2:1.0

julia> (step(v1), v1.step) # notice that `step(v1)` does not give the full info
(0.2, Base.TwicePrecision{Float64}(0.19999999999999973, 2.6645352591003756e-16))

julia> collect(v1)
6-element Vector{Float64}:
 0.0
 0.2
 0.4
 0.6
 0.8
 1.0

julia> (0.6 - first(v1)) / step(v1) # == 0.6 / 0.2
2.9999999999999996

julia> floor(Int, (0.6 - first(v1)) / step(v1)) + 1 # wrong answer
3

julia> (0.6 - first(v1)) * inv(step(v1)) # note: `inv(step())` gets lucky in this case, but is less accurate in general
3.0

julia> searchsortedlast(v1, 0.6) # right answer
4

Notice that the calculation in FHist.jl actually gets the answer right here if we use * inv(step). But that was pure luck: in general, multiplying by inv(step) is less accurate and more prone to mistakes. Repeat the above with a different range whose step is represented exactly, so this should be an easier case.

julia> v2 = 0.0:49.0:196.0
0.0:49.0:196.0

julia> (step(v2), v2.step) # the `step` is represented exactly in just a Float64
(49.0, Base.TwicePrecision{Float64}(49.0, 0.0))

julia> (3*49.0 - first(v2)) / step(v2) + 1 # will give the correct answer
4.0

julia> (3*49.0 - first(v2)) * inv(step(v2)) + 1 # will give the wrong answer
3.9999999999999996

Ultimately, these finite precision issues mean that it is very difficult (expensive) to get the answer definitely correct in all cases from just the start and step. To get it right, it's safer and no more expensive to simply check against a value in the collection to see if you got the answer right.

@Moelf
Sponsor Contributor Author

Moelf commented May 3, 2024

sigh, right, I remember it all now, this is the trade-off of our range objects being more accurate. Thanks.

@Moelf Moelf closed this as completed May 3, 2024
@LilithHafner
Member

As expected, the O(1) method outperforms the O(log(n)) method on larger inputs:

julia> e1 = 0:0.00001:1;

julia> @benchmark searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  8.258 ns … 13.847 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     8.341 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   8.419 ns ±  0.273 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

      ▄   █                                                   
  ▃▁▁▁█▁▁▁█▁▁▁▃▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▁▁▁▂▁▁▁▅▁▁▁▄▁▁▁▂ ▂
  8.26 ns        Histogram: frequency by time        8.84 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> e2 = collect(0:0.00001:1);

julia> @benchmark searchsortedlast($e2, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 986 evaluations.
 Range (min … max):  52.104 ns … 83.883 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     57.809 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   56.245 ns ±  2.712 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

   ▆▇▂▃▂ ▂                      ▄        ▇█▄▅▃▁▃▂             ▂
  ▆████████▆▇▅▇▆▆▆▄▅▅▄▄▃▃▃▁▁▁▁▄▇████▅▇▆▅▇██████████▇▇█▇▇▆▆▄▅▅ █
  52.1 ns      Histogram: log(frequency) by time      60.6 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Development

No branches or pull requests

5 participants