The site has very high latency right now, perhaps due to the sampling technique employed. You seem to keep around 1000 samples, run a double for loop when combining estimates, and then randomly pick a subset of 1000. Suppose instead you use histograms, each represented as an array of (value, probability) pairs. The same double for loop now produces (value, probability) pairs, but you have n^2 values where you had n before, so choose a subset of n values that are nicely spaced. 100 should suffice, and maybe you can get away with 20. "Nicely spaced" is the hardest part of this algorithm; if you want the histogram bars to be equally spaced, the math gets a bit trickier.
As a side note, I think histograms preserve distributions much more accurately per bit of representation.
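A minimal sketch of the idea, with assumptions the comment leaves open: "combining" two estimates is taken to mean summing two independent random variables (so the double loop is a discrete convolution), and "nicely spaced" is interpreted as bucketing the n^2 output values into n equal-width bins, keeping each bin's total mass at its probability-weighted mean value. The function name and parameters are illustrative, not from any actual codebase:

```python
def combine_histograms(h1, h2, n=100):
    """Combine two histograms, each a list of (value, probability) pairs,
    and downsample the result to at most n bars."""
    # Double for loop: every pair of bars yields one output bar.
    # Assumes combining = sum of independent variables.
    combined = {}
    for v1, p1 in h1:
        for v2, p2 in h2:
            v = v1 + v2
            combined[v] = combined.get(v, 0.0) + p1 * p2

    items = sorted(combined.items())
    if len(items) <= n:
        return items

    # Downsample: n equal-width bins over the value range; each bin
    # keeps its total probability mass at the mass-weighted mean value.
    lo, hi = items[0][0], items[-1][0]
    width = (hi - lo) / n
    bins = [[0.0, 0.0] for _ in range(n)]  # [mass, mass-weighted value]
    for v, p in items:
        i = min(int((v - lo) / width), n - 1)
        bins[i][0] += p
        bins[i][1] += p * v
    return [(wv / p, p) for p, wv in bins if p > 0]
```

With the default n=100 a pair of 1000-sample inputs still costs a million iterations in the loop, but the output stays at n bars, so repeated combinations stay O(n^2) each rather than growing without bound.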