Alternative lazy value implementations #30
Hi, do you think the alternative implementations make a big difference compared to the 'double-checked locking' pattern?

```java
transient volatile Supplier<? extends T> supplier; // initialized on construction
volatile T value;

T get() {
    if (supplier != null) {
        synchronized (this) {
            if (supplier != null) {
                value = supplier.get();
                supplier = null; // free memory
            }
        }
    }
    return value;
}
```
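For reference, here is a self-contained, runnable version of the snippet above (the class name `Lazy` and its constructor are illustrative, not necessarily the project's actual API):

```java
import java.util.function.Supplier;

// Minimal double-checked-locking lazy value. Names are illustrative.
final class Lazy<T> {
    private volatile Supplier<? extends T> supplier; // null once evaluated
    private volatile T value;

    Lazy(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        if (supplier != null) {                 // first check, no lock
            synchronized (this) {
                if (supplier != null) {         // second check, under lock
                    value = supplier.get();
                    supplier = null;            // free memory; also marks "evaluated"
                }
            }
        }
        return value;
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        int[] evaluations = {0};
        Lazy<Integer> lazy = new Lazy<>(() -> { evaluations[0]++; return 42; });
        System.out.println(lazy.get()); // prints 42
        System.out.println(lazy.get()); // prints 42 again; supplier is not re-run
        System.out.println(evaluations[0]); // prints 1
    }
}
```

Because both fields are `volatile`, a thread that observes `supplier == null` is guaranteed (per the Java Memory Model) to also observe the write to `value` that happened before it.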
scalaz/scalaz#1103 suggests that it is significant, but I'm going to benchmark.
Thx
I implemented a test similar to the one you mentioned above (scalaz). It would be interesting to see the difference once you have implemented the alternative lazy value impls.

```java
public class Bench {

    static final int COUNT = 100000;
    static final int ITERATIONS = 100;
    static final int WARMUP = 10;

    public static void main(String[] args) {
        // Average time after 100 iterations: 6.490000 ms
        timed(() -> scala.collection.immutable.Stream.range(0, COUNT, 1, new scala.math.Numeric.IntIsIntegral$()).length());

        // Average time after 100 iterations: 7.070000 ms (using Lazy with synchronized)
        // Average time after 100 iterations: 5.500000 ms (using Lazy without synchronized)
        timed(() -> javaslang.collection.Stream.range(0, COUNT).length());

        // Average time after 100 iterations: 0.600000 ms
        timed(() -> java.util.stream.IntStream.range(0, COUNT).boxed().count());
    }

    static void timed(Runnable stuff) {
        long vs = 0;
        for (int i = 0; i < WARMUP; i++) {
            System.gc(); // try to get rid of potential GC pauses
            stuff.run();
        }
        for (int i = 0; i < ITERATIONS; i++) {
            System.gc(); // try to get rid of potential GC pauses
            long t = System.currentTimeMillis();
            stuff.run();
            vs += System.currentTimeMillis() - t;
        }
        System.out.printf("Average time after %d iterations: %f ms%n", ITERATIONS, ((double) vs) / ITERATIONS);
    }
}
```
(edit: removed the incorrect implementation)
My initial benchmarks do not show any improvement with AtomicReference over synchronized in single-threaded code. I need to look at the scalaz result to understand it better.
Peter Lawrey suggests using volatile over AtomicReference (but I think that does not match our use case, given the synchronized block): http://stackoverflow.com/questions/281132/java-volatile-reference-vs-atomicreference/497150#497150
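For comparison, a lock-free variant built on `AtomicReference` (a sketch for discussion, not javaslang's actual code) trades the monitor for a CAS and accepts that the supplier may run more than once under contention:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Lock-free lazy value: no synchronized block, but supplier.get() may
// run concurrently in several threads; only the first CAS winner's
// result is kept. Names are illustrative.
final class AtomicLazy<T> {
    private final Supplier<? extends T> supplier;
    private final AtomicReference<T> ref = new AtomicReference<>();

    AtomicLazy(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        T value = ref.get();
        if (value == null) {
            value = supplier.get();                  // may happen more than once
            if (!ref.compareAndSet(null, value)) {
                value = ref.get();                   // another thread won the race
            }
        }
        return value;
    }
}
```

Note the trade-offs: this sketch cannot cache a `null` value, and it keeps the supplier reachable forever, whereas the synchronized version nulls it out after evaluation.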
@danieldietrich I used your Bench with my List implementation and

```java
static final int COUNT = 500000;
static final int ITERATIONS = 200;
static final int WARMUP = 100;
```

results:

So it looks like usage of AtomicReference is only slightly useful in conjunction with a weak reference (as in the scalaz case).

edit: updated figures with a better impl for the Lazy AtomicReference.
@jbgi great, interesting insights! I will stay with the one impl in my case - simple and safe. If possible I prefer the "give me no configuration hell, just give me one option" API design principle :-) |
Closing, as there are very few use cases for weak laziness and the current double-checked locking idiom is fast enough.
In addition to the current synchronized-based implementation, two alternative implementations should be available and configurable via an annotation:

- `AtomicReference<Adt>`
- `AtomicReference<WeakReference<Adt>>`

which allow better throughput than the synchronized-based implementation at the price of possible concurrent evaluation.
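A rough sketch of the second variant (hypothetical names; the annotation-driven design described above was never implemented) shows the trade-off: the weak reference lets the GC reclaim the cached value, at the price of re-evaluating the supplier on a later access.

```java
import java.lang.ref.WeakReference;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Weakly-cached lazy value: the computed value can be garbage-collected
// and is then recomputed on the next get(). Sketch only, not the
// project's actual code.
final class WeakLazy<T> {
    private final Supplier<? extends T> supplier;
    private final AtomicReference<WeakReference<T>> ref =
            new AtomicReference<>(new WeakReference<>(null));

    WeakLazy(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        T value = ref.get().get();               // null if never computed or collected
        if (value == null) {
            value = supplier.get();              // may run more than once
            ref.set(new WeakReference<>(value)); // last writer wins
        }
        return value;
    }
}
```

This matches the scalaz-style use case mentioned above, where weak caching is what makes the AtomicReference approach pay off.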