
Alternative lazy value implementations #30

Closed
jbgi opened this issue Feb 15, 2016 · 10 comments

@jbgi
Member

jbgi commented Feb 15, 2016

In addition to the current synchronized-based implementation, two alternative implementations should be available and configurable via annotation:

  • one based on AtomicReference<Adt>
  • one based on AtomicReference<WeakReference<Adt>>

which would allow better throughput than the synchronized-based implementation, at the price of possible concurrent evaluation.
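A minimal sketch of how the first variant could look (the class and method names are my own assumptions, not from this issue). On contention, several threads may evaluate the supplier concurrently, but `compareAndSet` ensures only one result is ever published:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical AtomicReference-based lazy value; assumes the supplier
// never returns null (null is used as the "not yet computed" marker).
final class AtomicLazy<T> {

    private final AtomicReference<T> ref = new AtomicReference<>();
    private volatile Supplier<? extends T> supplier; // null once evaluated

    AtomicLazy(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        T value = ref.get();
        if (value == null) {
            Supplier<? extends T> s = supplier;
            if (s != null) {
                ref.compareAndSet(null, s.get()); // losing racers discard their result
                supplier = null; // free the supplier and anything it captures
            }
            value = ref.get();
        }
        return value;
    }

    public static void main(String[] args) {
        AtomicLazy<Integer> lazy = new AtomicLazy<>(() -> 6 * 7);
        System.out.println(lazy.get()); // prints 42
    }
}
```

Note that the final `ref.get()` is safe even when another thread already cleared `supplier`: the volatile write to `supplier` happens after the successful CAS, so a thread that reads `supplier == null` is guaranteed to see the published value.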

@danieldietrich

Hi, do you think it makes a big difference to have them when using the 'double-checked locking' pattern?

transient volatile Supplier<? extends T> supplier; // initialized on construction
volatile T value;

T get() {
    if (supplier != null) {              // first check, without locking
        synchronized (this) {
            if (supplier != null) {      // second check, under the lock
                value = supplier.get();
                supplier = null;         // free mem; volatile write publishes value
            }
        }
    }
    return value;
}

@jbgi
Member Author

jbgi commented Feb 15, 2016

scalaz/scalaz#1103 suggests that it is significant, but I'm going to benchmark.

@danieldietrich

Thx

@danieldietrich

I implemented a test similar to the one you mentioned above (scalaz). It would be interesting to see the difference once you have implemented the alternative lazy value impls.

public class Bench {

    static final int COUNT = 100000;
    static final int ITERATIONS = 100;
    static final int WARMUP = 10;

    public static void main(String[] args) {

        // Average time after 100 iterations: 6.490000 ms
        timed(() -> scala.collection.immutable.Stream.range(0, COUNT, 1, new scala.math.Numeric.IntIsIntegral$()).length());

        // Average time after 100 iterations: 7.070000 ms (using Lazy with synchronized)
        // Average time after 100 iterations: 5.500000 ms (using Lazy without synchronized)
        timed(() -> javaslang.collection.Stream.range(0, COUNT).length());

        // Average time after 100 iterations: 0.600000 ms
        timed(() -> java.util.stream.IntStream.range(0, COUNT).boxed().count());
    }

    static void timed(Runnable stuff) {
        long vs = 0;
        for (int i = 0; i < WARMUP; i++) {
            System.gc(); // try to get rid of potential GC pauses
            stuff.run();
        }
        for (int i = 0; i < ITERATIONS; i++) {
            System.gc(); // try to get rid of potential GC pauses
            long t = System.currentTimeMillis();
            stuff.run();
            vs += System.currentTimeMillis() - t;
        }
        System.out.printf("Average time after %d iterations: %f ms\n", ITERATIONS, ((double) vs) / ITERATIONS);
    }

}

@jbgi
Member Author

jbgi commented Feb 15, 2016

(edit: removed an incorrect implementation)

@jbgi
Member Author

jbgi commented Feb 15, 2016

my initial benchmarks do not show any improvement with AtomicReference over synchronized in single-threaded code. I need to check the scalaz results to understand this better.

@danieldietrich

Peter Lawrey suggests using volatile over AtomicReference (but I think it does not match our use case with synchronized): http://stackoverflow.com/questions/281132/java-volatile-reference-vs-atomicreference/497150#497150.

@jbgi
Member Author

jbgi commented Feb 16, 2016

@danieldietrich I used your Bench with my List implementation and

    static final int COUNT = 500000;
    static final int ITERATIONS = 200;
    static final int WARMUP = 100;

results:

  • not synchronized:
    Average time after 200 iterations: 14.260000 ms
  • Lazy synchronized (double-checked locking):
    Average time after 200 iterations: 17.850000 ms
  • Lazy AtomicReference:
    Average time after 200 iterations: 18.785000 ms
  • Lazy synchronized WeakReference (double-checked locking):
    Average time after 200 iterations: 20.530000 ms
  • Lazy AtomicReference of WeakReference:
    Average time after 200 iterations: 18.230000 ms

So it looks like AtomicReference is only slightly useful, and only in conjunction with weak references (as in the scalaz case).
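For reference, the weak variant measured above could be sketched roughly like this (class and method names are my own assumptions, not from the benchmark code). The cached value can be reclaimed by the GC, so the supplier must be kept around: after a collection, the value is simply recomputed on demand:

```java
import java.lang.ref.WeakReference;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical AtomicReference<WeakReference<T>>-based lazy value;
// assumes the supplier is pure and never returns null.
final class WeakLazy<T> {

    private final Supplier<? extends T> supplier; // kept: value may be recomputed
    private final AtomicReference<WeakReference<T>> ref =
            new AtomicReference<>(new WeakReference<>(null));

    WeakLazy(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        T value = ref.get().get();
        if (value == null) {        // never computed, or already collected
            value = supplier.get(); // may run concurrently in several threads
            ref.set(new WeakReference<>(value));
        }
        return value; // the caller's strong reference keeps it alive from here
    }

    public static void main(String[] args) {
        WeakLazy<String> lazy = new WeakLazy<>(() -> "computed");
        System.out.println(lazy.get()); // prints computed
    }
}
```

This is the "weak laziness" trade-off: memory can be reclaimed under pressure, but the supplier must stay reachable and evaluation may happen more than once.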

edit: updated figures with better impl for Lazy AtomicReference.

@danieldietrich

@jbgi great, interesting insights! I will stay with the one impl in my case - simple and safe. If possible I prefer the "give me no configuration hell, just give me one option" API design principle :-)

@jbgi
Member Author

jbgi commented Feb 18, 2016

Closing, as there are very few use cases for weak laziness and the current double-checked locking idiom is fast enough.

@jbgi jbgi closed this as completed Feb 18, 2016
@jbgi jbgi added the wontfix label Feb 18, 2016