
Move to Java 8 #84

Merged — 9 commits, Dec 19, 2017

Conversation

@spkrka (Member) commented Dec 17, 2017

This is a continuation of PR #80 with some additional changes

  • Minor code refactoring
  • Rebased on master with maxKeyLength changes
  • Remove HostAndPort from API to avoid Guava version conflicts

@spkrka (Member, Author) commented Dec 17, 2017

I ran the built-in benchmark and got the following results:

Java 8 patch:
   531,857 (   532,599) req/s. 0.375054881 ms average latency. 82,714,350 req total. process= 34%, total= 96%
   539,339 (   532,798) req/s. 0.375082701 ms average latency. 83,253,950 req total. process= 33%, total= 96%
   533,058 (   533,828) req/s. 0.374174568 ms average latency. 83,787,250 req total. process= 33%, total= 95%
   524,107 (   533,959) req/s. 0.373943263 ms average latency. 84,311,600 req total. process= 33%, total= 95%
   527,468 (   534,254) req/s. 0.373937738 ms average latency. 84,839,400 req total. process= 33%, total= 94%
   546,497 (   535,877) req/s. 0.372827168 ms average latency. 85,386,150 req total. process= 34%, total= 96%
   530,297 (   535,486) req/s. 0.373091357 ms average latency. 85,917,000 req total. process= 32%, total= 95%
   522,152 (   532,827) req/s. 0.375049043 ms average latency. 86,439,400 req total. process= 32%, total= 94%
   509,555 (   530,013) req/s. 0.376635509 ms average latency. 86,950,750 req total. process= 33%, total= 92%
   523,221 (   528,749) req/s. 0.377813991 ms average latency. 87,474,200 req total. process= 34%, total= 95%
   529,417 (   528,505) req/s. 0.378297757 ms average latency. 88,003,850 req total. process= 33%, total= 95%
   528,923 (   527,464) req/s. 0.378637377 ms average latency. 88,533,600 req total. process= 32%, total= 96%

Master:
   516,988 (   526,206) req/s. 0.379126331 ms average latency. 83,928,500 req total. process= 30%, total= 95%
   518,609 (   523,434) req/s. 0.381329870 ms average latency. 84,447,350 req total. process= 30%, total= 94%
   517,543 (   522,424) req/s. 0.381831344 ms average latency. 84,965,300 req total. process= 31%, total= 95%
   490,115 (   517,398) req/s. 0.385809906 ms average latency. 85,455,650 req total. process= 29%, total= 96%
   535,995 (   517,993) req/s. 0.385377292 ms average latency. 85,991,900 req total. process= 30%, total= 95%
   550,191 (   520,936) req/s. 0.383012855 ms average latency. 86,542,550 req total. process= 33%, total= 96%
   499,117 (   517,598) req/s. 0.385142833 ms average latency. 87,041,900 req total. process= 30%, total= 95%
   510,090 (   516,316) req/s. 0.386552983 ms average latency. 87,552,200 req total. process= 32%, total= 95%
   496,573 (   514,718) req/s. 0.387608950 ms average latency. 88,049,000 req total. process= 30%, total= 95%
   468,376 (   510,361) req/s. 0.390757088 ms average latency. 88,517,600 req total. process= 29%, total= 94%

I am not sure how much it can be trusted. Right now it looks like using Java 8 futures is a slight optimization, but not by a large margin.

At least this might indicate that there is no performance regression.

Before:

    CallbackSettableFuture<T> newFuture = new CallbackSettableFuture<>(future);
    future.addListener(newFuture, executor);
    return newFuture;

After:

    return future.thenApplyAsync(Function.identity(), executor);
@spkrka (Member, Author)
I am not sure if this is correct. Will exceptions run on this executor? How could I make that happen?

@danielnorberg (Contributor) commented Dec 18, 2017

The docs and a quick empirical test indicate that the executor supplied to thenApplyAsync is not used for exceptions.

The handleAsync method seems to cause both normal values and exceptions to be run on the supplied executor.

E.g.

return future.handleAsync((v, e) -> {
  if (v != null) {
    return v;
  } else {
    throw new CompletionException(e);
  }
}, executor);
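A minimal sketch (not from the PR; names are illustrative) to check this behavior empirically: complete a future exceptionally, attach a handleAsync callback with a named executor thread, and observe which thread the callback runs on.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorPathDemo {

  // Returns the name of the thread on which handleAsync ran its callback.
  static String handleThread(CompletionStage<String> stage, Executor executor)
      throws Exception {
    CompletableFuture<String> threadName = new CompletableFuture<>();
    stage.handleAsync((v, e) -> {
      // handleAsync runs on the supplied executor for BOTH normal and
      // exceptional completion, unlike thenApplyAsync.
      threadName.complete(Thread.currentThread().getName());
      if (e != null) {
        throw new CompletionException(e);
      }
      return v;
    }, executor);
    return threadName.get(5, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    ExecutorService executor =
        Executors.newSingleThreadExecutor(r -> new Thread(r, "callback-executor"));
    CompletableFuture<String> failed = new CompletableFuture<>();
    failed.completeExceptionally(new RuntimeException("boom"));
    // Prints "callback-executor": the exceptional path did use the executor.
    System.out.println(handleThread(failed, executor));
    executor.shutdown();
  }
}
```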

Contributor

Even better:

return future.whenCompleteAsync((v, e) -> { }, executor);

Before:

    public ListenableFuture<List<T>> unwrapList(ListenableFuture<List<GetResult<T>>> future) {
      return Utils.transform(future, listResultUnwrapper);

After:

    public CompletionStage<List<T>> unwrapList(CompletionStage<List<GetResult<T>>> future) {
      return future.thenApplyAsync(listResultUnwrapper, Utils.SAME_THREAD_EXECUTOR);


Use a simple thenApply(...) instead.
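The suggestion can be sketched as follows (a standalone illustration, not the PR's code; GetResult here is a minimal stand-in for the client's result type): for a cheap transform, thenApply runs the function without an executor hop, which is equivalent to thenApplyAsync with a same-thread executor but simpler.

```java
import java.util.List;
import java.util.concurrent.CompletionStage;
import java.util.stream.Collectors;

public class UnwrapSketch {

  // thenApply instead of thenApplyAsync(fn, sameThreadExecutor): same effect
  // for a cheap, non-blocking transform, with less ceremony.
  static <T> CompletionStage<List<T>> unwrapList(
      CompletionStage<List<GetResult<T>>> future) {
    return future.thenApply(results ->
        results.stream().map(GetResult::getValue).collect(Collectors.toList()));
  }

  // Minimal stand-in for the client's GetResult type.
  static class GetResult<T> {
    private final T value;
    GetResult(T value) { this.value = value; }
    T getValue() { return value; }
  }
}
```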

Before:

    return Utils.transform(future, listResultDecoder);

After:

    public CompletionStage<List<GetResult<T>>> decodeList(
        CompletionStage<List<GetResult<byte[]>>> future) {
      return future.thenApplyAsync(listResultDecoder, Utils.SAME_THREAD_EXECUTOR);


Same as above.

    try {
      connectFuture().toCompletableFuture().get(waitTime, unit);
    } catch (final InterruptedException | ExecutionException e) {
      throw new RuntimeException(e);

Should you handle the interrupt correctly?
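One common way to address this (a sketch, not the PR's code; the method name is illustrative) is to restore the interrupt flag before wrapping, so that callers further up the stack can still observe that the thread was interrupted.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AwaitSketch {

  static <T> T awaitOrFail(CompletionStage<T> future, long waitTime, TimeUnit unit) {
    try {
      return future.toCompletableFuture().get(waitTime, unit);
    } catch (final InterruptedException e) {
      // Restore the interrupt status instead of silently swallowing it.
      Thread.currentThread().interrupt();
      throw new RuntimeException(e);
    } catch (final ExecutionException | TimeoutException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    // Completed future: returns immediately with the value.
    System.out.println(awaitOrFail(CompletableFuture.completedFuture("ok"), 1, TimeUnit.SECONDS));
  }
}
```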

4 participants