Support more efficient compression codecs #175

Open
adenysenko opened this issue May 30, 2012 · 8 comments

@adenysenko

commented May 30, 2012

Currently `DefaultSerializer` supports only the slow `java.util.zip` codec.
Snappy, for example, performs much better.

Please check out this link: https://github.com/ning/jvm-compressor-benchmark/wiki
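For illustration, here is a minimal sketch of what a pluggable codec abstraction could look like, with the current `java.util.zip` behaviour next to a Snappy-based alternative. The `Codec` interface and class names are made up for this sketch (they are not Hazelcast API), and the Snappy calls assume the snappy-java (`org.xerial.snappy`) library used in the benchmark linked above.

```java
// Hypothetical pluggable codec abstraction -- interface and class names are
// illustrative only, not part of Hazelcast. Snappy calls use snappy-java.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

import org.xerial.snappy.Snappy;

interface Codec {
    byte[] compress(byte[] data) throws IOException;
    byte[] decompress(byte[] data) throws IOException;
}

/** Roughly what java.util.zip-based compression looks like today. */
class DeflateCodec implements Codec {
    @Override
    public byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(data.length);
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    @Override
    public byte[] decompress(byte[] data) throws IOException {
        Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream(data.length * 2);
        byte[] buf = new byte[4096];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new IOException(e);
        } finally {
            inflater.end();
        }
        return out.toByteArray();
    }
}

/** A drop-in alternative backed by snappy-java. */
class SnappyCodec implements Codec {
    @Override
    public byte[] compress(byte[] data) throws IOException {
        return Snappy.compress(data);
    }

    @Override
    public byte[] decompress(byte[] data) throws IOException {
        return Snappy.uncompress(data);
    }
}
```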

@cherron

commented Jun 15, 2012

+1

@ib84

commented May 27, 2013

+1

@Tembrel

commented May 27, 2013

Is there any reason to believe that the compression and decompression themselves are a bottleneck? My intuition is that they are completely dominated by network I/O costs.

@ib84

commented May 27, 2013

@Tembrel Compared to network latency, the time spent on compression should be acceptable. It's also not only about speed: space is a factor too, and using less space can in turn help speed, since smaller payloads can mean fewer and shorter GC pauses.

@Tembrel

commented May 27, 2013

Right, so this is the kind of thing that needs to be measured in context before jumping in (and adding dependencies). What evidence do you have that gzip compression performance is unacceptable in the context of Hazelcast serialization/deserialization?
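For what it's worth, a rough micro-benchmark along these lines could provide that evidence either way. The sketch below is deliberately crude (no JMH, synthetic payload, single-shot timing after a short warm-up), so treat any numbers as indicative only; it assumes the snappy-java library is on the classpath.

```java
// A crude micro-benchmark sketch comparing Deflater against Snappy.
// Numbers are indicative only; use a proper harness for real measurements.
import java.io.ByteArrayOutputStream;
import java.util.Random;
import java.util.zip.Deflater;

import org.xerial.snappy.Snappy;

public class CompressionBench {

    static byte[] deflate(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(data.length);
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Mildly compressible payload: 1 MB of bytes drawn from a small alphabet.
        byte[] payload = new byte[1024 * 1024];
        Random random = new Random(42);
        for (int i = 0; i < payload.length; i++) {
            payload[i] = (byte) ('a' + random.nextInt(8));
        }

        // Warm both code paths up a little before timing.
        for (int i = 0; i < 10; i++) {
            deflate(payload);
            Snappy.compress(payload);
        }

        long t0 = System.nanoTime();
        byte[] deflated = deflate(payload);
        long deflateNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        byte[] snappied = Snappy.compress(payload);
        long snappyNanos = System.nanoTime() - t1;

        System.out.printf("deflate: %d bytes in %.1f ms%n",
                deflated.length, deflateNanos / 1e6);
        System.out.printf("snappy:  %d bytes in %.1f ms%n",
                snappied.length, snappyNanos / 1e6);
    }
}
```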

@ib84

commented May 27, 2013

Nobody was asking for concrete dependencies to be added. But it's not nice that Hazelcast has a tendency to hardcode things all over the place!

We want to configure for ourselves what we use, and that holds not only for compression; serialization is probably even more important. I also strongly dislike that range queries require Comparable objects but disallow non-Comparables plus a Comparator. It hurts to serialize Comparable wrappers with a Comparator inside each of them... I proposed an alternative on the forum, but got no answer to my questions. Another thing that makes me seriously consider switching to Infinispan is that it doesn't rely (partly) on hashCode the way Hazelcast does: it lets you override it even for classes you don't control, by registering an "Equivalence" typeclass instance for each type. With Hazelcast that means yet another unnecessary wrapper and more wasted bytes.
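To make the configurability point concrete, here is roughly what user-controlled serializer registration could look like with a StreamSerializer-style API (the shape Hazelcast 3.x exposes); exact class and method names may differ per version, and `Person`/`PersonSerializer` are made-up types for illustration.

```java
// Illustrative only: registering a custom serializer so the user, not the
// framework, chooses the wire format (and any compression codec inside it).
// Person and PersonSerializer are hypothetical example types.
import java.io.IOException;

import com.hazelcast.config.Config;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.StreamSerializer;

class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class PersonSerializer implements StreamSerializer<Person> {
    @Override
    public int getTypeId() {
        return 1234;                      // any positive, application-unique id
    }

    @Override
    public void write(ObjectDataOutput out, Person person) throws IOException {
        out.writeUTF(person.name);        // apply whatever codec you prefer here
    }

    @Override
    public Person read(ObjectDataInput in) throws IOException {
        return new Person(in.readUTF());
    }

    @Override
    public void destroy() {
    }
}

public class CustomSerializationExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.getSerializationConfig().addSerializerConfig(
                new SerializerConfig()
                        .setTypeClass(Person.class)
                        .setImplementation(new PersonSerializer()));

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getMap("people").put("alice", new Person("Alice"));
    }
}
```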

@Tembrel

commented May 27, 2013

I touched a nerve, it seems.

Getting back to this issue, though: If there's evidence that the use of compression is significantly adversely affecting Hazelcast performance, then by all means post it here.

@subnetmarco

commented Jul 31, 2013

+1
