Description
If using IMap.set with values larger than 1 MByte, the limiting factor for performance should be the network bandwidth. On Ubuntu 18.04 in VirtualBox on an i5-6300U, I first checked the achievable TCP performance on localhost with "qperf localhost tcp_bw tcp_lat":
bw = 4.26 GB/sec
latency = 56 us
A good middleware should be able to transfer big key-value pairs at 95% of the TCP bandwidth (e.g. a DDS implementation from RTI).
With the C++ client, using the simple IMap<string, vector<uint8_t> >.set(key, value) method, I got a reproducible 1.4 MBytes/sec for values from 1 to 16 MBytes in size (see attachment).
First, I was curious whether Hazelcast as a whole has such low performance, so I ran the same test with the Java client against the same Hazelcast server instance. The throughput was somewhat unstable, but in the range of 200 MBytes/sec. That is still only about 5% of the available network bandwidth, but a factor of 140 higher than the C++ implementation.
Interestingly, IMap.get in C++ performs better, at around 90 MBytes/sec.
Among the existing C++ IMap tests I have not found one that checks performance specifically for larger key-value pairs; it might be a good idea to add one (see the sketch below).
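For reference, here is a minimal sketch of the measurement loop described above, which could also serve as a starting point for such a test. It assumes the 3.x C++ client API (ClientConfig, HazelcastClient, getMap) and uses an illustrative map name "perfMap" and illustrative value sizes; the attached main.cpp is the authoritative version of the test.

```cpp
// Rough sketch of the set/get throughput measurement (API names assume the 3.x C++ client).
#include <hazelcast/client/HazelcastClient.h>

#include <chrono>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

int main() {
    hazelcast::client::ClientConfig config;            // default config: connect to localhost
    hazelcast::client::HazelcastClient client(config);
    // "perfMap" is an arbitrary map name chosen for this sketch.
    auto map = client.getMap<std::string, std::vector<uint8_t> >("perfMap");

    for (std::size_t sizeMB = 1; sizeMB <= 16; sizeMB *= 2) {
        std::vector<uint8_t> value(sizeMB * 1024 * 1024, 0x42);
        const std::string key = "key-" + std::to_string(sizeMB);

        auto t0 = std::chrono::steady_clock::now();
        map.set(key, value);                           // write path under test
        auto t1 = std::chrono::steady_clock::now();
        auto read = map.get(key);                      // read path for comparison
        auto t2 = std::chrono::steady_clock::now();
        (void)read;                                    // result not used further in this sketch

        const double setSec = std::chrono::duration<double>(t1 - t0).count();
        const double getSec = std::chrono::duration<double>(t2 - t1).count();
        std::cout << sizeMB << " MB: set " << sizeMB / setSec
                  << " MB/s, get " << sizeMB / getSec << " MB/s" << std::endl;
    }

    client.shutdown();
    return 0;
}
```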
The C++ library was built with:
cmake .. -DHZ_LIB_TYPE=STATIC -DHZ_BIT=64 -DCMAKE_BUILD_TYPE=Release
In the attachments:
- main.cpp with the C++ test
- Client.java with the Java test (from code-samples)
- cppIMapPerformance.txt with the columns messageSize, setDurationInMicroseconds, setThroughput, getDurationInMicroseconds, getThroughput
- javaIMapPerformance.txt with the same columns for Java, but with millisecond resolution
- javaIMapErrOutput.txt with the output from the error stream during the test run
cppIMapPerformance.txt
javaIMapErrOutput.txt
javaIMapPerformance.txt
Client_java.txt
main_cpp.txt