What steps will reproduce the problem?
Run the following program:
final int RECORDS = 700000;      // per the comments below
final int RECORD_SIZE = 0;
Map<Integer, byte[]> map = Hazelcast.getMap("test"); // map name illustrative
System.gc();
long initialMemoryUsage = Runtime.getRuntime().totalMemory() -
        Runtime.getRuntime().freeMemory();
System.out.println(initialMemoryUsage);
for (int i = 0; i < RECORDS; i++) {
    map.put(i, new byte[RECORD_SIZE]);
}
System.gc();
long memoryUsage = Runtime.getRuntime().totalMemory() -
        Runtime.getRuntime().freeMemory() - initialMemoryUsage;
System.out.println(memoryUsage);
long usagePerRecord = memoryUsage / RECORDS;
System.out.println("Memory usage per record is " + usagePerRecord + " bytes");
What is the expected output? What do you see instead?
For RECORD_SIZE=0 the output is "Memory usage per record is 952 bytes".
That is far too large; it should be possible to define a map whose
memory usage does not exceed 100 B/record.
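As a sanity check on the 952-byte figure, the same GC-then-measure technique can be run against a plain java.util.HashMap. This is a rough sketch (class name and structure are mine, not from the report), and the printed number varies with JVM and heap state:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapBaseline {
    private static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int RECORDS = 700000;
        final int RECORD_SIZE = 0;

        System.gc();
        long before = usedMemory();

        Map<Integer, byte[]> map = new HashMap<Integer, byte[]>();
        for (int i = 0; i < RECORDS; i++) {
            map.put(i, new byte[RECORD_SIZE]);
        }

        System.gc();
        long perRecord = (usedMemory() - before) / RECORDS;

        // The value is JVM- and heap-state-dependent; the point is only that
        // a local entry (HashMap.Entry + boxed Integer + empty byte[]) costs
        // far less than the 952 bytes reported for the distributed map.
        System.out.println("HashMap baseline: " + perRecord + " bytes/record");
    }
}
```

The difference between this baseline and the reported figure is the overhead added by the distributed map's record bookkeeping and indexing.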
What version of the product are you using? On what operating system?
1.8.4-SNAPSHOT (14.04.2010)
Linux x86_64
Java Sun 1.6.0_19 x86_64
Please provide any additional information below.
After commenting out line 1009 in CMap.java (turning off value indexing):
//updateIndexes(record);
memory usage drops to about 680 B/record, but that is still too much.
Migrated from http://code.google.com/p/hazelcast/issues/detail?id=255
earlier comments
wojciech.durczynski@comarch.com said, at 2010-04-15T09:36:44.000Z:
For a test I created a map wrapper which groups map entries into buckets. For 700000 records and 10000 buckets, memory usage falls to about 60 B/record. Sadly, put and remove operations are much slower because of transactions (bucket locking).
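The bucketing workaround described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the reporter's actual code; `backing` stands in for the distributed map, and the transactions/locking that caused the slowdown are omitted:

```java
import java.util.HashMap;
import java.util.Map;

// Many logical entries share one physical map entry, so the per-entry
// overhead of the backing map is amortized over a whole bucket.
public class BucketedMap {
    private final Map<Integer, HashMap<Integer, byte[]>> backing;
    private final int buckets;

    public BucketedMap(Map<Integer, HashMap<Integer, byte[]>> backing, int buckets) {
        this.backing = backing;
        this.buckets = buckets;
    }

    public void put(int key, byte[] value) {
        int b = Math.abs(key) % buckets;
        HashMap<Integer, byte[]> bucket = backing.get(b);
        if (bucket == null) {
            bucket = new HashMap<Integer, byte[]>();
        }
        bucket.put(key, value);
        backing.put(b, bucket); // whole bucket is written back (and re-serialized)
    }

    public byte[] get(int key) {
        HashMap<Integer, byte[]> bucket = backing.get(Math.abs(key) % buckets);
        return bucket == null ? null : bucket.get(key);
    }

    public byte[] remove(int key) {
        int b = Math.abs(key) % buckets;
        HashMap<Integer, byte[]> bucket = backing.get(b);
        if (bucket == null) {
            return null;
        }
        byte[] old = bucket.remove(key);
        backing.put(b, bucket);
        return old;
    }
}
```

With 700000 records in 10000 buckets, each physical entry holds about 70 logical records, which is consistent with both the ~60 B/record figure and the slower writes: every put or remove rewrites, and on a distributed map would re-serialize and lock, an entire ~70-record bucket.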
oztalip said, at 2010-05-01T23:25:02.000Z:
With the latest updates we are now down from 952 to 411 bytes! Not enough, though. We will keep working on the memory cost.