Too big memory overhead #10

Closed
hazelcast opened this issue Mar 23, 2012 · 1 comment

Labels
Source: Internal PR or issue was opened by an employee · Type: Enhancement

Comments

@hazelcast
Collaborator

What steps will reproduce the problem?
Run the following program:

 import com.hazelcast.core.Hazelcast;
 import com.hazelcast.core.IMap;

 // Test parameters: the reported run used RECORD_SIZE = 0;
 // the RECORDS value here is illustrative.
 final int RECORDS = 700000;
 final int RECORD_SIZE = 0;

 IMap<Integer, byte[]> map = Hazelcast.getMap("test");

 System.gc();
 long initialMemoryUsage = Runtime.getRuntime().totalMemory()
         - Runtime.getRuntime().freeMemory();
 System.out.println(initialMemoryUsage);

 for (int i = 0; i < RECORDS; i++) {
     map.put(i, new byte[RECORD_SIZE]);
 }

 System.gc();
 long memoryUsage = Runtime.getRuntime().totalMemory()
         - Runtime.getRuntime().freeMemory() - initialMemoryUsage;
 System.out.println(memoryUsage);

 long usagePerRecord = memoryUsage / RECORDS;
 System.out.println("Memory usage per record is " + usagePerRecord + " bytes");

What is the expected output? What do you see instead?
For RECORD_SIZE = 0 the output is "Memory usage per record is 952 bytes".
That is far too much. It should be possible to define a map whose memory
usage does not exceed 100 B/record.

What version of the product are you using? On what operating system?
1.8.4-SNAPSHOT (14.04.2010)
Linux x86_64
Java Sun 1.6.0_19 x86_64

Please provide any additional information below.
After commenting out line 1009 in CMap.java (turning off value indexing):
//updateIndexes(record);
memory usage falls to about 680 B/record, but that is still too much.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=255


earlier comments

wojciech.durczynski@comarch.com said, at 2010-04-15T09:36:44.000Z:

For a test I created a map wrapper which groups map entries into buckets. For 700000 records and 10000 buckets, memory usage falls to about 60 B/record. Sadly, put and remove operations are much slower because of transactions (bucket locking).
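
A minimal sketch of that bucketing idea, assuming the 1.x-era static Hazelcast.getMap API; the class name BucketedMap, the bucket count, the map name "buckets", and the per-bucket HashMap layout are illustrative guesses, not the commenter's actual wrapper:

 import com.hazelcast.core.Hazelcast;
 import com.hazelcast.core.IMap;
 import java.util.HashMap;

 // Hypothetical sketch: Hazelcast's per-record overhead is paid once per
 // bucket instead of once per entry, at the cost of rewriting a whole
 // bucket on every put.
 public class BucketedMap {
     private static final int NUM_BUCKETS = 10000;
     private final IMap<Integer, HashMap<Integer, byte[]>> buckets =
             Hazelcast.getMap("buckets");

     private int bucketOf(int key) {
         return key % NUM_BUCKETS; // assumes non-negative keys
     }

     public void put(int key, byte[] value) {
         int b = bucketOf(key);
         buckets.lock(b); // serialize writers on this bucket
         try {
             HashMap<Integer, byte[]> bucket = buckets.get(b);
             if (bucket == null) {
                 bucket = new HashMap<Integer, byte[]>();
             }
             bucket.put(key, value);
             buckets.put(b, bucket); // write the whole bucket back
         } finally {
             buckets.unlock(b);
         }
     }

     public byte[] get(int key) {
         // Unlocked read of a deserialized copy; fine for a sketch.
         HashMap<Integer, byte[]> bucket = buckets.get(bucketOf(key));
         return bucket == null ? null : bucket.get(key);
     }
 }

The read-modify-write under a lock on every put is exactly the slowdown described above: amortized per-entry memory drops, but each write now fetches, mutates, and re-serializes an entire bucket.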

oztalip said, at 2010-05-01T23:25:02.000Z:

With the latest updates we are now down from 952 to 411 bytes! Not enough though; we will keep working on the memory cost.

@mdogan
Contributor

mdogan commented Aug 21, 2013

Overhead reduced significantly in 3.0

@mdogan mdogan closed this as completed Aug 21, 2013
PetroSemeniuk pushed a commit to PetroSemeniuk/hazelcast that referenced this issue Apr 8, 2015
…ckup-operations

STASHDEV-7855 Count the operation that triggered a Backup, for finer measurement of what's causing all our remote operations.
ahmetmircik referenced this issue in ahmetmircik/hazelcast Jun 8, 2015
session-replication example added
@mmedenjak mmedenjak added the Source: Internal PR or issue was opened by an employee label Jan 28, 2020
SeriyBg referenced this issue in SeriyBg/hazelcast Jul 9, 2021