Optimize memory: Support shrinking in ConcurrentLongLongPairHashMap #3061

Merged
Changes from 5 commits (24 commits total)

@@ -48,6 +48,7 @@ public class ConcurrentLongLongPairHashMap {
private static final long ValueNotFound = -1L;

private static final float MapFillFactor = 0.66f;
+ private static final float MapIdleFactor = 0.25f;
Contributor:

I think we should be careful to avoid constant flickering between shrink and expand; a smaller threshold here would limit that. Maybe 0.15?

Contributor Author:

OK
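An editorial aside, not from the PR: a rough arithmetic sketch of the hysteresis band these two constants create, assuming expansion doubles the capacity and shrinking halves it, as in this diff. The 0.15f value is the reviewer's suggested alternative.

public class HysteresisSketch {
    public static void main(String[] args) {
        int capacity = 256;
        int sizeAtExpand = (int) (capacity * 0.66f); // ~168 entries trigger expansion
        int newCapacity = capacity * 2;              // 512 after doubling

        // Right after doubling, occupancy is about 0.66 / 2 = 0.33, so any
        // idle factor >= 0.33 would immediately re-qualify the map for a
        // shrink; the gap below 0.33 is the anti-flicker margin.
        System.out.printf("occupancy after expand: %.2f%n",
                (float) sizeAtExpand / newCapacity);          // ~0.33
        System.out.printf("entries to lose before shrink at 0.25: %d%n",
                sizeAtExpand - (int) (newCapacity * 0.25f));  // ~40
        System.out.printf("entries to lose before shrink at 0.15: %d%n",
                sizeAtExpand - (int) (newCapacity * 0.15f));  // ~92
    }
}

With 0.15 the map has to shed more than half of its live entries after an expansion before it shrinks back, trading slower memory reclamation for fewer resize flip-flops.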


private static final int DefaultExpectedItems = 256;
private static final int DefaultConcurrencyLevel = 16;
@@ -228,14 +229,16 @@ private static final class Section extends StampedLock {
private volatile int capacity;
private volatile int size;
private int usedBuckets;
- private int resizeThreshold;
+ private int resizeThresholdUp;
+ private int resizeThresholdBelow;

Section(int capacity) {
this.capacity = alignToPowerOfTwo(capacity);
this.table = new long[4 * this.capacity];
this.size = 0;
this.usedBuckets = 0;
- this.resizeThreshold = (int) (this.capacity * MapFillFactor);
+ this.resizeThresholdUp = (int) (this.capacity * MapFillFactor);
+ this.resizeThresholdBelow = (int) (this.capacity * MapIdleFactor);
Arrays.fill(table, EmptyKey);
}

@@ -336,9 +339,10 @@ boolean put(long key1, long key2, long value1, long value2, int keyHash, boolean
bucket = (bucket + 4) & (table.length - 1);
}
} finally {
- if (usedBuckets > resizeThreshold) {
+ if (usedBuckets > resizeThresholdUp) {
try {
- rehash();
+ // Expand the hashmap
+ rehash(capacity * 2);
Contributor:

The existing "double the size" strategy was probably not the best one for all use cases either, as it can end up wasting a considerable amount of memory in empty buckets.

We could leave it configurable (both the step up and down), to accommodate different needs.

Contributor Author:

@merlimat like this?

float upFactor = 0.5f;
float downFactor = 0.5f;

// Expand the hashmap
rehash((int) (capacity * (1 + upFactor)));

// Shrink the hashmap
rehash((int) (capacity * (1 - downFactor)));

Contributor Author:

> Also we could make the thresholds and step up/down factor configurable.

What do the thresholds mean here? Are you referring to the MapFillFactor and MapIdleFactor variables? I agree with making these two variables configurable. @merlimat
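For illustration only, one hypothetical shape this configurability could take. Every name below (the class and its fields) is invented for the sketch; none of it is the project's actual API, and the defaults simply mirror the constants in this diff.

final class ResizePolicy {
    float mapFillFactor = 0.66f;  // expand when usedBuckets exceeds this fraction
    float mapIdleFactor = 0.25f;  // shrink when size drops below this fraction
    float expandFactor = 2.0f;    // newCapacity = capacity * expandFactor
    float shrinkFactor = 2.0f;    // newCapacity = capacity / shrinkFactor

    int capacityAfterExpand(int capacity) {
        return (int) (capacity * expandFactor);
    }

    int capacityAfterShrink(int capacity) {
        return (int) (capacity / shrinkFactor);
    }
}

Whatever shape it takes, the result would still need to be clamped to a power of two (via alignToPowerOfTwo, as in the constructor above) and to some floor so a near-empty map does not shrink away entirely.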

} finally {
unlockWrite(stamp);
}
@@ -376,7 +380,16 @@ private boolean remove(long key1, long key2, long value1, long value2, int keyHa
}

} finally {
- unlockWrite(stamp);
+ if (size < resizeThresholdBelow) {
+ try {
+ // shrink the hashmap
+ rehash(capacity / 2);
+ } finally {
+ unlockWrite(stamp);
+ }
+ } else {
+ unlockWrite(stamp);
+ }
}
}

@@ -388,6 +401,18 @@ private void cleanBucket(int bucket) {
table[bucket + 2] = ValueNotFound;
table[bucket + 3] = ValueNotFound;
--usedBuckets;

+ // Reduce unnecessary rehash
Contributor:

Suggested change:
- // Reduce unnecessary rehash
+ // Cleanup all the buckets that were in `DeletedKey` state, so that we can reduce unnecessary expansions

Contributor Author:

OK

+ bucket = (bucket - 4) & (table.length - 1);
+ while (table[bucket] == DeletedKey) {
+ table[bucket] = EmptyKey;
+ table[bucket + 1] = EmptyKey;
+ table[bucket + 2] = ValueNotFound;
+ table[bucket + 3] = ValueNotFound;
+ --usedBuckets;
+
+ bucket = (bucket - 4) & (table.length - 1);
+ }
} else {
table[bucket] = DeletedKey;
table[bucket + 1] = DeletedKey;
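An editorial note on the backward sweep added above: with open addressing, a DeletedKey tombstone normally has to stay in place so that probe chains passing over it are not broken, but once the slot after a run of tombstones becomes empty, that run can no longer sit inside any probe chain and can safely be turned back into EmptyKey. A minimal standalone sketch of the same idea, simplified to one long per bucket (the real table stores four per bucket):

final class TombstoneSweepSketch {
    static final long EMPTY = 0L;
    static final long DELETED = -2L;

    // table.length is assumed to be a power of two, as in the real map.
    static int sweepTombstones(long[] table, int freedBucket, int usedBuckets) {
        table[freedBucket] = EMPTY; // the slot we just freed ends the probe chain
        --usedBuckets;
        // Walk backwards: tombstones sitting directly before an empty slot can
        // never be crossed by a live probe sequence, so reclaim the whole run.
        int bucket = (freedBucket - 1) & (table.length - 1);
        while (table[bucket] == DELETED) {
            table[bucket] = EMPTY;
            --usedBuckets;
            bucket = (bucket - 1) & (table.length - 1);
        }
        return usedBuckets;
    }
}

Reclaiming tombstones this way keeps usedBuckets from creeping toward resizeThresholdUp under churny workloads, which is exactly the "unnecessary expansions" the suggested comment refers to.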
@@ -453,9 +478,7 @@ public void forEach(BiConsumerLongPair processor) {
}
}

- private void rehash() {
- // Expand the hashmap
- int newCapacity = capacity * 2;
+ private void rehash(int newCapacity) {
long[] newTable = new long[4 * newCapacity];
Arrays.fill(newTable, EmptyKey);

@@ -475,7 +498,8 @@ private void rehash() {
// Capacity needs to be updated after the values, so that we won't see
// a capacity value bigger than the actual array size
capacity = newCapacity;
- resizeThreshold = (int) (capacity * MapFillFactor);
+ resizeThresholdUp = (int) (capacity * MapFillFactor);
+ resizeThresholdBelow = (int) (capacity * MapIdleFactor);
}
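The middle of rehash is collapsed in this view. Purely as an editorial reconstruction (the actual elided lines may differ), a parameterized rehash over this four-longs-per-bucket layout would plausibly look like the following, reusing the insertKeyValueNoLock helper whose signature appears below:

private void rehash(int newCapacity) {
    long[] newTable = new long[4 * newCapacity];
    Arrays.fill(newTable, EmptyKey);

    // Re-insert only live entries; DeletedKey tombstones are dropped here,
    // so after a rehash usedBuckets equals size again.
    for (int i = 0; i < table.length; i += 4) {
        long storedKey1 = table[i];
        long storedKey2 = table[i + 1];
        if (storedKey1 != EmptyKey && storedKey1 != DeletedKey) {
            insertKeyValueNoLock(newTable, newCapacity, storedKey1, storedKey2,
                    table[i + 2], table[i + 3]);
        }
    }

    table = newTable;
    usedBuckets = size;
    // Capacity needs to be updated after the values, so that we won't see
    // a capacity value bigger than the actual array size
    capacity = newCapacity;
    resizeThresholdUp = (int) (capacity * MapFillFactor);
    resizeThresholdBelow = (int) (capacity * MapIdleFactor);
}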

private static void insertKeyValueNoLock(long[] table, int capacity, long key1, long key2, long value1,
@@ -113,6 +113,22 @@ public void testRemove() {
assertTrue(map.isEmpty());
}

+ @Test
+ public void testExpandAndShrink() {
+ ConcurrentLongLongPairHashMap map = new ConcurrentLongLongPairHashMap(2, 1);
+ assertTrue(map.put(1, 1, 11, 11));
+ assertTrue(map.put(2, 2, 22, 22));
+ assertTrue(map.put(3, 3, 33, 33));
+
+ // expand hashmap
+ assertTrue(map.capacity() == 8);
+
+ assertTrue(map.remove(1, 1, 11, 11));
+ assertTrue(map.remove(2, 2, 22, 22));
+ // shrink hashmap
+ assertTrue(map.capacity() == 4);
+ }
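A rough editorial walk-through of the capacities this test expects; the initial capacity of 4 is an assumption about how the constructor sizes its single section from expectedItems = 2 via alignToPowerOfTwo:

// start:         capacity 4, resizeThresholdUp = (int) (4 * 0.66f) = 2
// three puts:    usedBuckets 3 > 2   -> rehash(4 * 2)  => capacity() == 8
// at capacity 8: resizeThresholdBelow = (int) (8 * 0.25f) = 2
// first remove:  size drops to 2; 2 < 2 is false -> no shrink yet
// second remove: size drops to 1; 1 < 2 -> rehash(8 / 2) => capacity() == 4

Both removes target live entries, so each returns true; the shrink only fires once size falls strictly below the threshold.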

@Test
public void testNegativeUsedBucketCount() {
ConcurrentLongLongPairHashMap map = new ConcurrentLongLongPairHashMap(16, 1);