Improve performance of ConcurrentReferenceHashMap creation #2051
I've discovered that on some workloads, instantiation of `ConcurrentReferenceHashMap` takes a significant amount of time (e.g. when loading interface-based projections in Spring Data JPA). Here's a patch that reduces map instantiation time:
The benchmark is very simple:
Key ideas behind the changes:

- `initialSize` is the same at each loop iteration in the constructor of `ConcurrentReferenceHashMap`, so the expression `1 << calculateShift(initialCapacity, MAXIMUM_SEGMENT_SIZE)` can be hoisted out of the `Segment` constructor, and then out of the loop together with the `resizeThreshold` calculation.
- `Segment.references` is declared `volatile`, so I suspect that when a new segment is written into the `ConcurrentReferenceHashMap.segments` array, the `segments` field must be reloaded at each iteration due to happens-before semantics; the same applies to `this.segments.length`. To avoid this, I propose populating `ConcurrentReferenceHashMap.segments` as a local variable and then writing it to the field once. Even if my assumption about happens-before is wrong, reducing the number of field accesses helps us in interpreter mode.
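The two ideas above can be sketched in a simplified, hypothetical constructor. This is not the actual Spring Framework code: the class, the `calculateShift` helper, and the capacity math here are stand-ins that only illustrate hoisting the loop-invariant `initialSize`/`resizeThreshold` computation out of the `Segment` constructor and publishing the `segments` array via a single field write.

```java
// Simplified sketch; names mirror ConcurrentReferenceHashMap but the
// implementation details are illustrative assumptions, not the real code.
class SimplifiedMap<K, V> {

    private static final int MAXIMUM_SEGMENT_SIZE = 1 << 30;
    private static final float LOAD_FACTOR = 0.75f;

    final Segment[] segments;

    SimplifiedMap(int initialCapacity, int concurrencyLevel) {
        int shift = calculateShift(concurrencyLevel, 1 << 16);
        int size = 1 << shift;

        // Idea 1: initialSize and resizeThreshold are identical for every
        // segment, so compute them once here instead of recomputing the
        // shift expression inside each Segment constructor call.
        int roundedUpSegmentCapacity = (int) ((initialCapacity + size - 1L) / size);
        int initialSize = 1 << calculateShift(roundedUpSegmentCapacity, MAXIMUM_SEGMENT_SIZE);
        int resizeThreshold = (int) (initialSize * LOAD_FACTOR);

        // Idea 2: populate a local array and publish it to the field with a
        // single write, so the loop reads a local variable rather than
        // re-reading the field (and its length) on every iteration.
        Segment[] segments = new Segment[size];
        for (int i = 0; i < segments.length; i++) {
            segments[i] = new Segment(initialSize, resizeThreshold);
        }
        this.segments = segments;
    }

    // Smallest shift s such that (1 << s) >= minimumValue, capped by maximumValue.
    static int calculateShift(int minimumValue, int maximumValue) {
        int shift = 0;
        int value = 1;
        while (value < minimumValue && value < maximumValue) {
            value <<= 1;
            shift++;
        }
        return shift;
    }

    static final class Segment {
        volatile Object[] references;  // volatile, as in the original class
        final int resizeThreshold;

        Segment(int initialSize, int resizeThreshold) {
            // The segment now receives precomputed values instead of
            // deriving them itself.
            this.references = new Object[initialSize];
            this.resizeThreshold = resizeThreshold;
        }
    }
}
```

Whether the extra field reloads actually occur is a JIT/memory-model question as the author notes, but the local-variable form is never worse and avoids the repeated field accesses in interpreted execution.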