ConcurrentGC Heap/Tenure Expand Fix
Fixes eclipse-openj9/openj9#13206

This change switches the order between updating the INIT table (the
tuneToHeap call) and setting the mark bits in place. This guarantees
that the Mark Map (MM) bits are set in place even if we fail to update
the INIT table (or don't update it in time) when concurrent kick-off
(KO) overlaps with a Tenure expand.

The issue being resolved is similar to the one outlined in a previous
issue
[eclipse-openj9/openj9#8020 (comment)],
where Mark Map bits were missed for init. Here we have another, similar
timing hole: one thread kicks off concurrent, Concurrent_OFF ->
(Concurrent_INIT or INIT Complete), while the expanding thread is in
the middle of heapReconfigured.

With the original ordering of heapReconfigured, we first attempt to set
the mark bits in place, but only when Concurrent is ON. With
Concurrent_OFF, we delay setting the bits until Concurrent_INIT, which
requires an update to the init table. Hence, for Concurrent_OFF we
forgo setting the bits and instead expect `tuneToHeap` to call
`determineInitRanges` to update the init ranges table. The issue occurs
when we don't set the mark bits in place (because Concurrent is OFF)
but concurrent starts after the check and prior to the init ranges
being updated. In that window we either don't update the init table at
all (since init is already in progress) or fail to update it in time.
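To make the window concrete, here is a minimal, self-contained C++ sketch of the original ordering (a hypothetical illustration only, not OMR code; the function and flag names are made up, and the overlapping kick-off is simulated inline rather than on a second thread):

```cpp
#include <atomic>
#include <cstdio>

enum Mode { CONCURRENT_OFF, CONCURRENT_INIT, CONCURRENT_ON };

static std::atomic<Mode> executionMode{CONCURRENT_OFF};
static bool markBitsSetForNewRange = false;
static bool newRangeInInitTable = false;

/* Old ordering: mark-bit check first (only acts if concurrent is already ON),
 * init-table update second (stands in for tuneToHeap -> determineInitRanges). */
static void oldHeapReconfigured()
{
	if (executionMode.load() > CONCURRENT_OFF) {
		markBitsSetForNewRange = true;      /* stands in for setMarkBitsInRange() */
	}

	/* Timing hole: another thread kicks off concurrent right here,
	 * Concurrent_OFF -> Concurrent_INIT (simulated inline). */
	executionMode.store(CONCURRENT_INIT);

	if (executionMode.load() == CONCURRENT_OFF) {
		newRangeInInitTable = true;         /* init-table update happens in time */
	}
	/* else: init is already in progress, so the update is skipped or too late. */
}

int main()
{
	oldHeapReconfigured();
	std::printf("mark bits set: %d, range in init table: %d\n",
	            markBitsSetForNewRange, newRangeInInitTable);
	/* Prints "mark bits set: 0, range in init table: 0" - the expanded Tenure
	 * range is missed by mark-map init, which is the bug being fixed. */
	return 0;
}
```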

With this reordering of heapReconfigured, we try to set the mark bits
in place only after the check that updates the init range table. As a
result, any state transition that leaves the init table not updated (or
updated too late) is caught and accounted for, because the bits are set
afterwards.
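Continuing the same toy model, here is a sketch of the reordered flow (again hypothetical, not the actual diff below): because the execution-mode check now runs after the init-table attempt, a kick-off landing between the two is still covered by setting the bits in place.

```cpp
#include <atomic>
#include <cstdio>

enum Mode { CONCURRENT_OFF, CONCURRENT_INIT, CONCURRENT_ON };

static std::atomic<Mode> executionMode{CONCURRENT_OFF};
static bool markBitsSetForNewRange = false;
static bool newRangeInInitTable = false;

/* New ordering: init-table update attempt first, mark-bit check last. */
static void newHeapReconfigured()
{
	if (executionMode.load() == CONCURRENT_OFF) {
		newRangeInInitTable = true;     /* tuneToHeap/determineInitRanges path */
	}

	/* The same kick-off can still land here (simulated inline)... */
	executionMode.store(CONCURRENT_INIT);

	/* ...but the mode check now runs after it, so the bits get set in place. */
	if (executionMode.load() > CONCURRENT_OFF) {
		markBitsSetForNewRange = true;  /* stands in for setMarkBitsInRange() */
	}
}

int main()
{
	newHeapReconfigured();
	std::printf("mark bits set: %d, range in init table: %d\n",
	            markBitsSetForNewRange, newRangeInInitTable);
	/* Prints "mark bits set: 1, ...": even if the real init-table update came
	 * too late, the range is covered because the bits are set after the KO. */
	return 0;
}
```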

Signed-off-by: Salman Rana <salman.rana@ibm.com>
RSalman committed Aug 11, 2021
1 parent 9ebd8f0 commit 02187e1
Showing 2 changed files with 21 additions and 17 deletions.
31 changes: 15 additions & 16 deletions gc/base/standard/ConcurrentGC.cpp
@@ -3366,8 +3366,6 @@ MM_ConcurrentGC::heapRemoveRange(MM_EnvironmentBase *env, MM_MemorySubSpace *sub
 void
 MM_ConcurrentGC::heapReconfigured(MM_EnvironmentBase *env, HeapReconfigReason reason, MM_MemorySubSpace *subspace, void *lowAddress, void *highAddress)
 {
-	Assert_MM_true(reason != HEAP_RECONFIG_NONE);
-
 	/* _rebuildInitWorkForAdd/_rebuildInitWorkForRemove are not sufficient for determining on how to proceed with headReconfigured
 	 * We end up here in the following scenarios:
 	 * 1) During heap expand - after heapAddRange
@@ -3381,22 +3379,10 @@ MM_ConcurrentGC::heapReconfigured(MM_EnvironmentBase *env, HeapReconfigReason re
 	 * Similarly, we can end up here after scavenger tilt with any of the flags set.
 	 */
 
-	if ((lowAddress != NULL) && (highAddress != NULL)) {
-		Assert_MM_true(_rebuildInitWorkForAdd && (reason == HEAP_RECONFIG_EXPAND));
-		/* If we are within a concurrent cycle we need to initialize the mark bits
-		 * for new region of heap now
-		 */
-		if (CONCURRENT_OFF < _stats.getExecutionMode()) {
-			/* If subspace is concurrently collectible then clear bits otherwise
-			 * set the bits on to stop tracing INTO this area during concurrent
-			 * mark cycle.
-			 */
-			_markingScheme->setMarkBitsInRange(env, lowAddress, highAddress, subspace->isConcurrentCollectable());
-		}
-	}
+	Assert_MM_true(reason != HEAP_RECONFIG_NONE);
 
 	/* If called outside a global collection for a heap expand/contract.. */
-	if((reason == HEAP_RECONFIG_CONTRACT) || (reason == HEAP_RECONFIG_EXPAND)) {
+	if ((reason == HEAP_RECONFIG_CONTRACT) || (reason == HEAP_RECONFIG_EXPAND)) {
 		Assert_MM_true(_rebuildInitWorkForAdd || _rebuildInitWorkForRemove);
 		if (!_stwCollectionInProgress) {
 			/* ... and a concurrent cycle has not yet started then we
@@ -3418,6 +3404,19 @@ MM_ConcurrentGC::heapReconfigured(MM_EnvironmentBase *env, HeapReconfigReason re
 		}
 	}
 
+	if ((lowAddress != NULL) && (highAddress != NULL)) {
+		Assert_MM_true(reason == HEAP_RECONFIG_EXPAND);
+		/* If we are within a concurrent cycle we need to initialize the mark bits
+		 * for new region of heap now
+		 */
+		if (CONCURRENT_OFF < _stats.getExecutionMode()) {
+			/* If subspace is concurrently collectible then clear bits otherwise
+			 * set the bits on to stop tracing INTO this area during concurrent
+			 * mark cycle.
+			 */
+			_markingScheme->setMarkBitsInRange(env, lowAddress, highAddress, subspace->isConcurrentCollectable());
+		}
+	}
 
 	/* Expand any superclass structures */
 	MM_ParallelGlobalGC::heapReconfigured(env, reason, subspace, lowAddress, highAddress);
7 changes: 6 additions & 1 deletion gc/base/standard/ConcurrentGCIncrementalUpdate.cpp
@@ -120,7 +120,12 @@ MM_ConcurrentGCIncrementalUpdate::initialize(MM_EnvironmentBase *env)
 bool
 MM_ConcurrentGCIncrementalUpdate::heapAddRange(MM_EnvironmentBase *env, MM_MemorySubSpace *subspace, uintptr_t size, void *lowAddress, void *highAddress)
 {
-	bool clearCards = (CONCURRENT_OFF < _stats.getExecutionMode()) && subspace->isConcurrentCollectable();
+	/* The CS check is added to preemptively clear new cards during expansion when in the middle of a CS cycle, even if CM is off. A CS cycle during
+	 * expansion can overlap with KO, in which case we can fail to update the init table (or update it too late) when we KO after the clearCards check.
+	 * It is known that this will cause cards to be cleared unnecessarily during the expand when CM is off + CS is active. This is fine because clearing cards
+	 * (memset) is negligible: each 512 bytes of expanded heap requires a memset of 1 byte, and CS shouldn't be expanding Tenure too frequently.
+	 */
+	bool clearCards = ((CONCURRENT_OFF < _stats.getExecutionMode()) || _extensions->isConcurrentScavengerInProgress()) && subspace->isConcurrentCollectable();
 
 	/* Expand any superclass structures including mark bits*/
 	bool result = MM_ConcurrentGC::heapAddRange(env, subspace, size, lowAddress, highAddress);
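A quick worked example of the cost argument in the new comment above (hypothetical numbers; the only figure taken from the comment is the 512-bytes-of-heap-per-card-byte ratio):

```cpp
#include <cstdio>

int main()
{
	const long bytesPerCardByte = 512;           /* ratio cited in the comment */
	const long expandBytes = 64L * 1024 * 1024;  /* e.g. a 64 MB Tenure expand */
	const long cardBytes = expandBytes / bytesPerCardByte;
	std::printf("card bytes to memset per expand: %ld KB\n", cardBytes / 1024);
	/* 64 MB / 512 = 128 KB of memset per expand - negligible, and Concurrent
	 * Scavenger should not be expanding Tenure frequently. */
	return 0;
}
```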
