8268163: Change the order of fallback full GCs in G1 #4342

Status: Closed · 1 commit
```diff
@@ -1095,6 +1095,17 @@ void G1CollectedHeap::do_full_collection(bool clear_all_soft_refs) {
                       do_maximum_compaction);
 }
 
+bool G1CollectedHeap::upgrade_to_full_collection() {
+  log_info(gc, ergo)("Attempting full compaction clearing soft references");
+  bool success = do_full_collection(false /* explicit gc */,
+                                    true /* clear_all_soft_refs */,
+                                    false /* do_maximum_compaction */);
```

Shouldn't this have do_maximum_compaction == true? That's what the change to VM_G1CollectForAllocation expects. In which case, do we really need separate arguments for do_full_collection for clear_all_soft_refs and do_maximum_compaction or could they be collapsed to one maximum_compaction argument?

@kstefanj (Contributor, Author) · Jun 7, 2021

This is currently the only case where we need separate arguments for the two. Previously the "upgraded" Full collections used clear_all_soft_refs == true and do_maximum_compaction == false, so I kept it that way. In some sense I think this is fine: when upgrading, this is the first time we try a Full GC, so leaving dead wood (maximum == false) could still find some space. It is only when we attempt a second Full collection in a row that we set do_maximum_compaction == true. Makes sense, or do you think we should force a "maximum compaction" when upgrading?

@tschatzl (Contributor) · Jun 8, 2021

Fwiw, I am good with either, i.e. doing a maximally compacting gc or not; both will likely free some space, but the non-maximally compacting one should be faster. Maybe this change should just remove the third full gc, and a separate change can think about the upgrade policy. I already filed https://bugs.openjdk.java.net/browse/JDK-8268393 for such work.

@kstefanj (Contributor, Author) · Jun 8, 2021

At least with this change we will never do 3 consecutive Full GCs in the same VM operation.

```diff
+  // do_full_collection only fails if blocked by GC locker and that can't
+  // be the case here since we only call this when already completed one gc.
+  assert(success, "invariant");
```
@tschatzl (Contributor) · Jun 8, 2021

... this when we already completed a gc.

```diff
+  return success;
+}
+
 void G1CollectedHeap::resize_heap_if_necessary() {
   assert_at_safepoint_on_vm_thread();
```

```diff
@@ -1112,7 +1123,7 @@ void G1CollectedHeap::resize_heap_if_necessary() {
 
 HeapWord* G1CollectedHeap::satisfy_failed_allocation_helper(size_t word_size,
                                                             bool do_gc,
-                                                            bool clear_all_soft_refs,
+                                                            bool maximum_compaction,
                                                             bool expect_null_mutator_alloc_region,
                                                             bool* gc_succeeded) {
   *gc_succeeded = true;
```
```diff
@@ -1134,13 +1145,17 @@ HeapWord* G1CollectedHeap::satisfy_failed_allocation_helper(size_t word_size,
   }
 
   if (do_gc) {
-    // When clear_all_soft_refs is set we want to do a maximum compaction
-    // not leaving any dead wood.
-    bool do_maximum_compaction = clear_all_soft_refs;
     // Expansion didn't work, we'll try to do a Full GC.
+    // If maximum_compaction is set we clear all soft references and don't
+    // allow any dead wood to be left on the heap.
+    if (maximum_compaction) {
+      log_info(gc, ergo)("Attempting maximum full compaction clearing soft references");
+    } else {
+      log_info(gc, ergo)("Attempting full compaction");
+    }
     *gc_succeeded = do_full_collection(false, /* explicit_gc */
-                                       clear_all_soft_refs,
-                                       do_maximum_compaction);
+                                       maximum_compaction /* clear_all_soft_refs */,
+                                       maximum_compaction /* do_maximum_compaction */);
   }
 
   return NULL;
```
```diff
@@ -1154,7 +1169,7 @@ HeapWord* G1CollectedHeap::satisfy_failed_allocation(size_t word_size,
   HeapWord* result =
     satisfy_failed_allocation_helper(word_size,
                                      true,  /* do_gc */
-                                     false, /* clear_all_soft_refs */
+                                     false, /* maximum_collection */
                                      false, /* expect_null_mutator_alloc_region */
                                      succeeded);
```

```diff
@@ -1165,7 +1180,7 @@ HeapWord* G1CollectedHeap::satisfy_failed_allocation(size_t word_size,
   // Attempts to allocate followed by Full GC that will collect all soft references.
   result = satisfy_failed_allocation_helper(word_size,
                                             true,  /* do_gc */
-                                            true,  /* clear_all_soft_refs */
+                                            true,  /* maximum_collection */
                                             true,  /* expect_null_mutator_alloc_region */
                                             succeeded);
```

```diff
@@ -1176,7 +1191,7 @@ HeapWord* G1CollectedHeap::satisfy_failed_allocation(size_t word_size,
   // Attempts to allocate, no GC
   result = satisfy_failed_allocation_helper(word_size,
                                             false, /* do_gc */
-                                            false, /* clear_all_soft_refs */
+                                            false, /* maximum_collection */
                                             true,  /* expect_null_mutator_alloc_region */
                                             succeeded);
```

```diff
@@ -2834,15 +2849,6 @@ bool G1CollectedHeap::do_collection_pause_at_safepoint(double target_pause_time_
   }
 
   do_collection_pause_at_safepoint_helper(target_pause_time_ms);
-  if (should_upgrade_to_full_gc(gc_cause())) {
-    log_info(gc, ergo)("Attempting maximally compacting collection");
-    bool result = do_full_collection(false /* explicit gc */,
-                                     true /* clear_all_soft_refs */,
-                                     false /* do_maximum_compaction */);
-    // do_full_collection only fails if blocked by GC locker, but
-    // we've already checked for that above.
-    assert(result, "invariant");
-  }
   return true;
 }
```

```diff
@@ -516,6 +516,9 @@ class G1CollectedHeap : public CollectedHeap {
   // Callback from VM_G1CollectFull operation, or collect_as_vm_thread.
   virtual void do_full_collection(bool clear_all_soft_refs);
 
+  // Helper to do a full collection that clears soft references.
+  bool upgrade_to_full_collection();
+
   // Callback from VM_G1CollectForAllocation operation.
   // This function does everything necessary/possible to satisfy a
   // failed allocation request (including collection, expansion, etc.)
```
```diff
@@ -534,7 +537,7 @@ class G1CollectedHeap : public CollectedHeap {
   // Helper method for satisfy_failed_allocation()
   HeapWord* satisfy_failed_allocation_helper(size_t word_size,
                                              bool do_gc,
-                                             bool clear_all_soft_refs,
+                                             bool maximum_compaction,
                                              bool expect_null_mutator_alloc_region,
                                              bool* gc_succeeded);
```

```diff
@@ -94,14 +94,16 @@ void VM_G1TryInitiateConcMark::doit() {
     // request will be remembered for a later partial collection, even though
     // we've rejected this request.
     _whitebox_attached = true;
-  } else if (g1h->do_collection_pause_at_safepoint(_target_pause_time_ms)) {
-    _gc_succeeded = true;
-  } else {
+  } else if (!g1h->do_collection_pause_at_safepoint(_target_pause_time_ms)) {
     // Failure to perform the collection at all occurs because GCLocker is
     // active, and we have the bad luck to be the collection request that
     // makes a later _gc_locker collection needed. (Else we would have hit
     // the GCLocker check in the prologue.)
     _transient_failure = true;
+  } else if (g1h->should_upgrade_to_full_gc(_gc_cause)) {
```
@tschatzl (Contributor) · Jun 8, 2021

I think there is a pre-existing bug here: should_upgrade_to_full_gc always returns false here, because the should_do_concurrent_full_gc() check used there (which must return false for an upgrade) is always true for VM_G1TryInitiateConcMark.

Afaict this was introduced in https://bugs.openjdk.java.net/browse/JDK-8232588, where VM_G1CollectForAllocation was split into VM_G1TryInitiateConcMark and VM_G1CollectForAllocation, and should_upgrade_to_full_gc() was not updated.

Note that JDK-8232588 suggests not upgrading for a System.gc() call with -XX:+ExplicitGCInvokesConcurrent, but strangely adds the code to do so. Later, due to some refactoring error, the upgrade was disabled again.

I'll file a bug to investigate this (mess).


At the time of JDK-8232588 the should_do_concurrent_full_gc() check was preceded by a check of policy()->force_upgrade_to_full() that could make the full GC happen. That seems to have disappeared.

@kstefanj (Contributor, Author) · Jun 9, 2021

Yes, that was removed when we removed the support for old gen on alternate memory devices. I think having a bug/enhancement to investigate how to improve this is good.


The by-policy upgrade was removed by JDK-8256181: Remove Allocation of old generation on alternate memory devices functionality.

@kstefanj (Contributor, Author) · Jun 9, 2021

Exactly, and we could probably remove this check; the reason for keeping it was to change as little functionality as possible at this point.

```diff
+    _gc_succeeded = g1h->upgrade_to_full_collection();
+  } else {
+    _gc_succeeded = true;
   }
 }
```

```diff
@@ -138,10 +140,17 @@ void VM_G1CollectForAllocation::doit() {
   // Try a partial collection of some kind.
   _gc_succeeded = g1h->do_collection_pause_at_safepoint(_target_pause_time_ms);
 
-  if (_gc_succeeded && (_word_size > 0)) {
-    // An allocation had been requested. Do it, eventually trying a stronger
-    // kind of GC.
-    _result = g1h->satisfy_failed_allocation(_word_size, &_gc_succeeded);
+  if (_gc_succeeded) {
+    if (_word_size > 0) {
+      // An allocation had been requested. Do it, eventually trying a stronger
+      // kind of GC.
+      _result = g1h->satisfy_failed_allocation(_word_size, &_gc_succeeded);
+    } else if (g1h->should_upgrade_to_full_gc(_gc_cause)) {
+      // There has been a request to perform a GC to free some space. We have no
+      // information on how much memory has been asked for. In case there are
+      // absolutely no regions left to allocate into, do a full compaction.
+      _gc_succeeded = g1h->upgrade_to_full_collection();
+    }
   }
 }
```