8267703: runtime/cds/appcds/cacheObject/HeapFragmentationTest.java crashed with OutOfMemory #4225

Closed · 3 commits
5 changes: 5 additions & 0 deletions src/hotspot/share/gc/g1/g1ConcurrentMark.cpp
@@ -680,6 +680,11 @@ void G1ConcurrentMark::cleanup_for_next_mark() {

 void G1ConcurrentMark::clear_next_bitmap(WorkGang* workers) {
   assert_at_safepoint_on_vm_thread();
+  // To avoid fragmentation, the full collection requesting to clear the bitmap
+  // might use fewer workers than are available. To ensure the bitmap is cleared
+  // as efficiently as possible, the number of active workers is temporarily
+  // increased to include all currently created workers.
+  WithUpdatedActiveWorkers update(workers, workers->created_workers());
   clear_bitmap(_next_mark_bitmap, workers, false);
 }
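The hunk above relies on `WithUpdatedActiveWorkers` being a scoped (RAII) helper: it raises the active-worker count on construction and restores the previous count when it goes out of scope, so the full GC's reduced worker limit only bypasses the bitmap-clearing phase. A minimal self-contained sketch of that pattern, with a hypothetical `WorkGang` stand-in (not the HotSpot types):

```cpp
#include <algorithm>

// Hypothetical stand-in for HotSpot's WorkGang: tracks how many worker
// threads exist and how many are currently allowed to run.
struct WorkGang {
  unsigned created;
  unsigned active;
  unsigned created_workers() const { return created; }
  unsigned active_workers() const { return active; }
  void set_active_workers(unsigned n) { active = std::min(n, created); }
};

// RAII helper mirroring the WithUpdatedActiveWorkers idea: raise the
// active-worker count for one scope, then restore the saved value.
class WithUpdatedActiveWorkers {
  WorkGang* _gang;
  unsigned _saved;
public:
  WithUpdatedActiveWorkers(WorkGang* gang, unsigned n)
      : _gang(gang), _saved(gang->active_workers()) {
    _gang->set_active_workers(n);
  }
  ~WithUpdatedActiveWorkers() {
    _gang->set_active_workers(_saved);
  }
};

unsigned active_during_scope() {
  WorkGang gang{8, 2};  // 8 created, but the full GC limited it to 2
  unsigned during;
  {
    // Bitmap clearing wants all created workers, regardless of the limit.
    WithUpdatedActiveWorkers update(&gang, gang.created_workers());
    during = gang.active_workers();
  }
  return during;
}

unsigned active_after_scope() {
  WorkGang gang{8, 2};
  {
    WithUpdatedActiveWorkers update(&gang, gang.created_workers());
  }
  return gang.active_workers();  // previous limit restored by the destructor
}
```

The destructor-based restore is what makes the increase safe: even if the clearing code returns early, the full GC's worker limit is reinstated.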

11 changes: 8 additions & 3 deletions src/hotspot/share/gc/g1/g1FullCollector.cpp
@@ -94,10 +94,15 @@ uint G1FullCollector::calc_active_workers() {
   uint current_active_workers = heap->workers()->active_workers();
   uint active_worker_limit = WorkerPolicy::calc_active_workers(max_worker_count, current_active_workers, 0);

+  // Finally, consider the number of used regions.
+  uint used_worker_limit = heap->num_used_regions();
+  assert(used_worker_limit > 0, "Should never have zero used regions.");
+
   // Update active workers to the lower of the limits.
-  uint worker_count = MIN2(heap_waste_worker_limit, active_worker_limit);
-  log_debug(gc, task)("Requesting %u active workers for full compaction (waste limited workers: %u, adaptive workers: %u)",
-                      worker_count, heap_waste_worker_limit, active_worker_limit);
+  uint worker_count = MIN3(heap_waste_worker_limit, active_worker_limit, used_worker_limit);
+  log_debug(gc, task)("Requesting %u active workers for full compaction (waste limited workers: %u, "
+                      "adaptive workers: %u, used limited workers: %u)",
+                      worker_count, heap_waste_worker_limit, active_worker_limit, used_worker_limit);
   worker_count = heap->workers()->update_active_workers(worker_count);
   log_info(gc, task)("Using %u workers of %u for full compaction", worker_count, max_worker_count);
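The core of the patch is switching from a two-way to a three-way minimum: the number of used heap regions now caps the worker count, since full-GC compaction hands out whole regions and extra workers beyond that have nothing to do. A simplified, self-contained model of that clamping (function and parameter names are illustrative, not the HotSpot API; `MIN3` is modeled with `std::min` over an initializer list):

```cpp
#include <algorithm>

// Simplified model of the patched G1FullCollector::calc_active_workers()
// logic: request the minimum of three limits.
unsigned calc_active_workers(unsigned heap_waste_worker_limit,
                             unsigned active_worker_limit,
                             unsigned used_region_count) {
  // HotSpot's MIN3 is just a nested minimum of three values.
  return std::min({heap_waste_worker_limit,
                   active_worker_limit,
                   used_region_count});
}
```

For example, on a mostly empty heap with only 3 used regions, `calc_active_workers(16, 12, 3)` yields 3: each requested worker still has at least one region to compact, which is the fragmentation scenario this fix targets.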
Comment on lines +105 to 107

Contributor:
That's pre-existing, but this will change the number of active workers for the rest of the garbage collection. That made some sense previously, as G1FullCollector::calc_active_workers() was typically very aggressive, but now it may limit other phases a bit, particularly marking, which distributes work on a per-reference basis.
Overall it might not make much difference, though, since we are talking about the case of a very lightly occupied heap.
I.e. some rough per-full-gc-phase worker count might be better, and it might be derived easily too.
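The reviewer's per-phase idea can be sketched roughly as follows. This is purely illustrative: none of these names exist in HotSpot, and the phase split is an assumption. The point is that the region-count cap only makes sense for compaction, which parallelizes over whole regions, while marking could keep the larger adaptive count:

```cpp
#include <algorithm>

// Hypothetical per-phase worker sizing for a full GC (illustrative only).
enum class FullGCPhase { Mark, Prepare, Adjust, Compact };

unsigned workers_for_phase(FullGCPhase phase,
                           unsigned adaptive_limit,
                           unsigned used_regions) {
  switch (phase) {
    case FullGCPhase::Compact:
      // Compaction distributes whole regions, so more workers than
      // used regions cannot help.
      return std::min(adaptive_limit, used_regions);
    default:
      // Marking distributes work per reference, so keep the
      // adaptive worker count.
      return adaptive_limit;
  }
}
```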

Contributor Author:

This was one of the reasons I went with using "just used regions" and skipped factoring in that each worker will handle a set of regions. In most cases, looking at used regions will not limit the workers much, and when it does, we don't have much work to do anyway. I've done some benchmarking and not seen any significant regressions with this patch. The biggest problem was not using enough workers for the bitmap work.

Calculating workers per phase might be a good improvement to consider, but that would require some more refactoring.

Contributor:

Okay. If you think it would take too long, please file an issue for per-phase thread sizing in parallel gc then (if there isn't one already).

Contributor Author:

I'll file an RFE.

