8267703: runtime/cds/appcds/cacheObject/HeapFragmentationTest.java crashed with OutOfMemory #4225

Closed · wants to merge 3 commits
@@ -94,10 +94,15 @@ uint G1FullCollector::calc_active_workers() {
   uint current_active_workers = heap->workers()->active_workers();
   uint active_worker_limit = WorkerPolicy::calc_active_workers(max_worker_count, current_active_workers, 0);

+  // Finally consider the amount of used regions.
+  uint regions_per_worker = (uint) (HeapSizePerGCThread / G1HeapRegionSize);
+  uint used_worker_limit = MAX2((heap->num_used_regions() / regions_per_worker), 1u);
+
   // Update active workers to the lower of the limits.
-  uint worker_count = MIN2(heap_waste_worker_limit, active_worker_limit);
-  log_debug(gc, task)("Requesting %u active workers for full compaction (waste limited workers: %u, adaptive workers: %u)",
-                      worker_count, heap_waste_worker_limit, active_worker_limit);
+  uint worker_count = MIN3(heap_waste_worker_limit, active_worker_limit, used_worker_limit);
+  log_debug(gc, task)("Requesting %u active workers for full compaction (waste limited workers: %u, "
+                      "adaptive workers: %u, used limited workers: %u)",
+                      worker_count, heap_waste_worker_limit, active_worker_limit, used_worker_limit);
   worker_count = heap->workers()->update_active_workers(worker_count);
   log_info(gc, task)("Using %u workers of %u for full compaction", worker_count, max_worker_count);
@tschatzl (Contributor) commented on Jun 1, 2021:
That's pre-existing, but this will change the number of active workers for the rest of the garbage collection. That made some sense previously, as G1FullCollector::calc_active_workers() was typically very aggressive, but now it may limit other phases a bit, particularly marking, which distributes work on a per-reference basis.
Overall it might not make much difference, though, as we are talking about the case of a very lightly occupied heap.
I.e. some rough per-full-gc-phase worker calculation might be better, and it might be derived easily too.

@kstefanj (Contributor, Author) commented on Jun 1, 2021:
This was one of the reasons I went with using "just used regions" and skipped accounting for the fact that each worker will handle a set of regions. In most cases looking at used regions will not limit the workers much, and when it does, we don't have much work to do anyway. I've done some benchmarking and have not seen any significant regressions with this patch. The biggest problem was not using enough workers for the bitmap work.

Calculating workers per phase might be a good improvement to consider, but that would require some more refactoring.

@tschatzl (Contributor) commented on Jun 2, 2021:
Okay. If you think it would take too long, please file an issue for per-phase thread sizing in parallel GC (if there isn't one already).

@kstefanj (Contributor, Author) commented on Jun 2, 2021:

I'll file an RFE.