8278824: Uneven work distribution when scanning heap roots in G1
Reviewed-by: phh
Backport-of: b4b0328d62d9a9646f2822c361e41001bf0d4aa0
William Kemper authored and Paul Hohensee committed Jan 6, 2022
1 parent de2e289 commit 3b5fc8c
Showing 1 changed file with 2 additions and 2 deletions.
src/hotspot/share/gc/g1/g1RemSet.cpp (2 additions, 2 deletions)

@@ -107,15 +107,15 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
 // within a region to claim. Dependent on the region size as proxy for the heap
 // size, we limit the total number of chunks to limit memory usage and maintenance
 // effort of that table vs. granularity of distributing scanning work.
-// Testing showed that 8 for 1M/2M region, 16 for 4M/8M regions, 32 for 16/32M regions
+// Testing showed that 64 for 1M/2M region, 128 for 4M/8M regions, 256 for 16/32M regions
 // seems to be such a good trade-off.
 static uint get_chunks_per_region(uint log_region_size) {
   // Limit the expected input values to current known possible values of the
   // (log) region size. Adjust as necessary after testing if changing the permissible
   // values for region size.
   assert(log_region_size >= 20 && log_region_size <= 25,
          "expected value in [20,25], but got %u", log_region_size);
-  return 1u << (log_region_size / 2 - 7);
+  return 1u << (log_region_size / 2 - 4);
 }

 uint _scan_chunks_per_region; // Number of chunks per region.
