JIT: use blend rather than repair for profile inconsistencies
If we have a partial profile, the current count reconstruction will
adjust the exit likelihood of some loop exit when it hits a capped loop.
But for multiple-exit loops we might wish to see some profile flow out
of all exits, not just one.

In `ludcmp`, reconstruction ends up sending all of the profile weight down an
early-return path, leaving the bulk of the method with zero counts.

Instead of trying increasingly elaborate repair schemes, we will now use
blend mode for these sorts of problems; this gives a more balanced count
redistribution.
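
As a rough illustration of the difference (not the JIT's actual reconstruction code), consider a loop whose counts are capped by a partial profile and which has two exits with synthesized likelihoods of 0.9 and 0.1. A repair-style fix pushes all of the residual flow out of whichever single exit it adjusts, while a blend-style fix spreads it across both exits in proportion to their likelihoods. The numbers below are made up for the sketch:

```cpp
#include <cstdio>

// Hypothetical numbers: a capped loop with two exits. This is only the
// count-redistribution arithmetic, not the JIT's reconstruction algorithm.
int main()
{
    const double cappedFlow    = 1000.0;     // flow that must leave the loop
    const double likelihood[2] = {0.9, 0.1}; // synthesized exit likelihoods

    // Repair-style: a single exit absorbs all of the residual flow,
    // so the other exit (and everything downstream of it) stays at zero.
    const double repairExit[2] = {cappedFlow, 0.0};

    // Blend-style: redistribute the flow across all exits in proportion
    // to their likelihoods, so every exit sees some profile flow.
    const double blendExit[2] = {cappedFlow * likelihood[0], cappedFlow * likelihood[1]};

    printf("repair: exit0=%.0f exit1=%.0f\n", repairExit[0], repairExit[1]);
    printf("blend : exit0=%.0f exit1=%.0f\n", blendExit[0], blendExit[1]);
    return 0;
}
```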

I also updated blend to use the same logic as repair if a block has zero
weight, since presumably whatever likelihood was assigned there during
reconstruction is not well supported.
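
Conceptually, the blend pass now treats a block the same way repair would when there is nothing trustworthy to blend against: if the existing outgoing likelihoods sum to zero, or the block's own weight is zero, it just keeps the synthesized likelihoods; otherwise it rescales the existing ones and blends. A standalone sketch of that per-block decision, with illustrative names (`BlockProfile`, `useSynthesizedLikelihoods`) that are mine rather than the JIT's:

```cpp
#include <cmath>
#include <vector>

// Mirrors the fuzzy comparisons the JIT uses; types and helpers here are
// simplified stand-ins for weight_t / fgProfileWeightsEqual.
using weight_t = double;
constexpr weight_t epsilon = 0.001;

static bool weightsEqual(weight_t a, weight_t b)
{
    return std::fabs(a - b) < epsilon;
}

struct BlockProfile
{
    weight_t              blockWeight;         // observed block weight
    std::vector<weight_t> existingLikelihoods; // per-successor edge likelihoods
};

// Returns true if the synthesized likelihoods should be used as-is,
// false if the existing likelihoods are worth rescaling and blending.
bool useSynthesizedLikelihoods(const BlockProfile& b)
{
    weight_t sum = 0.0;
    for (weight_t l : b.existingLikelihoods)
    {
        sum += l;
    }

    const bool unlikely = weightsEqual(sum, 0.0);           // no existing likelihood at all
    const bool zero     = weightsEqual(b.blockWeight, 0.0); // block never observed running

    return unlikely || zero;
}
```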

Fixes the `ludcmp` regression with PGO over no PGO, noted in
dotnet#84264 (comment)
AndyAyersMS committed Apr 21, 2023
1 parent 02ddff3 commit 6244f7f
Showing 2 changed files with 15 additions and 9 deletions.
src/coreclr/jit/fgprofile.cpp (3 additions, 3 deletions)
@@ -2697,12 +2697,12 @@ PhaseStatus Compiler::fgIncorporateProfileData()
         // encountered major issues. This is perhaps too drastic. Consider
         // at least keeping the class profile data, or perhaps enable full synthesis.
         //
-        // If profile incorporation hit fixable problems, run synthesis in repair mode.
+        // If profile incorporation hit fixable problems, run synthesis in blend mode.
         //
         if (fgPgoHaveWeights && !dataIsGood)
         {
-            JITDUMP("\nIncorporated count data had inconsistencies; repairing profile...\n");
-            ProfileSynthesis::Run(this, ProfileSynthesisOption::RepairLikelihoods);
+            JITDUMP("\nIncorporated count data had inconsistencies; blending profile...\n");
+            ProfileSynthesis::Run(this, ProfileSynthesisOption::BlendLikelihoods);
         }
     }

src/coreclr/jit/fgprofilesynthesis.cpp (12 additions, 6 deletions)
@@ -620,9 +620,12 @@ void ProfileSynthesis::BlendLikelihoods()
             case BBJ_COND:
             case BBJ_SWITCH:
             {
-                // Capture the existing weights and assign new
-                // weights based on heuristics.
-                weight_t sum = SumOutgoingLikelihoods(block, &likelihoods);
+                // Capture the existing weights and assign new likelihoods based on synthesis.
+                //
+                weight_t const sum = SumOutgoingLikelihoods(block, &likelihoods);
+                bool const unlikely = Compiler::fgProfileWeightsEqual(sum, 0.0, epsilon);
+                bool const consistent = Compiler::fgProfileWeightsEqual(sum, 1.0, epsilon);
+                bool const zero = Compiler::fgProfileWeightsEqual(block->bbWeight, 0.0, epsilon);

                 if (block->bbJumpKind == BBJ_COND)
                 {
@@ -633,10 +636,12 @@
                     AssignLikelihoodSwitch(block);
                 }

-                if (Compiler::fgProfileWeightsEqual(sum, 0.0, epsilon))
+                if (unlikely || zero)
                 {
-                    // Existing likelihood was zero. Go with the synthesized likelihoods.
-                    JITDUMP("Existing likelihoods in " FMT_BB " were zero, synthesizing new ones\n", block->bbNum);
+                    // Existing likelihood was zero, or profile weight was zero. Just use synthesis likelihoods.
+                    //
+                    JITDUMP("%s in " FMT_BB " was zero, using synthesized likelihoods\n",
+                            unlikely ? "Existing likelihood" : "Block weight", block->bbNum);
                     break;
                 }

@@ -645,6 +650,7 @@
                 if (!Compiler::fgProfileWeightsEqual(sum, 1.0, epsilon))
                 {
                     // Existing likelihood was too low or too high. Scale.
+                    //
                     weight_t scale = 1.0 / sum;
                     JITDUMP("Scaling old likelihoods in " FMT_BB " by " FMT_WT "\n", block->bbNum, scale);
                     for (iter = likelihoods.begin(); iter != likelihoods.end(); iter++)
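
For blocks that keep their existing likelihoods, the rescaled values are then combined with the synthesized ones further down in `BlendLikelihoods` (not shown in this hunk). A hedged sketch of what such a weighted blend could look like; the `blendFactor` value, function name, and vector-based edge representation are assumptions for illustration, not the actual JIT code:

```cpp
#include <cstddef>
#include <vector>

using weight_t = double;

// Assumed blend step: combine an existing (already rescaled) likelihood with
// the freshly synthesized one using a fixed blend factor. The factor value
// and the shapes of these containers are illustrative only.
constexpr weight_t blendFactor = 0.99;

void blendEdgeLikelihoods(std::vector<weight_t>&       edgeLikelihoods, // synthesized, updated in place
                          const std::vector<weight_t>& oldLikelihoods)  // existing, rescaled to sum to 1
{
    for (size_t i = 0; i < edgeLikelihoods.size(); i++)
    {
        // Weight heavily toward the existing profile, but mix in some of the
        // synthesized likelihood so no successor ends up with exactly zero.
        edgeLikelihoods[i] = (blendFactor * oldLikelihoods[i]) + ((1.0 - blendFactor) * edgeLikelihoods[i]);
    }
}
```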
