
Fix slow execution when many breakpoints are used #14953

Merged: 8 commits, May 23, 2023

Conversation

nohwnd (Contributor) commented Mar 7, 2021

PR Summary

PR Context

In Pester we use breakpoints for CodeCoverage and can set thousands of them, which makes script execution really slow: on every sequence point, every breakpoint is inspected to see whether it should be bound. This PR uses dictionaries to split breakpoints by path and by sequence point index, making that lookup fast.
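To illustrate that layout, here is a minimal, self-contained sketch; the type and member names are invented for the example and are not the actual fields in the engine's debugger code. Breakpoints are grouped first by script path and then by sequence point index, so the per-sequence-point check becomes two dictionary lookups.

using System;
using System.Collections.Concurrent;

// Hypothetical stand-in for the engine's LineBreakpoint; only what the sketch needs.
public sealed record LineBp(string Script, int SequencePointIndex, int Line);

public sealed class PendingBreakpointIndex
{
    // script path -> (sequence point index -> pending breakpoint at that point)
    private readonly ConcurrentDictionary<string, ConcurrentDictionary<int, LineBp>> _byScript =
        new(StringComparer.OrdinalIgnoreCase);

    public void Add(LineBp bp) =>
        _byScript.GetOrAdd(bp.Script, _ => new ConcurrentDictionary<int, LineBp>())[bp.SequencePointIndex] = bp;

    // Called on every sequence point hit: two keyed lookups instead of a scan
    // over every breakpoint set in the session.
    public bool TryGet(string script, int sequencePointIndex, out LineBp bp)
    {
        bp = default;
        return _byScript.TryGetValue(script, out var perScript)
            && perScript.TryGetValue(sequencePointIndex, out bp);
    }
}

Lookups are case-insensitive on the script path, matching the OrdinalIgnoreCase comparisons already used in the debugger code.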

PR Checklist

@nohwnd nohwnd requested a review from daxian-dbw as a code owner March 7, 2021 12:57
@ghost ghost assigned rjmholt Mar 7, 2021
@@ -482,9 +482,6 @@ internal bool TrySetBreakpoint(string scriptFile, FunctionContext functionContex
{
Diagnostics.Assert(SequencePointIndex == -1, "shouldn't be trying to set on a pending breakpoint");

if (!scriptFile.Equals(this.Script, StringComparison.OrdinalIgnoreCase))
nohwnd (Contributor Author):

This is only called from a single place, after we have already taken the breakpoints out of the collection linked to the current file, so we know they belong to the current file. The check is just unnecessary overhead.

{
if (item.IsScriptBreakpoint && item.Script.Equals(functionContext._file, StringComparison.OrdinalIgnoreCase))
if (dictionary.Count > 0)
nohwnd (Contributor Author):

I am not sure why IsScriptBreakpoint was checked here, but it was not re-checked anywhere else. SetPendingBreakpoints below is called without the list of breakpoints to set, and internally it only checks the filepath from the function context. So I skipped the check to avoid looping in case there are thousands of breakpoints in one file.


breakpoints = TriggerBreakpoints(breakpoints);
if (breakpoints.Count > 0)
if (functionContext._boundBreakpoints.TryGetValue(functionContext._currentSequencePointIndex, out var bps))
nohwnd (Contributor Author):

This is the meat of the improvement when looking up breakpoints: instead of looping over all breakpoints in the file, we get them from the map keyed by sequence point index.
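Schematically, the difference is a linear scan versus a keyed lookup. A small self-contained sketch, with made-up helper names rather than the engine's actual code:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

internal static class SequencePointLookup
{
    // Before (schematic): every sequence point hit scans all breakpoints bound in the file.
    internal static List<int> HitsByScan(List<(int SeqIndex, int Id)> boundInFile, int currentSeqIndex) =>
        boundInFile.Where(bp => bp.SeqIndex == currentSeqIndex).Select(bp => bp.Id).ToList();

    // After (schematic): bound breakpoints are keyed by sequence point index,
    // so a hit is a single TryGetValue.
    internal static List<int> HitsByIndex(ConcurrentDictionary<int, List<int>> boundBySeqIndex, int currentSeqIndex) =>
        boundBySeqIndex.TryGetValue(currentSeqIndex, out var hits) ? hits : new List<int>();
}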

Collaborator:

Are you saying this is the single place where we get a performance improvement?
If so, I wonder, did you try to unroll the LINQ?

nohwnd (Contributor Author), Mar 10, 2021:

This is not the only place where we get the improvement. The improvement comes from:

  1. Storing the breakpoints mapped by path, and then by sequence point, because that is how the breakpoints are queried. This avoids unnecessary looping.
  2. Not moving breakpoints into a new dictionary every time we inspect them. This avoids unnecessary array allocation. https://github.com/PowerShell/PowerShell/pull/14953/files#diff-0a4e4bd42dcf35b5e74e88bce4adba02f6d6f823b698647e3ee706d007b1915bL2051

}
}

_pendingBreakpoints = new ConcurrentDictionary<int, LineBreakpoint>(newPendingBreakpoints);
// Here could check if all breakpoints for the current functionContext were bound, but because there is no atomic
nohwnd (Contributor Author):

How would you want this solved? This should happen rarely so I might lock here. Or just keep it as is and don't clean up the dictionary of files.

Collaborator:

I don't fully understand the problem we're trying to solve here, but if you/@PaulHigin is able to explain it to me, I can try to weigh in

nohwnd (Contributor Author):

This is the pending breakpoints collection:

_pendingBreakpoints = new ConcurrentDictionary<string, ConcurrentDictionary<int, LineBreakpoint>>();

Pending breakpoints is a dictionary keyed by file path, where each value is a dictionary keyed by sequence point index. When all pending breakpoints for a file have been bound, it would be nice to remove that file's key from the _pendingBreakpoints dictionary.

Something like this:

if (_pendingBreakpoints.TryGetValue(currentScriptFile, out var bpsInThisScript) && bpsInThisScript.IsEmpty) {
    _pendingBreakpoints.TryRemove(currentScriptFile, out _);
}

Unfortunately the check and the removal are not atomic together, so there is a race condition between the first line and the second. If someone added a breakpoint right after we checked for emptiness, in theory we could lose breakpoints.

This seems like a rare condition and can be solved in a few ways.

What I did here is simply leave the key in the dictionary. This means one extra string plus an empty concurrent dictionary stays in memory for each file that had breakpoints. I am guessing there are rarely more than 100 distinct files with breakpoints per PowerShell session, so this seems okay-ish, but it is still a bit dirty.

The race condition seems very rare, and simply checking whether the dictionary we removed had an item added to it in the meantime, and merging it back in if so, would reduce the chance of losing a breakpoint even further. That introduces another race condition, but the probability of hitting both at exactly the right time seems vanishingly small. Something like this:

if (_pendingBreakpoints.TryGetValue(currentScriptFile, out var bpsInThisScript) && bpsInThisScript.IsEmpty) {
    if (_pendingBreakpoints.TryRemove(currentScriptFile, out var removedBps) && !removedBps.IsEmpty) {
        // someone added a breakpoint after we counted but before we removed
        // merge it back into _pendingBreakpoints
        // this would happen extremely rarely
    }
}
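A sketch of what that merge-back could look like, reusing the names from the snippet above; TryRemove is the real ConcurrentDictionary method, the rest is only illustrative and not part of this PR:

if (_pendingBreakpoints.TryGetValue(currentScriptFile, out var bpsInThisScript) && bpsInThisScript.IsEmpty) {
    if (_pendingBreakpoints.TryRemove(currentScriptFile, out var removedBps) && !removedBps.IsEmpty) {
        // A breakpoint was added between the emptiness check and the removal;
        // put the removed entries back so they are not lost.
        var restored = _pendingBreakpoints.GetOrAdd(currentScriptFile, _ => new ConcurrentDictionary<int, LineBreakpoint>());
        foreach (var pair in removedBps) {
            restored.TryAdd(pair.Key, pair.Value);
        }
    }
}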

Contributor:

The race condition needs to be better defined here. How is setting a breakpoint subject to a race here, via API? PowerShell scripts run on a single thread.

nohwnd (Contributor Author):

That is why I was asking how you want this solved: I don't know enough about the interaction between pending breakpoints and code execution. Maybe there is no way the pending breakpoints collection could be changed while this code is running, because it all runs on a single thread, or maybe adding a breakpoint in the VS Code UI calls into the PowerShell process and sets the breakpoint from a different thread.

I just assumed it is the latter, which is why ConcurrentDictionary was used in the original code and is also used in the new code.

nohwnd (Contributor Author) commented Mar 7, 2021

In my measurements, running all my Pester tests takes ~40s without Code Coverage and ~300s with Code Coverage, roughly 7.5 times as long. Code Coverage sets around 7k breakpoints for my codebase.

With the fix, it runs in ~40s without CC and ~42s with CC, and that includes all the overhead of setting up breakpoints and calculating and printing the coverage report, so the execution itself is probably <1% slower with 7000 breakpoints enabled.

@nohwnd nohwnd mentioned this pull request Mar 8, 2021
5 tasks
@iSazonov iSazonov requested a review from PaulHigin March 8, 2021 17:22
nohwnd (Contributor Author) commented Mar 15, 2021

@PaulHigin Polite nudge :) Could I get a review please? This would be a huge step forward for Pester users. Code coverage performance was always a pain point.

rjmholt (Collaborator) left a comment:

Please change any instances of var where the variable type isn't on the same line to the explicit type
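For example (illustrative only; the exact value type of _boundBreakpoints may differ):

// Instead of:
if (functionContext._boundBreakpoints.TryGetValue(functionContext._currentSequencePointIndex, out var bps))

// spell the type out, since it is not visible on the line:
if (functionContext._boundBreakpoints.TryGetValue(functionContext._currentSequencePointIndex, out List<LineBreakpoint> breakpoints))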


breakpoints = TriggerBreakpoints(breakpoints);
if (breakpoints.Count > 0)
if (functionContext._boundBreakpoints.TryGetValue(functionContext._currentSequencePointIndex, out var bps))
Collaborator:

I would change the var here

Collaborator:

Also please rename bps to something like breakpoints

Comment on lines 1575 to 1581
if (breakpoints.Count > 0)
{
breakpoints = TriggerBreakpoints(breakpoints);
if (breakpoints.Count > 0)
{
StopOnSequencePoint(functionContext, breakpoints);
}
}
Collaborator:

I know it's just a style thing and it's not your fault @nohwnd, but I don't love this double count check.

Ideally we could just put the check inside TriggerBreakpoints.

If I were really trying to make this suit my desired style, I'd make them into extension methods:

breakpoints.Trigger().StopOnSequencePoint(functionContext);
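A rough sketch of the "check inside TriggerBreakpoints" option (hypothetical, not in the diff; the signature is assumed from the call site, and TriggerBreakpointsCore is an invented name for the existing triggering logic):

private List<Breakpoint> TriggerBreakpoints(List<Breakpoint> breakpoints)
{
    if (breakpoints.Count == 0)
    {
        // Nothing to trigger; reuse the incoming list so callers don't need the outer
        // count check and no extra list is allocated.
        return breakpoints;
    }

    return TriggerBreakpointsCore(breakpoints);
}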

nohwnd (Contributor Author):

I don't love it either, but it prevents creating another list in TriggerBreakpoints when we call it with an empty list of breakpoints. And moving the check into TriggerBreakpoints makes this code path less obvious.

I also don't love that the breakpoints variable is reused, but I went for the minimal number of changes in this PR. If you insist on changing it I can do it. As you say, it's style related. Should I make the change?

Collaborator:

I think it's fine as is for now

PaulHigin (Contributor):

I'll try to look at this later this week. The problem with changing debugging code is that it is an interactive activity and our tests don't cover everything, so I am concerned about introducing regressions. But I should have time to look later this week.

iSazonov (Collaborator):

The problem with changing debugging code is that it is an interactive activity and our tests don't cover everything. So I am concerned about introducing regressions

Could we add more xUnit tests? For which methods?

PaulHigin (Contributor):

It looks like I won't be able to get to this this week. Sorry for the delay, and I'll make it a higher priority for next week.

PaulHigin (Contributor) commented Mar 22, 2021

@PowerShell/powershell-committee

Marking this for committee review, as this is a significant change to the debugging code.
This change is to improve performance when using thousands of breakpoints for script code profiling.
Note that the debugging code was not originally intended for this use, even though a number of third party tools do it.
I thought that Jason added profiling support hooks in V3.0, and created a profiling prototype for community members to pick up, but I don't know what happened after that.

Debugging is interactive and our tests don't cover many scenarios, and my main concern is regressions.
I feel this should be marked as experimental, and/or we should get the changes in ASAP so that any regression bugs can be found and fixed.

@PaulHigin PaulHigin added the Review - Committee The PR/Issue needs a review from the PowerShell Committee label Mar 22, 2021
iSazonov (Collaborator):

I thought that Jason added profiling support hooks in V3.0, and created a profiling prototype for community members to pick up, but I don't know what happened after that.

@PaulHigin This is implemented in #13673

SteveL-MSFT (Member):

@PowerShell/powershell-committee reviewed this; we understand that Pester may be depending on using the debugger for compatibility reasons with older PowerShell. We recommend looking at the profiling work as a means to hook into PowerShell for a future Pester. For this PR, we ask that it be wrapped as an ExperimentalFeature and merged early to verify there are no unintended side effects.

@SteveL-MSFT SteveL-MSFT added Committee-Reviewed PS-Committee has reviewed this and made a decision and removed Review - Committee The PR/Issue needs a review from the PowerShell Committee labels Mar 24, 2021
iSazonov (Collaborator):

I want to understand what tests we should add to avoid regressions.

@ghost ghost added the Review - Needed The PR is being reviewed label Apr 2, 2021
ghost commented Apr 2, 2021

This pull request has been automatically marked as Review Needed because there has not been any activity for 7 days.
Maintainer, please provide feedback and/or mark it as Waiting on Author.

@daxian-dbw daxian-dbw assigned anmenaga and unassigned rjmholt Nov 3, 2021
@ghost ghost removed the Review - Needed The PR is being reviewed label Nov 3, 2021
@ghost ghost added the Review - Needed The PR is being reviewed label Nov 10, 2021
ghost commented Nov 10, 2021

This pull request has been automatically marked as Review Needed because there has not been any activity for 7 days.
Maintainer, please provide feedback and/or mark it as Waiting on Author.

@iSazonov iSazonov mentioned this pull request Nov 10, 2021
14 tasks
@daxian-dbw daxian-dbw added the CommunityDay-Large A large PR that the PS team has identified to prioritize to review label May 15, 2023
PaulHigin (Contributor):

@nohwnd Sorry for the long delay ... I forgot all about this PR. I reviewed these changes and feel the perf-inspired changes are good and that we should take them. I'd like to get the changes in so that they can bake for a while. I am not concerned about deallocating an empty sequence point dictionary and agree with you it is not that impactful.

@ghost ghost removed the Review - Needed The PR is being reviewed label May 15, 2023
PaulHigin (Contributor) left a comment:

I am fine with these changes. But a rebase is probably needed since this PR is quite old.

@pull-request-quantifier-deprecated

This PR has 84 quantified lines of changes. In general, a change size of up to 200 lines is ideal for the best PR experience!


Quantification details

Label      : Small
Size       : +45 -39
Percentile : 33.6%

Total files changed: 3

Change summary by file extension:
.cs : +45 -39

Change counts above are quantified counts, based on the PullRequestQuantifier customizations.

@daxian-dbw daxian-dbw merged commit d8decdc into PowerShell:master May 23, 2023
daxian-dbw (Member):

Thanks @nohwnd for your contribution!

@daxian-dbw daxian-dbw added the CL-Engine Indicates that a PR should be marked as an engine change in the Change Log label May 23, 2023
nohwnd (Contributor Author) commented May 23, 2023

Oh, nice. :) Thanks for getting it merged.

ghost commented Jun 29, 2023

🎉 v7.4.0-preview.4 has been released, which incorporates this pull request. 🎉


andyleejordan (Member):

Something in this has broken the VS Code extension's debugger, and I'm not yet sure what. It doesn't look like the APIs we're using have changed; we're just calling SetLineBreakpoints, but its internal implementation (adding pending breakpoints) has changed.

andyleejordan added a commit that referenced this pull request Jul 28, 2023
This reverts commit d8decdc.

This commit broke the VS Code extension's debugger, and should be
reverted until such time that the root cause is found and a fix applied.