
Update dotnet-runtime version #17472

merged 1 commit into from Jun 2, 2022



@TravisEz13 TravisEz13 commented Jun 2, 2022

PR Summary

PowerShell 7.2 uses .NET 6.0, not 7.0.

PR Context

PR Checklist


pull-request-quantifier bot commented Jun 2, 2022

This PR has 2 quantified lines of changes. In general, a change size of up to 200 lines is ideal for the best PR experience!

Quantification details

Label      : Extra Small
Size       : +1 -1
Percentile : 0.8%

Total files changed: 1

Change summary by file extension:
.psm1 : +1 -1

Change counts above are quantified counts, based on the PullRequestQuantifier customizations.

Why proper sizing of changes matters

Optimal pull request sizes drive a better, more predictable PR flow as they
strike a balance between PR complexity and PR review overhead. PRs within the
optimal size (typically small or medium sized PRs) mean:

  • Fast and predictable releases to production:
    • Optimal size changes are more likely to be reviewed faster with fewer iterations.
    • Similarity in low PR complexity drives similar review times.
  • Review quality is likely higher as complexity is lower:
    • Bugs are more likely to be detected.
    • Code inconsistencies are more likely to be detected.
  • Knowledge sharing is improved among the participants:
    • Small portions can be assimilated better.
  • Better engineering practices are exercised:
    • Solving big problems by dividing them into well-contained, smaller problems.
    • Exercising separation of concerns within the code changes.

What can I do to optimize my changes

  • Use the PullRequestQuantifier to quantify your PR accurately
    • Create a context profile for your repo using the context generator
    • Exclude files that are not necessary to be reviewed or do not increase the review complexity. Example: Autogenerated code, docs, project IDE setting files, binaries, etc. Check out the Excluded section from your prquantifier.yaml context profile.
    • Understand your typical change complexity, drive towards the desired complexity by adjusting the label mapping in your prquantifier.yaml context profile.
    • Only use the labels that matter to you, see context specification to customize your prquantifier.yaml context profile.
  • Change your engineering behaviors
    • For PRs that fall outside of the desired spectrum, review the details and check if:
      • Your PR could be split in smaller, self-contained PRs instead
      • Your PR only solves one particular issue. (For example, don't refactor and code new features in the same PR).
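The Excluded section and label mapping referenced above live in the repository's prquantifier.yaml context profile. A hypothetical sketch of such a profile is shown below; the key names, patterns, and threshold values are illustrative assumptions, not the authoritative schema — consult the PullRequestQuantifier context specification for the real format.

```yaml
# Hypothetical prquantifier.yaml sketch — keys and values are
# illustrative assumptions; see the PullRequestQuantifier context
# specification for the authoritative schema.
Context:
  Excluded:
    - '*.md'           # docs do not add review complexity
    - '*.Designer.cs'  # autogenerated code
    - 'bin/**'         # binaries and build output
  Thresholds:          # map quantified change size to the labels you care about
    - Value: 9
      Label: Extra Small
    - Value: 49
      Label: Small
    - Value: 199
      Label: Medium
```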

How to interpret the change counts in git diff output

  • One line was added: +1 -0
  • One line was deleted: +0 -1
  • One line was modified: +1 -1 (git diff has no concept of a modified line; it
    counts a modification as one addition plus one deletion)
  • Change percentiles: Change characteristics (addition, deletion, modification)
    of this PR in relation to all other PRs within the repository.
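The counting rules above can be reproduced directly with `git diff --numstat`, which prints per-file added and deleted line counts. A minimal sketch in a throwaway repository (the file name is hypothetical, chosen to mirror this PR's single .psm1 change):

```shell
# Create a scratch repo and commit a one-line file.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'old line\n' > file.psm1
git add file.psm1
git commit -qm 'initial'

# Modify that single line: git diff reports it as one deletion plus
# one addition (+1 -1), matching this PR's Extra Small size.
printf 'new line\n' > file.psm1
git diff --numstat    # prints: 1       1       file.psm1
```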


@TravisEz13 TravisEz13 merged commit 896466b into release/v7.2.5 Jun 2, 2022
12 of 24 checks passed
@TravisEz13 TravisEz13 deleted the TravisEz13-patch-7 branch Jun 2, 2022
@TravisEz13 TravisEz13 mentioned this pull request Jun 7, 2022
22 tasks
@adityapatwardhan adityapatwardhan added the CL-BuildPackaging label Jun 16, 2022
msftbot bot commented Jun 21, 2022

🎉v7.2.5 has been released which incorporates this pull request.🎉

