# Pattern Idea: Scoring Grid (#351) #352

Open - wants to merge 12 commits into base: main

114 changes: 114 additions & 0 deletions - patterns/1-initial/scoring-grid.md
## Title

Software Project Scoring Grid
**Collaborator @robtuley** commented (Sep 11, 2021):

This name suggests to me you are scoring code quality or similar, but in the pattern it's scoring setup to help contributors/users. The best name I could think of was "Repository Scoring Grid", or "Repository Convention Score" (like the "Repository Activity Score" pattern) but it depends what you want to convey.

**Member** commented:

Rob also mentioned "scorecard" elsewhere. Does scorecard have a specific meaning in the industry? I feel like I have heard it a couple of times already, but might be wrong.

So another alternative could be:

Repository Scorecard

**Author** commented:

Added a couple of specific in-line comments but also have some general feedback: I'm unclear whether this is a pattern to help improve InnerSource repository setup (i.e. help contributors make better quality contributions), or whether it's a pattern to improve any and all aspects of engineering. I think both are valid things, but an InnerSource setup focus is probably more relevant specifically here?

I think it would benefit from a more specific focus on InnerSource setup in context/forces sections for example, where I see the example scoring grid under solutions already being InnerSource orientated.

So evidently my scorecard comes from a place of Code Quality and Engineering Leadership - the scorecard concept was inspired by work done at a previous employer to roll out InnerSource at the team and repository level. From a Code Quality perspective, the overlap between InnerSource best practice and Community best practice (working with a team) is quite strong. I think it would make sense to rewrite the example scorecard to highlight InnerSource best practice, e.g. from the InnerSource Maturity Model pattern - I can give that a go since it would be more appropriate to the DNA of the idea.

We use this pattern in the 'InnerSource setup' context, and there are plenty of examples in our wider org in the more general engineering context (e.g. security scorecards, operational scorecards, engineering or test maturity assessments, delivery health dashboards).

Good point; I think it's worth calling out that the Scorecard pattern can be used to measure other things, as exemplified by your list, and that the general pattern can be expanded if you want to promote other areas of best practice.

**Author** commented:

I think the word repository should be used consistently in the pattern - I was struggling between project/service/codebase - but repository ("git repo") seems like the right focal point.

As for title variations:

  • Repository Scoring Grid
  • Repository Score Grid
  • Repository Scorecard

A similar pattern common in the industry would be a Career Path Matrix or Job Levelling Matrix e.g. https://lattice.com/library/what-is-a-job-leveling-matrix

So an alternative name could be:

  • Repository Maturity Matrix

I think a scorecard is an artefact of scoring, based on the grid - so my original verb "Scoring" is correct: the grid is used for scoring, i.e. producing scorecards. My title preference is therefore "Repository Scoring Grid", and the example should be refactored to focus on InnerSource best practice, based on practical things at the repository level that intersect with the Maturity Model.

**Member** commented:

Naming is hard :)

Adding to the complexity: we already have a pattern called Maturity Model, which we would not want to get confused with this new pattern.

**Collaborator @robtuley** commented (Sep 13, 2021):

A thought -- how about "Repository Scoring" ..?

It occurs to me the underlying pattern here is to guide the key stakeholders of a repository to improve in a structured, measurable way by scoring, which also allows a healthy dose of reporting/gamification activity/prioritisation of effort when there exists a larger portfolio.

The fact you use a 'scoring grid' with dimensions and grades is one way of doing the scoring. There are others; e.g. a scorecard I'd consider to be slightly different in that its primary purpose is to have a single overall grade or score that is easy to communicate, and the 'card' bit means there is additional detail that tells you why, so you can improve. A simple 'number of failed checks' is another common approach (e.g. linter-type semantics).

So the question is really whether the pattern is the scoring grid, or whether the pattern is the scoring. If the latter then allow the grid to be one of a few different examples rather than leading the pattern in the title.

**Collaborator @robtuley** commented (Sep 13, 2021):

> when there exists a larger portfolio

On this topic, I wince a bit as this pattern references "poor engineering leadership", perhaps because I have needed this pattern as an engineering leader *chuckle*. IMO this pattern becomes necessary (and its use a signal of good engineering leadership) based on the size of the portfolio. As the number of repositories scales up... how do you govern any standards that you have? Well, this.

So it might well be poor engineering leadership, but it's also often a successful org in a growth stage where the growing scale requires previously unwritten rules to be formalised, or previously written rules/principles to be actually governed.


## Patlet
**Member** commented:

The Patlet is meant to allow readers to quickly scan a lot of Problem-Solution pairs, to find the things relevant for their orgs.

If we were to rewrite this Patlet as two fairly short sentences, what would that look like?
1st sentence: Problem
2nd sentence: Solution


Objectively improve the quality of code repositories across an organisation by introducing a Scoring Grid, which can be used to grade the state of existing code bases against InnerSource best practice.

By providing a grading system, the Scoring Grid can lay out a path that engineers can follow to develop greenfield projects into mature InnerSource communities with strong ownership through incremental improvement and rescoring.

## Problem

Organisations are littered with repositories that don't have README files, or whose READMEs are outdated and contain obvious errors; projects have no CI integration, no release instructions, badly defined licenses, no PR templates, and so on. Don't let perfect be the enemy of good - software projects get started with little to no thought about their long term maintenance, and a standard plan to refine these projects and make them good is needed for the sanity and health of the engineering teams.

A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, inner source project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.
**Collaborator** commented:

You reference "new project" here, and "greenfield" in L9. In our experience this pattern is just as valuable when transitioning a mature, well maintained project from closed source -> inner source when suddenly a bunch of new things become important so might be worth generalising the problem.

**Member** commented:

Suggested change
A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, inner source project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.
A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, InnerSource project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.

spelling InnerSource

**Member** commented:

Besides the spelling fix, this paragraph looks like it belongs in the Solution section rather than Problem?

**Author** commented:

Agreed, and in fact I'm applying the scoring to a mature 10 year old repository at the moment - so maybe "new project" can be reworded to "new, poorly maintained, or stale project".


## Story

Company A, well established engineering department, with departmental standards - over 200 repositories used by 100 or so engineers, teams were asked to self-score on code quality - this led to an expanded definition of "What good looks like" for a software project under ownership and maintenance within the department. This expanded definition used InnerSource guidance to produce a scoring grid that InnerSource advocates could use to share and promote best practice across community members.
**Member** commented:

Love a great story :)

Suggested change
Company A, well established engineering department, with departmental standards - over 200 repositories used by 100 or so engineers, teams were asked to self-score on code quality - this led to an expanded definition of "What good looks like" for a software project under ownership and maintenance within the department. This expanded definition used InnerSource guidance to produce a scoring grid that InnerSource advocates could use to share and promote best practice across community members.
Company A has a well established engineering department, with departmental standards - over 200 repositories used by 100 or so engineers.
At some point teams were asked to self-score on code quality - this led to an expanded definition of "What good looks like" for a software project under ownership and maintenance within the department.
This expanded definition used InnerSource guidance to produce a scoring grid that InnerSource advocates could use to share and promote best practices with all community members.

I tried to shorten the first sentence. While doing so I broke up the paragraph into one sentence per line, for easier feedback.

What do you mean by "departmental standards"?

Also, why were teams asked to self-score? What was the driving force behind this?


After several years I left Company A to join Company B. In my first week I saw numerous and obvious errors in README files, poor code quality, weak ownership, and out-of-date build and release information. I craved a public version of the InnerSource grid used at Company A and decided to create my own grid based on measures important to me; highlighting all the bad and good aspects of software development as a method to inspire and encourage change within the engineering function of my new company. This grid was rapidly highlighted and shared by Engineering Managers with different teams, and became a useful conversation tool for "What does good look like?".

At Company B, we then tried to go the next step, and are trying to use automated measures outlined in our Scoring Grid to score the 600+ repositories at our organisation, by providing nightly feedback in an interactive dashboard, so that teams and engineering managers can improve the maintainability and health of their projects.
**Member** commented:

Suggested change
At Company B, we then tried to go the next step, and are trying to use automated measures outlined in our Scoring Grid to score the 600+ repositories at our organisation, by providing nightly feedback in an interactive dashboard, so that teams and engineering managers can improve the maintainability and health of their projects.
At Company B, we then tried to use the measures outlined in our Scoring Grid to score the 600+ repositories at our organisation programmatically.
By providing feedback in an interactive dashboard we enable teams and engineering managers to improve the maintainability and health of their projects.

Again breaking things up into multiple sentences.
I think we could shorten this block a bit without losing much information.


## Context

Where does the problem exist?

- An organisation that has a legacy of code repositories - assuming teams are using source control to manage code change
- An organisation where engineers are free to create new repositories, but there is no guidance in place for long term maintenance
- An individual who has built lots of projects or libraries, and is struggling to maintain and remember build and release information - they need support, but don't have the words to justify the engineering effort

What are the pre-conditions?

- A large number of poorly maintained, legacy, or "abandoned" code bases
- Teams are unhappy working on a codebase, expending lots of manual effort running complex commands and following tribal release knowledge - i.e. they don't have highly automated, well documented, high confidence release processes
- A measure of "mounting tech debt"

## Forces

- Poor Engineering Leadership
- No standards for "What does good look like?"
- Product management asking to go faster

It takes time to introduce a scoring grid, do the scoring, and prioritise improvements, but the benefits should be self-evident: the prioritised improvements reduce the delivery time of all future work.

Not everyone will agree with the measures used in the scoring grid, but this can be a conversation starter about "What does good look like?" leading to a shared vision - written out explicitly - which will help move teams to action.

By agreeing to and communicating a Scoring Grid; the engineering management is providing leadership and guidance to all engineers by tacitly saying "these things matter", "it's your responsibility to improve along this path".
**Member** commented:

Suggested change
By agreeing to and communicating a Scoring Grid; the engineering management is providing leadership and guidance to all engineers by tacitly saying "these things matter", "it's your responsibility to improve along this path".
By agreeing to and communicating a Scoring Grid, the engineering management is providing leadership and guidance to all engineers by tacitly saying "these things matter", "it's your responsibility to improve along this path".

grammar fix


## Solutions

Example Scoring Grid for Company B (Introduced January 2021) - should be applicable for any Open Source, Inner Source, or private repository using Git tools that support CODEOWNERS, PRs, and PR Templates. Feel free to re-grade based on your company's best practice, or add additional areas based on weak points. Example grading areas might include CODE QUALITY, TESTING, CODE COVERAGE, TEST STRATEGY.
**Member** commented:

Suggested change
Example Scoring Grid for Company B (Introduced January 2021) - should be applicable for any Open Source, Inner Source, or private repository using Git tools that support CODEOWNERS, PRs, and PR Templates. Feel free to re-grade based on your company's best practice, or add additional areas based on weak points. Example grading areas might include CODE QUALITY, TESTING, CODE COVERAGE, TEST STRATEGY.
Example Scoring Grid for Company B (introduced January 2021) - should be applicable for any Open Source, InnerSource, or private repository using Git tools that support CODEOWNERS, PRs, and PR Templates. Feel free to re-grade based on your company's best practice, or add additional areas based on weak points. Example grading areas might include CODE QUALITY, TESTING, CODE COVERAGE, TEST STRATEGY.

spelling InnerSource


| Area / Grade | Grade F | Grade C | Grade B | Grade A |
|:---:|:---:|:---:|:---:|:---:|
| README | No README file; an empty README; README contains obvious and distracting errors and has become stale. | README file exists in codebase; contains basic setup and test instructions; instructions are correct and up to date. | README contains accurate and automated instructions for setup, testing, build, and deployment - no local environment setup is required. | README contains: mission, getting started guide, user documentation, who we are, our communication channels, link to contribution guide. |
| CONTRIBUTING | No CONTRIBUTING file; no clear owner for project. | CONTRIBUTING file exists and is linked from README; contains basic setup and test instructions; instructions are correct. | CONTRIBUTING contains accurate and automated instructions on how to setup a local dev environment, test, and raise PRs. | CONTRIBUTING provides structural overview of code; guidance on how to make modifications; fully automated: setup, testing, build, and deployment info; how to raise a PR, and time to expect for reviews. |
| LICENSE | No LICENSE file; obviously incorrect license info; e.g. default ISC for commercially sensitive repo; mismatch between license fields. | Appropriate LICENSE has been chosen; but the information is spread across multiple files e.g. at the headers of code files. | Appropriate LICENSE file exists; and is consistently referenced in meta data such as README or package.json file. | Appropriate LICENSE file exists; additional information is provided, and thought has been given to the dependencies; there’s a way to verify the tree of license dependencies. |
| PULL REQUESTS | No PULL REQUEST template; PR standards are defined externally to project; PR standards not applicable to this project. | PULL REQUEST template exists; Advice for creating a PR exists in README or CONTRIBUTION files. | PULL REQUEST template(s) exist for different types of changes. Main branch locked down. Named reviewers required. | PULL REQUEST standards are enforced by automated CI checks such as Danger.js; PR Templates contain checklist. CODEOWNERS lists required reviewers. |
| CODEOWNERS | No CODEOWNERS file; ownership of repo unclear, or ownership belongs to a defunct team; contact details for team are outdated or incorrect. | CODEOWNERS file exists; named individuals required for review; main branch locked down; requires at least one person to provide PR review. | CODEOWNERS file exists; named team required for review; main branch locked down requiring PR review | CODEOWNERS file exists; named team required for review, main branch locked down requiring PR review; accurate team contact details available in README. |
| CONTINUOUS INTEGRATION | No Continuous Integration setup; builds are manual, and are transferred directly from developer laptop to production environment. Main branch fails tests. | Continuous Integration exists; requires manual triggers. Process not checked into source control. No PR integration. | Continuous Integration is well integrated; PRs automatically checked and verified. Process scripted as part of code base. | Continuous Integration is fully automated; high trust test suite; features on main branch are automatically rolled out to production; use of canary builds; feature toggles. |
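
To make the grid concrete, below is a minimal sketch (an illustration added here, not part of the original grid) of how the file-existence parts of the grid could be checked automatically. The file names follow common GitHub conventions; the simple rule of "artifact present means a provisional Grade C, missing means Grade F" is an assumption for illustration, since the higher grades require human judgement.

```python
# Minimal sketch (illustrative assumption, not part of the original pattern):
# grade a local checkout on artifact presence only. "Exists" is treated as a
# proxy for Grade C; Grades B and A in the grid need human judgement.
from pathlib import Path

# One representative artifact per grid area; names follow common GitHub conventions.
AREA_FILES = {
    "README": ["README.md", "README"],
    "CONTRIBUTING": ["CONTRIBUTING.md", ".github/CONTRIBUTING.md"],
    "LICENSE": ["LICENSE", "LICENSE.md"],
    "PULL REQUESTS": [".github/PULL_REQUEST_TEMPLATE.md"],
    "CODEOWNERS": ["CODEOWNERS", ".github/CODEOWNERS"],
    "CONTINUOUS INTEGRATION": [".github/workflows", ".gitlab-ci.yml"],
}

def score_repository(repo_path: str) -> dict:
    """Return a provisional grade per area: C if the artifact exists, F if not."""
    repo = Path(repo_path)
    return {
        area: "C" if any((repo / name).exists() for name in names) else "F"
        for area, names in AREA_FILES.items()
    }

if __name__ == "__main__":
    import sys
    for area, grade in score_repository(sys.argv[1]).items():
        print(f"{area:24} {grade}")
```

Run against a single checkout (e.g. `python scoring.py path/to/repo`, assuming the sketch is saved as `scoring.py`), this prints a provisional grade per area, which a team can then refine by hand against the full grid.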

Apply the grid against one or more repositories; deciding on a score. This can be done indiviually using an expert, or as team as a retrospective discussion.
**Member** commented:

Suggested change
Apply the grid against one or more repositories; deciding on a score. This can be done indiviually using an expert, or as team as a retrospective discussion.
Apply the grid against one or more repositories; deciding on a score. This can be done individually using an expert, or as team as a retrospective discussion.

What do you mean by "individually using an expert"?


Based on the score, look to the next column, and identify actions that would lead to an improved grade.
**Member** commented:

Suggested change
Based on the score, look to the next column, and identify actions that would lead to an improved grade.
Based on the score, look to the next column to the right, and identify actions that would lead to an improved grade.


Agree with the team to rescore on a regular basis, say weekly, monthly, or quarterly, based on available capacity.

Make time for teams to action improvements; ideally make the work part of their normal repsonsibilities, visible on shared work boards as work tickets.
**Member** commented:

Suggested change
Make time for teams to action improvements; ideally make the work part of their normal repsonsibilities, visible on shared work boards as work tickets.
Make time for teams to action improvements; ideally make the work part of their normal responsibilities, visible on shared work boards as work tickets.


## Resulting Context

- Scoring Grid communicated to all engineering teams and individual contributors
- Implicit understanding of "What good looks like?" at the repository / project level
- Improved perception of Engineering Leadership from Engineers
- Incremental prioritisation of tech debt aimed at improving the quality of code and code bases
- Possible automation of scoring grid; applied daily/nightly to all codebases (see the sketch after this list)
- Possible "competitive ranking" of code bases, leading to rewards for following best practice

## Known Instances

Where has this been seen before?

- Company A - 100 strong co-located engineering team with well established InnerSource community - used as guidance for all teams, and published as part of internal InnerSource website - exact grid not published in this pattern
**Member** commented:

Suggested change
- Company A - 100 strong co-located engineering team with well established InnerSource community - used as guidance for all teams, and published as part of internal InnerSource website - exact grid not published in this pattern
- Company A (100 strong co-located engineering team with well established InnerSource community): Used as guidance for all teams, and published as part of internal InnerSource website (exact grid not published in this pattern)

Suggesting a slightly different format for this sentence (also using the same format for the next sentence).

- Company B - 200 disparate engineers spread across multiple timezones, immature processes and practices - used by engineering managers to communicate a vision for "what good looks like", and used to prioritise engineering led initiatives balanced against product led initatives.
**Member** commented:

Suggested change
- Company B - 200 disparate engineers spread across multiple timezones, immature processes and practices - used by engineering managers to communicate a vision for "what good looks like", and used to prioritise engineering led initiatives balanced against product led initatives.
- Company B (200 disparate engineers spread across multiple timezones, immature processes and practices): Used by engineering managers to communicate a vision for "what good looks like", and to balance engineering-led initiatives against product-led initiatives.

Rephrasing the last part a bit.


## Status

Initial Draft - waiting on review / public commentary

> General pattern status is stored in GitHub's Label tagging - see any pull request.
> Note that this GitHub label tagging becomes less visible once the pattern is finalized and merged, so having some information in this field is helpful.
>
> You might store other related info here, such as review history: "Three of us reviewed this on 2/5/17 and it needs expertise before it can go further."

## Author(s)

- John Beech
- TBC (See Acknowledgements)

## Acknowledgements

- Sebastian Spier - initial tracking, and nudge to create this into a pattern

>*Note from John Beech*: If I've missed anyone, or any former colleagues would like to take credit for this pattern, please reach out. My version is based on a memory of a similar grid, folding in my own experience of software development through multiple roles, and my own private open source contributions.

## Related Patterns

**Member** commented:

Suggested change
- [Standard Base Documentation](../2-structured/project-setup/base-documentation.md) expands on the importance of `README.md`/`CONTRIBUTING.md` and provides templates for those files.

Adding one more example of a pattern that is related.

It could further be helpful to not only list the pattern title and link, but also a 1 sentence explanation about how the referenced pattern is related to the Scoring Grid.

**Member** commented:

FYI I already added the links to the patterns below.

Keeping this thread open so that you can review if you want to add the "Standard Base Documentation" pattern to the list or not.

- [Maturity Model](../2-structured/maturity-model.md)
- [Unified Source Code Inventory](../1-initial/source-code-inventory.md)
- [Good First Project](../1-initial/good-first-project.md)
- [Change the Developer's Mindset](../1-initial/change-the-developers-mindset.md)
- [Change the Middle Management Mindset](../1-initial/change-the-middle-management-mindset.md)