How the "impact" score is calculated

Christiaan Verwijs edited this page Oct 6, 2023 · 8 revisions

The Team Report and Teams Dashboard show an "Impact" score for each tip under "Tips". An example is shown below:

[Image: a tip in the Team Report with its "Impact" score]

What is the purpose of the "Impact" score?

Where do you start improving? We created our platform to make this easier for teams, based on data and great conversations. However, even our model offers over twenty different areas for improvement. Particularly when teams are just getting started, there may be many areas of the model where improvement is useful and relevant. To avoid overwhelming teams, we order the feedback under "Tips" by its expected impact, on a range from 100 (the top tip) down to (close to) 0.

So "Impact" is important for the ordering of feedback. Two things are important here:

  1. The "Impact" score is just a rough approximation based on the data your team provided. If you feel your team should improve elsewhere, definitely do so. However, there is reasoning behind how we calculate the "Impact" score that may help you find overlooked areas (see below).
  2. We are still learning how to improve and tweak the calculations behind this score. So the algorithm may change over time.

How is "Impact" calculated in principle?

The simplest way to calculate the impact would be to identify the area with the lowest relative score and assign that a score of 100. While that would allow prioritization, it is also quite one-dimensional. Take this example of a Team Report:

[Image: an example Team Report with many low-scoring (red) factors]

This team clearly has a lot of work to do. Many areas are red and indicate that things are getting worse. A team may be inclined to start improving in "Stakeholder Concern" or "Responsiveness" because both of those factors are very recognizable. However, our model proposes that there is a structure in these processes. While "Stakeholder Concern" and "Responsiveness" are top-level factors that shape team effectiveness, they need a solid foundation of "Continuous Improvement" and "Team Autonomy" to prosper. In turn, "Management Support" creates the foundation for all of it.

This is why the first tip for this team (with Impact 100) is to work on management support first. Our model predicts that if teams are able to increase management support, it is probable that team autonomy and continuous improvement can improve, which in turn can boost the ability of teams to work with stakeholders and be responsive. This reveals that our algorithm looks beneath the initial symptoms and tries to pinpoint a deeper "cause". This is also why factors such as "Management Support", "Team Autonomy" and "Continuous Improvement" tend to have high impact scores; they really are foundational.
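The idea of tracing symptoms back to a deeper cause can be sketched as a small dependency graph. This is an illustrative sketch only: the factor names come from the example above, but the shape of the graph is simplified and the real model contains over twenty factors.

```python
# A simplified, illustrative sketch: each factor points to the factors it
# depends on, and tracing those arrows back reveals the deeper "cause".
# The exact graph shape here is an assumption; the real model is larger.
MODEL = {
    "Team Effectiveness": ["Stakeholder Concern", "Responsiveness"],
    "Stakeholder Concern": ["Continuous Improvement", "Team Autonomy"],
    "Responsiveness": ["Continuous Improvement", "Team Autonomy"],
    "Continuous Improvement": ["Management Support"],
    "Team Autonomy": ["Management Support"],
    "Management Support": [],
}

def foundations(factor, model):
    """Collect every factor that directly or indirectly feeds into `factor`."""
    found = []
    for upstream in model[factor]:
        if upstream not in found:
            found.append(upstream)
        for deeper in foundations(upstream, model):
            if deeper not in found:
                found.append(deeper)
    return found

print(foundations("Team Effectiveness", MODEL))
```

Tracing back from "Team Effectiveness" in this sketch always ends at "Management Support", which illustrates why foundational factors tend to receive high impact scores.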

How is "Impact" calculated in practice?

  1. We begin with the top-level factor "Team Effectiveness".
  2. For "Team Effectiveness", we identify all factors that have a direct arrow to it in our model (e.g. "Stakeholder Concern", "Responsiveness") and collect their most recent scores for a team or multiple teams.
  3. We calculate the difference (delta) for each factor identified under 2. If a previous snapshot is available for a team, we calculate the difference as (current score - previous score). Otherwise, we use the benchmark (current score - benchmark score).
  4. We calculate the actual impact score for the factors identified under 2 from their deltas. We multiply each delta by the empirical standardized effect sizes we found in our studies. For example, we know that "Team Effectiveness" goes up by 0.53 points when "Stakeholder Concern" goes up by 1 point, so we multiply the delta by (1 + effect size). This effectively means that our algorithm takes into consideration the observed empirical effect in the full sample of our study (~2,000 teams).
  5. We then repeat steps 2-4 by recursing through all preceding factors. So for "Team Effectiveness", we go through "Stakeholder Concern", "Responsiveness", and also "Team Morale" and "Stakeholder Satisfaction". We then recurse further back by looking at the factors that have arrows to "Stakeholder Concern", "Responsiveness", and so on. The end result is a list of all factors in our model, each with an impact score that represents how much impact it would (very, very roughly) have on team effectiveness if it were to improve.
  6. Finally, we also apply a primacy bonus to all factors. Basically, factors that come earlier in the recursion weigh heavier in our calculations.
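The steps above can be sketched in code. Everything in this sketch is a hedged approximation: the effect sizes, example scores, and the 0.9 primacy decay are made-up placeholders, not the platform's real data, and the actual algorithm may differ in its details.

```python
# A rough, illustrative sketch of steps 1-6. The graph, effect sizes,
# example scores, and the 0.9 primacy decay are placeholders.

# Upstream factors of each factor, with hypothetical standardized effect sizes.
EFFECTS = {
    "Team Effectiveness": {"Stakeholder Concern": 0.53, "Responsiveness": 0.40},
    "Stakeholder Concern": {"Team Autonomy": 0.45},
    "Responsiveness": {"Continuous Improvement": 0.50},
    "Team Autonomy": {"Management Support": 0.60},
    "Continuous Improvement": {"Management Support": 0.55},
    "Management Support": {},
}

def deltas(current, previous=None, benchmark=None):
    """Step 3: (current - previous) if a snapshot exists, else (current - benchmark)."""
    base = previous if previous is not None else benchmark
    return {factor: current[factor] - base[factor] for factor in current}

def impact_scores(factor, delta, effects, depth=0, scores=None, decay=0.9):
    """Steps 2, 4, 5, 6: recurse through all preceding factors, weight each
    delta by (1 + effect size), and give earlier factors a primacy bonus."""
    if scores is None:
        scores = {}
    for upstream, effect in effects[factor].items():
        # Assumption: a more negative delta means more room for improvement,
        # so we flip the sign to turn "doing badly" into "high impact".
        raw = -delta[upstream] * (1 + effect)
        scores[upstream] = scores.get(upstream, 0.0) + raw * decay ** depth
        impact_scores(upstream, delta, effects, depth + 1, scores, decay)
    return scores

def normalized(scores):
    """Scale so the top tip lands at 100 (assumes at least one positive score)."""
    top = max(scores.values())
    return {factor: round(100 * score / top) for factor, score in scores.items()}

# Hypothetical example scores (no previous snapshot, so we use a benchmark).
current = {"Team Effectiveness": 5.0, "Stakeholder Concern": 4.0,
           "Responsiveness": 4.5, "Team Autonomy": 4.2,
           "Continuous Improvement": 3.8, "Management Support": 3.0}
benchmark = {factor: 5.0 for factor in current}

ranked = normalized(impact_scores("Team Effectiveness",
                                  deltas(current, benchmark=benchmark), EFFECTS))
print(ranked)  # "Management Support" comes out on top with 100
```

Note how the foundational factor wins even though it is not a direct predecessor of "Team Effectiveness": its effects accumulate along every path through the graph, which is the behavior the description above aims for.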

The above is a rough description of the algorithm. It provides:

  • A prediction of expected impact that is based on empirically observed effects from ~2,000 other teams
  • A prediction that takes into account the areas where you need the most work

We are still figuring out how to improve and tweak this algorithm, but we hope it already offers some helpful guidance on where to start.