Problem you are facing
The mean is a poor metric for summarising scores from multiple sources, particularly with small numbers of reviewers, since it allows a single reviewer who either hates or loves something to completely swing the score.
Possible Solution
A better metric is the median.
An alternative is a trimmed mean: the mean after dropping the best and worst scores.
Context
I was looking at which talks to accept for our conference.
Our nominal cutoff is a score of 4, and I was double-checking those that were borderline.
The first one I checked was scored 5, 4, 4, 4, 2, giving a mean of 3.8.
I'd rather be ignoring that 2 and 5 and seeing it summarised as 4.
Then, when sorting, it would have appeared in the "definitely accept" area,
not the "borderline" one.