Clarify the average score for checks #41
Comments
Hmm, yes, I think we should change the methodology. What do you suggest?
@jpmckinney please review the text; otherwise, the calculation was modified as we agreed. It's the percentage of passed checks out of the total number of passed and failed checks. Maybe this sentence should be changed too, as it's not an average anymore.
@yolile @pindec Are these new descriptions easy to interpret?
2 checks with aggregate score:
The pass rate of compiled releases against applicable checks.
I think "aggregate" and "applicable" are a bit ambiguous. It took me a bit of time to figure out how the new score is calculated. I agree that where there are, say, 7 checks, but only 2 are used to give the 'average' score, the current text is misleading. Would it be clearer to:
Suggested onscreen text:
Suggested explanation:
Or would it be clearer to also add the 'adjusted pass rates' (the basis of the adjusted/aggregated score) to the summary table?
+1 to this, but maybe
N/A here stands for not applicable. The check is available, but it's not applicable.

Why is "adjusted" clearer than "aggregate"? "Aggregate" at least suggests a combination of each check's pass rate. "Adjusted" suggests an adjustment, which seems like less information.

How about: "2 out of 7 checks report a pass rate of %X"

And then: "N/A results are excluded from the pass rate, which is: all passed / (all passed + all failed)."
I think four percentages per row would be too confusing.
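The pass-rate calculation discussed above can be sketched as follows. This is a minimal illustration, not Pelican's actual implementation; the function name and the representation of check results (`True`/`False`/`None`) are assumptions made for the example.

```python
from typing import Optional

def pass_rate(results: list) -> Optional[float]:
    """Pass rate over applicable checks: all passed / (all passed + all failed).

    Each result is True (passed), False (failed), or None (not applicable).
    N/A results are excluded from both numerator and denominator;
    returns None if no check is applicable.
    """
    applicable = [r for r in results if r is not None]
    if not applicable:
        return None
    return sum(applicable) / len(applicable)

# Example matching the discussion: 7 checks, but only 2 are applicable
# (1 passed, 1 failed, 5 N/A), so the pass rate is 0.5, not 1/7.
print(pass_rate([True, False, None, None, None, None, None]))  # 0.5
```

This shows why the onscreen text "2 out of 7 checks" plus the formula is clearer than calling the number an average over all checks.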
So to clarify: the check is available (checks are always available), but some data is not available, so the check is not applicable for all compiled releases (in this case 7%), is that correct?

Yes. The label on the detail page is indeed confusing. Created a new issue: https://github.com/open-contracting/pelican/issues/62
Okay, getting there :) My edit:
+1 :)
Forgot to reassign @hrubyjan, since the new text is now agreed as above.

Blocked by #104
On the compiled release page, for each type of check, there's:
We should add an info bubble to explain how the average is calculated.
Also, at the OCDS retreat, people wanted an explanation of the pass/fail numbers in the tables. We can perhaps describe these in the same way we describe the numbers for field-level checks.