Results #34
Comments
Does this test for an accepted results vocabulary? This should be a requirement.
Disagree with the requirement for a results vocabulary. This is not mandatory in the IATI standard. The fact that the results vocabulary codelist contains a single value - 99 - shows that there isn't consensus on what common vocabularies might be.
There is no common vocabulary, so requesting a link seems like the best approach.
I agree with @YohannaLoucheur
Agree with @YohannaLoucheur. In IATI there are no common vocabularies used for results. Requiring the single value 99 doesn't add any value to this test.
Observation: The details of this automated test do not match some of the information in the 2024 Methodology document (page 52):
The test posted here only centres on the activity being "current", as defined in #1 (note: is "ongoing" in this text the same as "current"?). The additional states of recently started or ended projects are not accounted for in the test, imho. Secondly, this detail from the Methodology raises a question: how are results then checked? The automated test only looks for evidence of the results element.
Agree with @Yohanna here - we do not think we can look for a results vocabulary, considering the wide range of ways results are currently published.
Many thanks - apologies for starting a flurry, but I did intend to say the test should be for an accepted indicator vocabulary. As @stevieflow said, how are results being checked? We noted cases of subjective yes/no questions as indicators that were accepted as "results" disclosure, e.g. a 'Was there development impact?' Yes/No result. Does the methodology in the manual sampling address this?
@stevieflow and @blundstrom thanks for your comments. @stevieflow, you're right, the automated test does not pick up the 18-month period in which we do not expect actuals against results targets. We expect the results element to be included in all implementation, finalisation and closed activities. We do not expect actuals in new activities (<18m old), but would expect the element to be present with results targets. We pick this up in the manual sampling and fail those activities >18m old that do not have actual results. @blundstrom - can you provide us with any examples? We check the results when we carry out manual sampling and would expect meaningful results to be provided. However, there are no standardised approaches to reporting results across the aid and development organisations we assess in the Aid Transparency Index, so we need to make judgements on a case-by-case basis. Results should meet the definition of the indicator from the Index Technical Paper:
Description
The results show whether activities achieved their intended outputs in accordance with the stated goals or plans.
This is only expected if the activity is in the implementation, completion or post-completion phase.
This is not expected if the default aid type code is administrative costs (G01).
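The two conditions above can be expressed as a simple eligibility filter. The following is an illustrative sketch only, not the Index's actual test code: it assumes the IATI activity-status codelist (2 = Implementation, 3 = Finalisation, 4 = Closed) corresponds to the "implementation, completion or post-completion" phases named above, and that the aid type is read from the code attribute of a default-aid-type element.

```python
import xml.etree.ElementTree as ET

# Assumed mapping of the phases above to IATI activity-status codes:
# 2 = Implementation, 3 = Finalisation, 4 = Closed
RESULTS_EXPECTED_STATUSES = {"2", "3", "4"}

def results_expected(activity_xml: str) -> bool:
    """Return True if a results element is expected for this activity."""
    activity = ET.fromstring(activity_xml)
    status = activity.find("activity-status")
    aid_type = activity.find("default-aid-type")

    in_phase = status is not None and status.get("code") in RESULTS_EXPECTED_STATUSES
    # G01 = administrative costs, excluded per the description above
    admin_costs = aid_type is not None and aid_type.get("code") == "G01"
    return in_phase and not admin_costs
```

An activity in implementation without a G01 aid type would pass the filter; pipeline activities and administrative-cost activities would be excluded from the check.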
Current test
Firstly (results data):
Secondly (results documents):
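Taken together with the 18-month rule discussed in the comments, the results-data expectation could be sketched roughly as follows. This is a hypothetical illustration, not the actual automated test: the element paths follow the IATI 2.0x activity schema (result/indicator/period/target and actual; activity-date type="2" is the actual start date), and the 18-month threshold is taken from the comment above.

```python
from datetime import date
import xml.etree.ElementTree as ET

# Assumed threshold from the discussion: actuals are only expected
# once an activity is more than 18 months old.
MONTHS_BEFORE_ACTUALS_EXPECTED = 18

def check_results(activity_xml: str, today: date) -> dict:
    """Report which results expectations an <iati-activity> meets."""
    activity = ET.fromstring(activity_xml)

    has_target = activity.find("./result/indicator/period/target") is not None
    has_actual = activity.find("./result/indicator/period/actual") is not None

    # type="2" is the actual start date in the IATI activity-date codelist
    start = activity.find("./activity-date[@type='2']")
    months_old = None
    if start is not None:
        d = date.fromisoformat(start.get("iso-date"))
        months_old = (today.year - d.year) * 12 + (today.month - d.month)

    actuals_expected = (months_old is not None
                        and months_old > MONTHS_BEFORE_ACTUALS_EXPECTED)
    return {
        "has_results_targets": has_target,
        "actuals_expected": actuals_expected,
        "actuals_present_when_expected": has_actual or not actuals_expected,
    }
```

Under this sketch, an activity started three years ago with only targets would be flagged as missing actuals, while a six-month-old activity with targets but no actuals would pass, matching the manual-sampling approach described in the comments.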