IEF is a framework that standardizes the use of attestations to generate data points for measuring the impact of agents in the web3 ecosystem.
Leveraging EAS (the Ethereum Attestation Service), IEF proposes an opt-in, standardized series of attestations that organizations can use to produce comparable data points and track their impact in the web3 ecosystem. By generating data on the actions being carried out, projects can trace their work on-chain and let funders verify their impact.
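As a rough illustration of what an IEF-style, action-based attestation could look like, here is a minimal sketch using the EAS TypeScript SDK. The schema string, schema UID, and field names are hypothetical placeholders, not part of IEF or EAS itself.

```typescript
// Minimal sketch using the EAS TypeScript SDK (@ethereum-attestation-service/eas-sdk).
// The schema, its UID, and the field names below are hypothetical examples.
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { ethers } from "ethers";

const EAS_CONTRACT = "0x4200000000000000000000000000000000000021"; // EAS predeploy on OP Mainnet
const ACTION_SCHEMA_UID = "0x..."; // placeholder: UID of a registered IEF action schema

async function attestAction(signer: ethers.Signer, recipient: string) {
  const eas = new EAS(EAS_CONTRACT);
  eas.connect(signer);

  // Hypothetical "action" schema: what was done, in which area, and when.
  const encoder = new SchemaEncoder("string actionType,string areaOfAction,uint64 completedAt");
  const data = encoder.encodeData([
    { name: "actionType", value: "workshop-delivered", type: "string" },
    { name: "areaOfAction", value: "education", type: "string" },
    { name: "completedAt", value: Math.floor(Date.now() / 1000), type: "uint64" },
  ]);

  const tx = await eas.attest({
    schema: ACTION_SCHEMA_UID,
    data: { recipient, expirationTime: 0n, revocable: true, data }, // 0n = no expiration
  });
  return tx.wait(); // resolves to the new attestation UID
}
```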
Measuring the impact generated by public goods projects in web3 is hard.
This obstacle was evident during Optimism's RetroPGF Round 2, where measuring impact was a challenge both for nominees (who didn't know how to measure their impact for others to see) and for badgeholders (who didn't know how to assess the impact a project had).
An inability to measure the impact a public goods project creates has ripple effects that harm the Ethereum ecosystem's ability to thrive in the future:
Projects that are unable to measure and communicate the impact of the work they have done will leave money on the table that badgeholders would have been willing to commit given the right information. Taken to the extreme, if not enough resources are allocated to a project, the public good being provided by it may cease to exist.
In the second round of RetroPGF, badgeholders were given limited guidance on how to assess the impact of the proposed projects. This created a two-sided problem.
On one hand, the badgeholders, responsible for evaluating the projects, encountered difficulties when they had to assess projects in fields that were outside of their areas of expertise or familiarity. Without sufficient resources or context-specific knowledge, their ability to thoroughly evaluate the effectiveness and potential impact of these projects was impaired.
On the other hand, and perhaps more critically, the nominees who proposed these projects faced top-down evaluations that might not have taken into account Key Performance Indicators (KPIs) significant to their on-the-ground operations. This issue stemmed from a potential disconnect between the evaluators' perspectives and the realities of hands-on fieldwork.
There are specific indicators and factors that are vitally important and unique to ground-level work, which might not be visible, known, or may even be dismissed as irrelevant from the standpoint of someone who hasn't been involved in such in-situ operations. Consequently, these evaluations might have overlooked some crucial aspects of the projects, thereby affecting the comprehensiveness and accuracy of the assessments.
During the RetroPGF process, the limited amount of information supplied to badgeholders posed a significant challenge. It necessitated an increased commitment of time as badgeholders needed to liaise with their counterparts to determine an effective approach for assessing the impact of projects. They were compelled to sift through the provided information, identify any gaps in data, and deliberate over which metrics should be the determining factors in the allocation of votes.
This process was time-consuming and could become unmanageable as the number of projects involved in RetroPGF grows. RetroPGF is designed, and expected, to attract more participating projects; with the current system, however, that growth may result in an overwhelming workload for badgeholders. They may find it increasingly challenging to thoroughly review each project and thoughtfully distribute their votes due to time constraints.
As it stands, without more comprehensive guidelines and support, the expanding scope of RetroPGF might exceed the badgeholders' capacity to perform careful, conscientious evaluations, compromising the effectiveness and fairness of the entire process. There is an urgent need to streamline and enhance the evaluation framework to ensure its scalability and efficacy as the initiative grows.
A simple but currently missing foundation
IEF proposes a 3-legged approach to address the challenge of measuring the impact of public goods projects applying for public goods funding. This process is only possible with the creation and upkeep of an Impact Evaluation Framework that provides opt-in guidelines on:

- how the KPIs used to measure impact are created and measured (at a technical level);
- a non-exhaustive set of action-based attestations commonly used by projects to generate data on their work, and the logic behind those data points (these can serve as inspiration for creating new data points and, therefore, new KPIs);
- the referencing and interweaving of attestations to enable not only output but outcome identification (see the sketch below);
- the importance of empowering operators to self-assess;
- guidelines for generating both quantitative and qualitative evaluations;
- the dangers of over-measuring.
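The referencing point above can lean on EAS's built-in reference field: every attestation carries a `refUID`, so an outcome attestation can point back at the output or action attestation it builds on. A minimal sketch, using a hypothetical outcome schema and example values:

```typescript
// Sketch of "interweaving": an outcome attestation that references a prior
// output/action attestation via EAS's refUID field. Schema and fields are hypothetical.
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";

const OUTCOME_SCHEMA_UID = "0x..."; // placeholder: UID of a registered IEF outcome schema

async function attestOutcome(eas: EAS, recipient: string, outputAttestationUID: string) {
  const encoder = new SchemaEncoder("string outcome,string evidenceURI");
  const data = encoder.encodeData([
    { name: "outcome", value: "30 participants deployed their first contract", type: "string" },
    { name: "evidenceURI", value: "ipfs://...", type: "string" },
  ]);

  const tx = await eas.attest({
    schema: OUTCOME_SCHEMA_UID,
    data: {
      recipient,
      expirationTime: 0n,
      revocable: true,
      refUID: outputAttestationUID, // links this outcome back to the output it builds on
      data,
    },
  });
  return tx.wait();
}
```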
Leveraging this Impact Evaluation Framework, we suggest the following process to evaluate impact:
Stage | Name | Description |
---|---|---|
1 | Attest | Organizations select from a menu of areas of action that best fit their work. Through an organized and limited set of options, organizations that are resource-constrained or less technically savvy can easily generate their own attestations. |
2 | Measure | Once organizations are familiar with creating attestations, and have been using them at length for different milestones, they will be able to produce complex measurements and verifiable qualitative descriptions of the impact their projects have generated. |
3 | Verify | Time to apply for RetroPGF, the next Gitcoin round, or to share your project's development on Giveth? We've got you: obtain statistics based on your issued and received attestations and showcase your up-to-date impact in the ecosystem (see the sketch after this table). |
4 | Compare | Receive the funding you deserve based on your impact. With a standardized method of impact measurement, explaining your impact to third parties becomes easier, as does demonstrating your advantages over other players in the ecosystem. Understand how you can improve, or share your best practices with others. |
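For the Verify stage, one way to obtain those statistics is to query an EAS indexer (e.g. the public GraphQL endpoint at easscan.org) for the attestations a project has received under a given schema and tally them. The endpoint, filter shape, and field names below follow that indexer's public schema as an assumption and may need adjusting:

```typescript
// Sketch: count non-revoked attestations received by a project for a given schema,
// using the public EAS GraphQL indexer. Endpoint and query shape are assumptions.
const EASSCAN_GRAPHQL = "https://optimism.easscan.org/graphql";

async function countReceivedAttestations(recipient: string, schemaId: string): Promise<number> {
  const query = `
    query ($recipient: String!, $schemaId: String!) {
      attestations(
        where: {
          recipient: { equals: $recipient }
          schemaId: { equals: $schemaId }
          revoked: { equals: false }
        }
      ) {
        id
        attester
        time
      }
    }`;

  const res = await fetch(EASSCAN_GRAPHQL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { recipient, schemaId } }),
  });
  const { data } = await res.json();
  return data.attestations.length;
}
```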
- Explore batch migration of POAP information to attestations (see the sketch after this list).
- Increase the areas of evaluation available on the site.
- Increase the number of schemas per subject area.
- Generate ad-hoc standardized schemas for private data.
- Feed data to comparison infrastructure such as Pairwise.
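For the POAP migration item above, EAS's batch attestation call is a natural fit. The sketch below assumes a hypothetical list of already-fetched POAP records and a hypothetical participation schema; it is an illustration of the shape of such a migration, not a worked-out path.

```typescript
// Sketch of batch-migrating POAP records into attestations with eas.multiAttest.
// The PoapRecord shape and the schema are hypothetical assumptions.
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";

interface PoapRecord {
  owner: string;    // address that holds the POAP
  eventId: number;  // POAP event id
  mintedAt: number; // unix timestamp
}

const PARTICIPATION_SCHEMA_UID = "0x..."; // placeholder: a registered participation schema

async function migratePoaps(eas: EAS, poaps: PoapRecord[]) {
  const encoder = new SchemaEncoder("uint256 eventId,uint64 mintedAt");

  const tx = await eas.multiAttest([
    {
      schema: PARTICIPATION_SCHEMA_UID,
      data: poaps.map((p) => ({
        recipient: p.owner,
        expirationTime: 0n,
        revocable: true,
        data: encoder.encodeData([
          { name: "eventId", value: p.eventId, type: "uint256" },
          { name: "mintedAt", value: p.mintedAt, type: "uint64" },
        ]),
      })),
    },
  ]);
  return tx.wait(); // resolves to the UIDs of the new attestations
}
```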
Having anonymous peer review may prevent collusion and retaliation, but it can also prevent the development of a healthy, nurturing community feedback loop that enhances projects' work and their improvement on previous iterations.
Therefore, I suggest running an A/B test in which one segment of evaluators is anonymous and another is not, to explore which group yields better results.
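As a minimal illustration of how such an A/B split could be assigned, the sketch below randomly partitions a list of evaluators into an anonymous arm and an identified arm. The evaluator shape and the even split are assumptions for illustration only.

```typescript
// Sketch: randomly assign evaluators to an anonymous or non-anonymous review arm.
// The Evaluator shape and the 50/50 split are illustrative assumptions.
interface Evaluator {
  address: string;
}

function assignReviewArms(evaluators: Evaluator[]): {
  anonymous: Evaluator[];
  identified: Evaluator[];
} {
  // Shuffle a copy (Fisher-Yates), then split it in half.
  const shuffled = [...evaluators];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const mid = Math.ceil(shuffled.length / 2);
  return { anonymous: shuffled.slice(0, mid), identified: shuffled.slice(mid) };
}
```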