Step 3: Waiting Lists

Initial Review

Once an Impact Estimate post is successfully submitted to the protocol, it has to undergo an initial review: a form of quality control in which a relatively small set of Validators checks the parameters of the post. These Validators are selected at random and have expertise in the categories specified in the Estimate; their combined expertise score is a small fraction of the total expertise required for a full validation. The initial review is a layer of defense that prevents Estimators from manipulating the system with misleading information (for example, by setting a low effort level for a project that actually requires more effort from Validators, or by choosing the wrong categories for a project).

Validators in the initial review can adjust the project categories and effort required fields, and can flag issues in other fields, but they cannot adjust the estimated impact score (or the associated validator fee). They assign a Credibility Score to the Estimate post, which is separate from the Credibility Score for the project itself, provided in the full validation. The Estimate post's Credibility Score is then used to prioritize the post on the relevant Waiting Lists. While it should generally be advantageous for an Estimator to post an Estimate as quickly as possible, both the post's Credibility Score and the risk of losing funds due to overestimation should push Estimators toward accurate estimates. If the Estimator underestimates a project, another Estimator can later amend the Estimate post and collect the difference in the post's impact score.
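As a rough illustration of the review flow described above, the sketch below shows how initial reviewers might be drawn at random until their combined expertise covers a small fraction of the expertise required for a full validation, and how the resulting Credibility Score could order posts on a Waiting List. All names and values here (InitialReview fraction, select_initial_reviewers, the 10% threshold, and so on) are illustrative assumptions, not part of the protocol specification.

```python
import heapq
import random
from dataclasses import dataclass

# Assumed fraction of the full-validation expertise needed for an initial review.
INITIAL_REVIEW_FRACTION = 0.10  # illustrative value, not specified by the protocol


@dataclass
class Validator:
    validator_id: str
    category: str
    expertise_credits: float


@dataclass
class EstimatePost:
    post_id: str
    categories: list[str]
    effort_required: float          # expertise credits needed for a full validation
    estimated_impact_score: float   # cannot be changed by initial reviewers
    credibility_score: float = 0.0  # assigned during the initial review


def select_initial_reviewers(post: EstimatePost, pool: list[Validator]) -> list[Validator]:
    """Randomly draw Validators with matching category expertise until their
    combined expertise credits reach a small fraction of the effort required."""
    candidates = [v for v in pool if v.category in post.categories]
    random.shuffle(candidates)
    target = INITIAL_REVIEW_FRACTION * post.effort_required
    selected, total = [], 0.0
    for v in candidates:
        if total >= target:
            break
        selected.append(v)
        total += v.expertise_credits
    return selected


def enqueue(waiting_list: list, post: EstimatePost) -> None:
    """Higher Credibility Score means higher priority; heapq is a min-heap,
    so the score is negated and the post ID breaks ties."""
    heapq.heappush(waiting_list, (-post.credibility_score, post.post_id, post))
```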

Category Waiting Lists

There is no single Waiting List for all Estimate posts. Instead, each category has its own Waiting List, where Estimates are prioritized based on their Credibility Score. Since the labor of Validators is a limited resource, the expected compensation (per Expertise Credit) changes with the capacity of the category's Waiting List: more total expertise available lowers the expected compensation, while more Estimate posts waiting to be validated increases it. Likewise, the compensation per Expertise Credit varies across categories, since different categories may carry a different overall Impact Score in the ecosystem; for example, a lot of expertise in Celebrity Gossip may be associated with less overall Impact (and therefore less compensation per Expertise Credit) than some expertise in Biochemistry. The expected compensation is recalculated every time a post is added and does not affect previously added posts.
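The document does not give an explicit formula, but the supply-and-demand relationship above could be sketched roughly as follows: the expected compensation per Expertise Credit in a category rises with the fees of posts waiting to be validated and falls with the total expertise registered in that category, scaled by a per-category impact weight. The class and method names, the ratio-based formula, and the impact_weight parameter are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class CategoryWaitingList:
    """Illustrative state for one category's Waiting List."""
    name: str
    impact_weight: float            # assumed per-category weight for overall Impact
    total_expertise_credits: float  # supply: expertise registered in this category
    pending_validator_fees: float   # demand: fees of posts waiting for validation

    def expected_compensation_per_credit(self) -> float:
        """Assumed ratio-style estimate: demand over supply, scaled by impact weight.
        More registered expertise lowers the value; more pending posts raise it."""
        if self.total_expertise_credits == 0:
            return 0.0
        return self.impact_weight * (
            self.pending_validator_fees / self.total_expertise_credits
        )

    def add_post(self, validator_fee: float) -> float:
        """Adding a post adjusts the rate going forward; previously added posts
        keep the rate that applied when they were added."""
        self.pending_validator_fees += validator_fee
        return self.expected_compensation_per_credit()
```

Under this sketch, a category with many registered Validators and few pending posts would show a low expected rate, while a backlogged category with scarce expertise would show a high one, which is the signal Validators would use to decide where to specialize.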

This mechanism helps Validators track which categories offer a greater return, so they can choose to specialize in those categories. It also allows validation capacity to be maintained effectively across the ecosystem, and it gives public goods contributors (and Estimators) the tools to predict what is likely to have greater impact in the economy. The mechanism also lets participants in the ecosystem focus on building tools that increase capacity where it is needed most. For example, they can develop automation tools that reliably substitute for Validator labor; build tools that make validations faster, allowing existing Validators to perform more validations in a given time period; or create tools and resources that make it easier for more people to become proficient in a category, increasing the number of Validators in that category.

The mechanism also avoids the pitfalls of an open market for validation labor: such a market could allow bad actors to significantly underbid other Validators (potentially even offering free validation) and thereby increase their chances of validating posts.
