Building Capacity

Accurate estimation of the economic impact of public goods is difficult enough, but in a decentralized system estimates must also be produced transparently, and the data they rely on must be verifiable by anyone. Though this is perhaps the most serious challenge facing the protocol, it was taken into account in the protocol's design and should not compromise its success.

Essentially, both accurate estimation and verifiable data are capacity issues for the protocol, and both capacities are expected to grow over time due to the protocol's economic dynamics. Let us examine these dynamics to understand why. It is in the interest of everyone in the ecosystem for impact estimates to be as accurate as possible, and for data to be verifiable, because the value of the currency depends on credible estimates of the economic value of public goods.

What this means in practice is that, initially, validators would want to review public goods projects that have sufficient verifiable data and whose economic impact is relatively easy to estimate. At first these are likely to be mostly digital public goods projects, since they tend to have far more data available. Validators would also prefer projects with greater economic impact, both because rewarding high-impact work incentivizes innovation and attracts talent to the ecosystem, and because a validator's estimate of a high-impact project is likely to carry greater economic value as well. Yet validators would not want to review projects whose estimates are likely to be inaccurate or highly variable, even high-impact ones, since they are likely to lose money in the process if they get the validation wrong.
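To make this trade-off concrete, here is a minimal sketch of a validator's expected payoff. The reward share, penalty rate, and all the figures below are illustrative assumptions made for the example; the protocol itself does not prescribe them.

```python
# Hypothetical model of a validator's expected payoff per project.
# The reward share, penalty rate, and error model are illustrative
# assumptions only; they are not specified by the protocol.

def expected_payoff(expected_impact: float,
                    estimate_stddev: float,
                    reward_share: float = 0.01,
                    penalty_rate: float = 0.05) -> float:
    """Expected payoff of validating one project.

    The reward scales with the project's expected economic impact,
    while the expected penalty grows with the uncertainty of the
    estimate, capturing the risk of an inaccurate validation.
    """
    reward = reward_share * expected_impact
    expected_penalty = penalty_rate * estimate_stddev
    return reward - expected_penalty

# A high-impact project whose impact is well understood is attractive...
print(expected_payoff(expected_impact=1_000_000, estimate_stddev=50_000))   # 7500.0
# ...while the same project with a highly uncertain impact is not.
print(expected_payoff(expected_impact=1_000_000, estimate_stddev=400_000))  # -10000.0
```

Under this simple linear model, the break-even point depends only on the ratio of estimate uncertainty to expected impact, which is one way of seeing why validators would gravitate toward projects that are both high-impact and easy to estimate.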

This doesn’t necessarily mean, however, that a project that is relatively difficult to review will receive no compensation in the ecosystem. What is more likely to happen is that, given a range of values for the project’s expected economic impact, validators would simply choose a value at the low end of the range, where they can be more confident of the project’s minimal impact, and postpone a full review until the ecosystem has the tools to assess the project’s full impact. This compromise still gives contributors some compensation for their project while preserving the value of the currency.
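As an illustration of this conservative strategy, the sketch below picks a value near the low end of a set of candidate impact estimates. The percentile choice and the sample figures are assumptions made for the example; the protocol does not prescribe either.

```python
import math

def conservative_estimate(candidate_impacts: list[float],
                          percentile: float = 10.0) -> float:
    """Lower-bound impact estimate via the nearest-rank percentile.

    Committing to a low percentile means the validator vouches only
    for impact the project has very likely achieved; the remainder
    can be credited later, once better estimation tools exist.
    """
    ordered = sorted(candidate_impacts)
    rank = max(1, math.ceil(len(ordered) * percentile / 100))
    return ordered[rank - 1]

# Widely dispersed candidate estimates for a hard-to-review project
# (illustrative figures):
estimates = [120_000, 300_000, 450_000, 500_000, 650_000, 900_000]
print(conservative_estimate(estimates))  # 120000, the cautious low end
```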

While initially validators may prioritize projects that are both high-impact and relatively easy to review, this is unlikely to remain the case for long. The reason is that such an ecosystem creates a very strong economic incentive to develop powerful tools that simplify and improve the estimation process, as well as an incentive for all participants to provide the data needed to make more credible estimates. The more projects are validated in the protocol, the more data people will have for further refining the impact-estimation technology (which will necessarily be open source and available for all to use). Over time, participants will also develop high-quality decentralized oracles that can provide credible real-world data to the protocol, allowing it to work effectively and credibly beyond the digital realm.

This means that as people develop ever more sophisticated tools to model the impact of public goods, and as the amount and quality of data grow, the quality of estimates is likely to improve, and the sphere of projects deemed “easy to review” will expand progressively. Meanwhile, the validation time, labor, and resources needed per project are likely to decrease, increasing the efficiency and capacity of the protocol over time.