Bad actors
While the incentives of participants in the ecosystem (contributors, users, estimators, validators and investors) generally align, this is not necessarily the case for a public goods contributor whose project is under review. In that situation, while the other participants benefit from an accurate impact estimate, the contributor benefits from an overestimate of her project. Note that this does not hold for contributors as a group; contributors don't benefit from impact estimates being generally inflated, since that hurts the value of the currency. A contributor only benefits if project estimates are generally accurate while her particular project is overestimated.
What, then, are some strategies a bad-actor contributor may employ to get an overestimate? Since we're dealing with a permissionless system, the contributor may create countless accounts and post multiple projects for review; even if only 1 in 100 gets some funding, the contributor would turn a profit. The contributor may provide a fraudulent impact estimate, or collude with validators to produce fraudulent estimates. She may also try to create multiple fake accounts to validate her own project. Or she may try to produce fake data (through bots, for example) to give the impression that her project has more impact than it really does. Finally, if she succeeds in getting overcompensated for her project, she may withdraw her funds quickly so that, even if exposed later, she'd get away with the money.
Now let’s consider how the protocol systematically addresses these fraudulent strategies:
Funding validations:
To maintain a permissionless system while avoiding Sybil attacks, contributors (or estimators) will be required to provide funds for the validation process. These funds should be in proportion to the expected impact of the project (since higher-impact projects need to be scrutinized more carefully), and would go toward validating the project. If a contributor overestimates the expected impact of the project, she simply loses the extra funds. If she significantly overestimates the impact of the project (or if the project has no value), she stands to lose more than she stands to gain. It is therefore in her economic interest to estimate the expected impact of the project accurately. There is also no benefit to creating multiple accounts in such a system, since the contributor's expenditure will always be in proportion to the expected impact. The protocol will also have a mechanism for contributors to get funding for submitting projects, so that no one is prevented from submitting legitimate projects due to financial hardship.
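To make the economics concrete, here is a minimal sketch of this fee structure; the fee rate and all figures are illustrative assumptions, not protocol constants:

```python
# Minimal sketch of the fee economics described above. The fee rate and
# all figures are illustrative assumptions, not protocol constants.

VALIDATION_FEE_RATE = 0.05  # assumed: fraction of claimed impact staked for validation

def required_validation_funds(claimed_impact: float) -> float:
    """Funds the contributor must provide, proportional to the claimed impact."""
    return claimed_impact * VALIDATION_FEE_RATE

def contributor_payoff(claimed_impact: float, true_impact: float) -> float:
    """Net payoff assuming validators converge on the true impact:
    rewards track true impact, while fees track the claim."""
    return true_impact - required_validation_funds(claimed_impact)

print(contributor_payoff(claimed_impact=100, true_impact=100))    # 95.0 (honest claim)
print(contributor_payoff(claimed_impact=1000, true_impact=100))   # 50.0 (10x overestimate)
print(contributor_payoff(claimed_impact=1000, true_impact=0))     # -50.0 (worthless project)
```

Under these assumptions, inflating the claim only raises the contributor's stake, while a worthless project yields a strictly negative payoff.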
Random selection:
To prevent bribery of or collusion with validators, the protocol will select validators at random, so a contributor has no control over who validates her project.
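A minimal sketch of such a selection, assuming validators are drawn from a pool using a shared public seed; a real deployment would use an on-chain randomness beacon or VRF rather than Python's `random`:

```python
# Sketch of seed-based validator selection: deterministic given a public
# seed, but outside any contributor's control.
import hashlib
import random

def select_validators(pool: list[str], project_id: str, seed: bytes, k: int) -> list[str]:
    """Derive the validator set for a project from a shared random seed."""
    digest = hashlib.sha256(seed + project_id.encode()).digest()
    rng = random.Random(digest)  # any observer can reproduce the selection
    return rng.sample(pool, k)

pool = [f"validator_{i}" for i in range(50)]
print(select_validators(pool, "project-42", b"beacon-round-1187", k=5))
```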
Merit-based validations:
Now what about creating multiple accounts to increase the chance of ending up on the list of validators? While it's true that in a permissionless system any user can create as many accounts as she wants, creating multiple accounts in the Abundance Protocol would bring no benefit to a user, since validators are selected based on merit: their domain-specific Impact Score (or "expertise") in a relevant category (for assessing a project's credibility and impact), and their general Impact Score (for assessing overall impact). Validations are then weighted by the Impact Score of each validator. These Impact Scores are non-transferable between users, which means that the only way to acquire them is by earning them: either by creating public goods or by reviewing public goods projects as a validator. Having one account or three accounts would therefore make no difference, since the amount of effort needed to generate an Impact Score remains the same. Similarly, since validators are selected based on their Impact Score, three accounts with an Impact Score of 100 each would have the same combined chance of being selected to validate as one account with an Impact Score of 300.
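This equivalence can be checked with a small simulation, assuming selection probability is simply proportional to Impact Score (names and numbers are illustrative):

```python
# Simulation of Impact-Score-weighted selection, showing that splitting a
# score of 300 across three accounts gives no extra advantage.
import random

def pick_validator(scores: dict[str, float], rng: random.Random) -> str:
    """Select one validator with probability proportional to Impact Score."""
    names = list(scores)
    return rng.choices(names, weights=[scores[n] for n in names], k=1)[0]

rng = random.Random(0)
single = {"alice": 300, "rest_of_pool": 700}
split = {"alice_1": 100, "alice_2": 100, "alice_3": 100, "rest_of_pool": 700}

trials = 100_000
p_single = sum(pick_validator(single, rng) == "alice" for _ in range(trials)) / trials
p_split = sum(pick_validator(split, rng).startswith("alice") for _ in range(trials)) / trials
print(p_single, p_split)  # both come out near 0.30: one account of 300 = three of 100
```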
Though the purpose of this approach is to prevent Sybil attacks and low-quality validations, it creates a problem for honest validators who don't already have an expertise score in the category or who are new to the protocol. These individuals can still earn an expertise score by contributing to a public goods project, creating an estimate post, and so on; however, these processes take a long time and create unnecessary hardship for new validators. For that reason, we are proposing two additional paths to participate in a category's validation. The first path is for validators who have expertise scores in unrelated categories; these can be selected at random into the first validation tier, from the pool of those who opted in, with a probability that corresponds to their overall expertise score. The second path is for new users of the protocol who don't yet have any expertise score; these can be selected at random. Validators from both paths will have a small number of slots available in each validation, to avoid burdening validators with having to review low-quality validations.
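A sketch of how a validation panel might be composed under these three paths; the slot counts (5 in-category, 2 cross-domain, 1 newcomer) are assumptions for illustration:

```python
# Sketch of panel composition under the three paths above.
import random

def weighted_draw(pool: dict[str, float], k: int, rng: random.Random) -> list[str]:
    """Draw k distinct validators, each round with probability proportional to score."""
    pool, picks = dict(pool), []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        pick = rng.choices(names, weights=[pool[n] for n in names], k=1)[0]
        picks.append(pick)
        del pool[pick]  # no validator fills two slots
    return picks

def compose_panel(in_category: dict[str, float],
                  cross_domain: dict[str, float],
                  newcomers: list[str],
                  rng: random.Random) -> list[str]:
    panel = weighted_draw(in_category, 5, rng)    # main tier: domain expertise score
    panel += weighted_draw(cross_domain, 2, rng)  # path 1: overall score, opted-in pool
    panel += rng.sample(newcomers, 1)             # path 2: uniform draw of new users
    return panel

rng = random.Random(7)
experts = {f"expert_{i}": 50 + i for i in range(20)}
outsiders = {f"outsider_{i}": 30 + i for i in range(10)}
print(compose_panel(experts, outsiders, ["newbie_a", "newbie_b"], rng))
```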
Time-locked funds:
The protocol allows a window of time after the validation process concludes during which anyone can challenge the validations. A challenger must provide funding for a new randomized set of validators, as well as present sources supporting the challenge (merely disliking the results is not enough). During the challenge period, funds are locked in the contract to prevent contributors from "cashing out" before fraud can be exposed.
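A simplified sketch of the time-lock and challenge flow; the state fields and the seven-day window are assumptions, not protocol parameters:

```python
# Simplified sketch of the time-lock and challenge flow described above.
from dataclasses import dataclass

CHALLENGE_PERIOD = 7 * 24 * 3600  # assumed challenge window, in seconds

@dataclass
class ValidatedProject:
    validated_at: int   # timestamp when validation concluded
    funds: float        # reward locked in the contract
    challenged: bool = False

    def challenge(self, now: int, stake: float, sources: list[str]) -> None:
        """A challenge must arrive within the window, fund a new randomized
        set of validators, and cite sources (disliking the result is not enough)."""
        assert now < self.validated_at + CHALLENGE_PERIOD, "challenge window closed"
        assert stake > 0 and sources, "must fund re-validation and present sources"
        self.challenged = True  # funds stay locked pending re-validation

    def withdraw(self, now: int) -> float:
        """Funds unlock only once an unchallenged window has elapsed."""
        assert not self.challenged, "re-validation pending"
        assert now >= self.validated_at + CHALLENGE_PERIOD, "funds still time-locked"
        amount, self.funds = self.funds, 0.0
        return amount
```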
Periodic reviews:
Following the initial estimation of a project's impact, the protocol conducts periodic reviews of the project's actual impact over the relevant period. This provides an additional layer of integrity to the protocol and allows validators to further refine their estimates based on new data.
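As a toy illustration, assuming a simple blending rule (the actual refinement method is left to validators' judgment):

```python
# Toy illustration of refining an estimate with periodic review data.
# The blending rule and weight are assumptions chosen for simplicity.
def refine_estimate(current: float, observed: float, weight: float = 0.5) -> float:
    """Blend the standing estimate with newly observed impact."""
    return (1 - weight) * current + weight * observed

estimate = 1000.0                        # initial validated estimate
for observed in [700.0, 650.0, 690.0]:   # periodic review data (illustrative)
    estimate = refine_estimate(estimate, observed)
    print(round(estimate, 1))            # 850.0, 750.0, 720.0: converging on actuals
```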
Ecosystem dynamics:
So far we've discussed protocol-based defenses against bad actors; equally important, however, are the ecosystem dynamics that stem from the incentives the protocol creates. Since all participants in the ecosystem have an incentive to maintain the integrity of estimates, and people are compensated for creating public goods, participants have an incentive to build AI, ML, and other powerful tools to detect fraudulent activity throughout the ecosystem. This means that even if bad actors develop their own tools to attack the ecosystem, there will always be a greater economic incentive to develop public countermeasures. Another ecosystem dynamic has to do with collusion among bad actors: the ecosystem creates a strong disincentive to collude because there is a clear incentive to expose fraud (this too is a public good, after all). Since anyone can expose fraud, and be rewarded by the ecosystem for doing so, anyone who participates in a conspiracy can defect and expose all the other bad actors. This makes collusion very risky and impractical.