Expertise Categories

How can the average person know whether, for example, research in biochemistry is impactful? It is unrealistic to expect the average person to have insight into the intricacies of a subject they are not familiar with. How, then, does the protocol maintain the integrity of the validation process? To solve this problem, we introduce the concept of expertise categories: if we want projects to be reviewed rigorously, they must be reviewed by people with expertise in the knowledge domain(s) of the project, and the reviews of individuals with greater expertise in a given category should carry more weight, corresponding to their level of expertise.

So far everything seems reasonable, but assigning individuals a domain-specific expertise score that actually corresponds to something meaningful in the real world is not so straightforward in a decentralized system, and it raises several challenges. How are these expertise scores determined or assigned? Who determines them? What ensures that the scores are meaningful in reality? And how can a category expertise score be determined ex nihilo: if a new category is created in the ecosystem, how can anyone assign expertise scores to individuals in that category when no one has any expertise in it to begin with?

Non-transferable tokens:

In our approach, category expertise scores take the form of non-transferable tokens (NTTs/SBTs) that are directly tied to the impact a user makes in the category. Users can attain category expertise scores by contributing to public goods projects, by creating Estimates, or by validating Estimates.
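To make this concrete, the sketch below models a category expertise score as a record bound to a single holder. The class and field names are illustrative rather than part of the protocol; the only property taken from the text is that the score cannot be transferred.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CategoryExpertise:
    """Illustrative non-transferable (soulbound) category expertise record."""
    holder: str      # the user the score is bound to
    category: str    # e.g. "Biochemistry"
    score: float     # expertise earned through validated impact in this category

    def transfer(self, new_holder: str) -> None:
        # Non-transferable by design: any attempt to move the score is rejected.
        raise PermissionError("category expertise is soulbound and cannot be transferred")
```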

For users who contribute to public goods projects, the expertise score is determined through the project’s impact estimation, validation, and challenge processes. For Estimators, the expertise score is determined by the validation and challenge processes. For Validators, the score is determined through a three-tier validation process followed by a challenge process. In the first tier, a group of Validators with expertise in project-related categories reviews the project; each Validator assigns the project a Credibility Score and a relative impact score (per category) and provides supporting sources and justifications to back their review. The second tier of Validators, also with expertise in project-related categories, validates the reviews of the first tier and determines the weight of first-tier Validators through Quadratic Voting.
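A minimal sketch of how the second tier’s Quadratic Voting could set first-tier weights is shown below, assuming each second-tier Validator spends voice credits on the first-tier reviews they endorse; the credit values, normalization, and function name are illustrative assumptions.

```python
from math import sqrt


def tier1_weights(qv_credits: dict) -> dict:
    """Turn second-tier voice-credit allocations into normalized first-tier weights."""
    raw = {}
    for allocation in qv_credits.values():
        for reviewer, credits in allocation.items():
            # Under Quadratic Voting, n credits buy sqrt(n) votes, so large
            # single-voter stakes have diminishing influence.
            raw[reviewer] = raw.get(reviewer, 0.0) + sqrt(credits)
    total = sum(raw.values())
    return {reviewer: votes / total for reviewer, votes in raw.items()} if total else raw


# Example: two second-tier Validators distribute credits across three first-tier reviewers.
weights = tier1_weights({
    "tier2_A": {"reviewer_1": 16, "reviewer_2": 4},
    "tier2_B": {"reviewer_1": 9, "reviewer_3": 25},
})
print(weights)  # reviewer_1 collects sqrt(16) + sqrt(9) = 7 of the 14 votes cast
```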

While the first and second tiers assign the project a Credibility Score and relative impact scores within categories, the third tier of Validators, drawn from expertise categories across the ecosystem, modulates the overall impact score of the project based on the input it receives from the other two tiers.
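One plausible way to combine the tiers is sketched below: first-tier per-category scores are averaged using the second-tier weights, and the third tier’s input is applied as a single modulation factor on the result. The exact combination rule and the numbers are assumptions for illustration.

```python
def aggregate_impact(reviews: dict, weights: dict, modulation: float = 1.0) -> dict:
    """Weight first-tier per-category scores and apply the third tier's modulation."""
    combined = {}
    for validator, per_category in reviews.items():
        w = weights.get(validator, 0.0)
        for category, score in per_category.items():
            combined[category] = combined.get(category, 0.0) + w * score
    # The third tier scales the weighted category scores into the project's overall impact.
    return {category: modulation * score for category, score in combined.items()}


print(aggregate_impact(
    reviews={"reviewer_1": {"Biochemistry": 0.7, "Public Health": 0.3},
             "reviewer_2": {"Biochemistry": 0.5, "Public Health": 0.5}},
    weights={"reviewer_1": 0.6, "reviewer_2": 0.4},
    modulation=0.9,
))
```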

Of the three tiers, the first receives the greatest portion of the category-specific expertise score allocated to Validators in the Estimate post, owing to the difficulty of their task and the monetary and reputational risk involved. The second and third tiers receive smaller portions of the allocated expertise score, with each Validator paid in proportion to their existing expertise score (for the third tier, the allocation is proportional to the Validator’s overall expertise score, and at a smaller rate than for second-tier Validators).
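The split below is a sketch of this allocation. The tier shares, the equal-split fallback, and the use of the second-tier Quadratic Voting weights to divide the first-tier portion are illustrative assumptions rather than protocol parameters.

```python
def split_proportionally(pool: float, existing_expertise: dict) -> dict:
    """Divide a pool among Validators in proportion to their existing expertise scores."""
    total = sum(existing_expertise.values())
    if total == 0:
        return {v: pool / len(existing_expertise) for v in existing_expertise}  # fallback (assumption)
    return {v: pool * score / total for v, score in existing_expertise.items()}


def allocate_expertise(pool, tier1_weights, tier2_expertise, tier3_expertise,
                       shares=(0.6, 0.25, 0.15)):
    """Split the Validator expertise allocation of an Estimate post across the three tiers."""
    s1, s2, s3 = shares  # tier 1 gets the largest share, tier 3 the smallest (assumed values)
    payout = {v: pool * s1 * w for v, w in tier1_weights.items()}
    for tier_payout in (split_proportionally(pool * s2, tier2_expertise),
                        split_proportionally(pool * s3, tier3_expertise)):
        for v, amount in tier_payout.items():
            payout[v] = payout.get(v, 0.0) + amount
    return payout


print(allocate_expertise(
    pool=100.0,
    tier1_weights={"reviewer_1": 0.7, "reviewer_2": 0.3},
    tier2_expertise={"tier2_A": 50.0, "tier2_B": 150.0},
    tier3_expertise={"tier3_X": 400.0, "tier3_Y": 100.0},
))
```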

The challenge period is designed to ensure the integrity of the validation. Anyone can challenge a Validator (though the challenger is required to provide supporting sources to justify the challenge), and, if the challenge is successful, the Validator will lose both funds and expertise related to the validation.
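As a rough sketch of the economics, a successful challenge removes both the Validator’s stake and the expertise earned from that validation; the full forfeiture shown here (rather than a partial slash) is an assumption.

```python
def resolve_challenge(stake: float, earned_expertise: float, challenge_upheld: bool) -> dict:
    """Return what the Validator keeps after a challenge is resolved."""
    if challenge_upheld:
        # The Validator loses the funds and the expertise tied to this validation.
        return {"stake_returned": 0.0, "expertise_kept": 0.0}
    return {"stake_returned": stake, "expertise_kept": earned_expertise}
```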

Thus, users in the ecosystem have a decentralized mechanism through which expertise scores are assigned to contributors, Estimators, and Validators. These expertise scores are category-specific and are determined by the impact each user makes. Multiple layers of validation and challenges help ensure the integrity of the process.

In addition to a user’s expertise in the category itself, the protocol also calculates a user’s expertise in a category based on the expertise the user has in related categories. The “relatedness” coefficient between two categories is calculated from how frequently projects share both categories, adjusted for each project’s overall impact and its relative impact in each of the two categories. This means that more impactful projects have a greater effect on the coefficients, as do projects in which both categories account for a significant share of the project’s relative impact.

If a user has only contributed to or validated Physics projects, and therefore has expertise only in that category, the protocol would still show that the user has some expertise in Math (User’s Math expertise score = User’s Physics expertise score * Math<>Physics relatedness coefficient).
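The sketch below illustrates both mechanisms under one simple, assumed weighting scheme: each project contributes to a category pair in proportion to its overall impact and the smaller of its two relative impacts, coefficients are normalized against the lighter category, and derived expertise is the product from the example above. None of these formulas are specified by the protocol; they only show the shape of the calculation.

```python
from itertools import combinations


def relatedness(projects: list) -> dict:
    """Compute co-occurrence-based relatedness coefficients between category pairs."""
    pair_weight, category_weight = {}, {}
    for overall_impact, relative in projects:  # (overall impact, {category: relative impact})
        for category, share in relative.items():
            category_weight[category] = category_weight.get(category, 0.0) + overall_impact * share
        for a, b in combinations(sorted(relative), 2):
            # A project counts more when it is impactful and when both categories
            # carry a significant share of its relative impact.
            pair_weight[(a, b)] = pair_weight.get((a, b), 0.0) + overall_impact * min(relative[a], relative[b])
    # Normalize each pair against the lighter of its two categories (assumption).
    return {pair: w / min(category_weight[pair[0]], category_weight[pair[1]])
            for pair, w in pair_weight.items()}


projects = [
    (100.0, {"Physics": 0.6, "Math": 0.4}),
    (40.0, {"Physics": 0.9, "Math": 0.1}),
    (10.0, {"Math": 1.0}),
]
coefficients = relatedness(projects)
physics_expertise = 80.0
derived_math_expertise = physics_expertise * coefficients[("Math", "Physics")]
print(coefficients, derived_math_expertise)
```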

“Grafting” new categories:

For newly created categories (generally created by Estimators while making an estimate for a project), no user has expertise in the category from the start, and there is no data the protocol can use to calculate a relatedness coefficient. Therefore, for Validators to be able to review a post in a newly created category, the Estimator must “graft” the new category onto existing categories. When creating the new category, the Estimator specifies the related categories and estimates the expected relatedness coefficient for each of them. Then, during the initial review of the Estimate post, Validators (randomly chosen from the related categories and other specified existing categories) can adjust the relatedness coefficients or propose other related categories, which prevents the Estimator from manipulating the system. After several posts that specify the new category, the protocol has sufficient data to automatically generate relatedness coefficients for the new category, and the category has users with expertise in it, successfully grafting the new category onto the protocol.
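A minimal sketch of this grafting lifecycle is below: the Estimator seeds coefficients, reviewing Validators adjust them, and once enough posts use the category the protocol switches to coefficients derived from project data. The threshold, the use of a median over Validator adjustments, and all names are illustrative assumptions.

```python
from statistics import median


class CategoryGraft:
    """Track a new category from Estimator seed values to data-derived coefficients."""

    MIN_POSTS_FOR_AUTO = 5  # assumed threshold for switching to protocol-computed values

    def __init__(self, name: str, seed_coefficients: dict):
        self.name = name
        self.seed = dict(seed_coefficients)  # Estimator's proposed relatedness coefficients
        self.adjustments = {}                # Validator adjustments, per related category
        self.posts_observed = 0

    def record_adjustment(self, related_category: str, coefficient: float) -> None:
        self.adjustments.setdefault(related_category, []).append(coefficient)

    def record_post(self) -> None:
        self.posts_observed += 1

    def coefficient(self, related_category: str, data_derived: dict) -> float:
        if self.posts_observed >= self.MIN_POSTS_FOR_AUTO and related_category in data_derived:
            return data_derived[related_category]    # enough data: use the protocol-computed value
        votes = self.adjustments.get(related_category)
        if votes:
            return median(votes)                     # otherwise use the Validator-adjusted value
        return self.seed.get(related_category, 0.0)  # fall back to the Estimator's seed
```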
