Report Dispute Resolution Process (RDR)

TL;DR: There are advantages to having a process in place in case our usual RAP process fails. If, somehow, a report gets approved but contains fatal mistakes, or a rating is inaccurate, it will be useful to give the affected protocols or any user the opportunity to step in and raise their voice.


The following proposal outlines a possible implementation of a Report Dispute Resolution (RDR) process for Prime Rating. This process should only be used in extreme cases, meaning that we would need to define clear situations in which a dispute request is allowed. The process would need to be started in a timely manner after we publish/approve a report (for example, within 10 days of report approval). The RDR process should not be used by protocols to claim a better score, but rather provide a formal way to fix errors in case we miss crucial points.

The reason for implementing an RDR process is that, even though Governors are entirely responsible for their governance decisions, it will be beneficial to have another layer of protection if/when governance fails. Hopefully this will never be the case, but it's good to have a plan B if it is. It also gives protocols the responsibility to verify ratings and take them seriously.

Also, with the upcoming API product that we are going to launch in the next 3-5 months, more users will rely on this information in their DYOR, and it becomes even more important that crucial mistakes can be handled in a formal, efficient, and fast way.

Proposed approach

The idea is to create a decentralized framework where users/protocols have the possibility to file an official dispute against already published fundamental reports. The RDR is initiated by posting a proposal in our Governance forum outlining the details of the dispute (a template could be used for that).

This process will be permissionless and can be started by anyone. Disputes will be open for discussion for 1 week, with a possible extension to 2 weeks if necessary. This gives the community and Prime members enough time to verify the contentious points. After the discussion phase, the RDR will be put up for a Snapshot vote with a voting timeframe of 3 days. The Snapshot allows Governors & Reviewers to vote for one of the following options:

  • RDR claim correct, report invalid (new report gets written)
  • RDR claim correct, but report still valid (report gets updated)
  • RDR claim false, report still valid

Proposed RDR Request - Qualification

A protocol can file a dispute if it finds incorrect content that severely impacts the result. An existing report can be challenged if one or more of the following points apply:

  1. The report contains false claims
  2. The report misses highly important information
  3. Report data is severely outdated (12+ months)

In addition, the impact of the above must be significant, meaning that:

  1. The score is off by >10% (25 points for a full FA report)
  2. The content damages the image of the protocol (e.g. the report declares the protocol a scam when that is not true)

When these scenarios apply, the affected protocol can initiate the RDR process.
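As a rough sketch, the score-impact threshold from point 1 could be expressed as a simple check. Note the assumption here: a full FA report is taken to total 250 points, inferred from the proposal's statement that 25 points corresponds to a 10% deviation; the function and constant names are illustrative, not part of any existing tooling.

```python
# Hypothetical sketch of the RDR score-impact check.
# ASSUMPTION: a full fundamental analysis (FA) report totals 250 points,
# inferred from "25 points" corresponding to the 10% threshold.
FULL_FA_MAX_SCORE = 250
RDR_THRESHOLD = 0.10  # score must be off by more than 10% to qualify

def qualifies_for_rdr(published_score: float, corrected_score: float,
                      max_score: float = FULL_FA_MAX_SCORE) -> bool:
    """Return True if the score deviation exceeds the RDR threshold."""
    deviation = abs(published_score - corrected_score) / max_score
    return deviation > RDR_THRESHOLD

# A 30-point discrepancy (12% of 250) would qualify:
print(qualifies_for_rdr(180, 210))  # True
# A 20-point discrepancy (8% of 250) would not:
print(qualifies_for_rdr(180, 200))  # False
```

Under this framing, disputes below the threshold would still be worth collecting (e.g. for the next report update), but would not trigger a full RDR vote.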

Next steps

This is an initial draft of a possible Report Dispute Resolution (RDR) process and aims to collect feedback from the Prime Rating community. I am looking forward to having more feedback in order to start implementing this process.


I support the proposal, extra checks are fine, but this:

we just need to determine what counts as important information and which sources of that information are reliable


I am in favour of this RDR process. My only reservation is the off-score threshold: 25 points should be reduced to at least 20.


Great idea Salome! Giving protocols the ability to engage with the ratings process adds legitimacy and should improve the overall quality of the Prime Rating product.

I don't see this as an option of last resort; in fact, I think it could be very valuable if it were used more often than just in extreme cases. To that end, I would remove the qualifications for outdated information and score-impact significance. My rationale is below:

Outdated Information: Allowing RDR for outdated information only might be overly sensitive and result in a higher number of low impact RDRs. If the information was still accurate as of the date of writing, I feel outdatedness on its own is not a “disputable” issue. That said, addressing a process for frequency of updates to already-rated projects is an important topic that should probably be addressed independently of this process, IMO. Perhaps a RUP (Report Update Proposal) :wink: could be a separate process where a protocol could request its rating to be updated and that would prioritize the project over others in the next rating season or rate-athon (an element of this is already somewhat addressed in the forthcoming Research on Demand (RoD) product where a potential client can place an order for an experienced rater to rate a project of their choosing).

Score Impact Significance: Establishing a threshold for significance in the RDR process is important, but I think defining it on the rating score itself is too subjective. If content is factually inaccurate, a report resubmission or update would fix the issue, but the impact to the rating score would not be known beforehand and could differ depending on the rater that completes the updates (does correcting a factual inaccuracy increase the score 1 point, 5 points, or 10 points?)

I think revising the qualifications to the following would simplify the process a bit and still keep RDRs objective and focused on quality.

  1. The report contains false claims
  2. The report misses highly important information
  3. The content is damaging the image of the protocol (e.g. report declares the protocol as a scam when it’s not true)

Just my two cents. Thanks for putting this up for discussion!


In favour! Makes a lot of sense :slight_smile: I share the sentiment of @dabar90 and @squidbit.

@squidbit regarding the RUP (Report Update Proposal) - I like the idea, but I'm not sure what the best way of implementing it is, tbh. I'm concerned that we'd get a lot of RUPs and the Snapshot votes would get out of hand.

Totally get it, thanks xm3van. That was one of the reasons why I thought including it as part of the RDR process might not be the best approach either. Reports being outdated is not ideal, but my point was we should have a separate process (on or off chain) to deal with it.


Maybe it would be useful to create a space where rated protocols can post feedback or reviews, because that way we can improve our reports over time. Fixing critical errors is fine, but in that case we invest our time in exchange for nothing, and I think every mistake/misunderstanding has some value, especially in this field.
Over time, reports will become more complex and we will make mistakes; I expect that.


Since I wrote the post, we have "almost" received two pieces of feedback; both were blocked by a bot. The first from Mhairi (UMA) and the second from Christachio (Armor.Fi). Certain processes do not need to be fully automated.

  1. Agree with this one; for false claims we just need proof
  2. Maybe we need to define what counts as highly important info
  3. This way, every protocol with a low score will complain; on the other hand, if the rater supports their statements with proof, I don't see a problem there

I don't want to be boring, but I think this situation is very important, so I will allow myself to post a Google Doc here with feedback from Ease, more precisely from Christachio. Due to the bot problem, I offered Christachio the option to send me the docs in a DM or here on the forum. As I received it in a DM, I will attach it in this thread along with the report link. feedback: Noted Errors - Google Docs report:

I hope to publish the UMA feedback and report here very soon.


@dabar90 thanks for sharing! We could think about a Discord sub-channel for protocol complaints, or create a Typeform and add it to the Gitbook in order to trigger the RDR as proposed by @Salome.

The Typeform could have a layout like this:

  • Quote from report
  • Description of the issue
  • Evidence supporting the issue

Having just written the above and viewed the feedback given by Christachio - maybe something more granular, as suggested by @squidbit, is more appropriate, as Christachio's feedback pertains to specific sections :sweat:


I agree, I like @squidbit's approach to that problem, especially this:

I think we can further leverage the established contact with protocols and their communities for mutual benefit. Just my opinion from a multi-product DAO perspective.


Great discussion! Agree with @squidbit to reduce the qualifications for submitting an RDR.

Drafting a definition for highly important information:
“Information of high significance that is absolutely necessary to accurately answer the question and that, if missing, would lead to a distorted representation of the protocol’s situation and thus its scores.”

Lmk what you think, looking forward to your feedback.

In the case above, this seems to apply at least once: the flagship product is not mentioned. The rest looks like false statements.


Nice, the RUP is actually something I have proposed to the team before, I’m in support!

Imo an RUP should live on its own, independent of the RDR, and enable raters to update their reports.

Rules would look something like this:

  1. RUP allows raters to update their own reports
  2. Report can be updated once per quarter, or after an event of high significance directly impacting the protocol (e.g. an exploit, or the Terra incident should allow the Anchor rater to adjust the score)
  3. Raters are required to update every section in the report
  4. Rewards are 50% of a usual new report (75 USDC, 100 D2D, 5 RXP)

Thanks for putting this together @Salome!
One question about the 3 vote choices: how do we decide whether a report is still valid and can be updated, or invalid and should be deleted? Any thoughts on this?


I agree, totally in favour. My only concern is on a practical level: this sounds like a lot of Snapshot votes next to RAPs, TIPs, and RDRs.


Thank you so much everyone for the feedback! That is really helpful and gives me new ideas.

@dabar90 I see the classification of important information and its sources being impacted by two factors: misinformation (false claims) in submitted reports, and information that impacts the score of the protocol positively or negatively beyond a specific threshold.
If this threshold is not met, the disputes will still be included in the next RAP submission, but it will not trigger an RDR. I think what should be avoided is that we get RDR requests for small errors or arguments about scores which do not really change the overall rating and its quality. I like the definition @Lavi drafted!

@squidbit I see the point that by putting a threshold on the RDR rating score itself, there is a subjective factor involved. Maybe we can define it differently: the errors stated by are a good example. I think protocols/users should just list the disputes and post/submit them, and Governors/the community will verify them. The Governors will be responsible for evaluating the “importance” of these inputs and their accuracy. This most likely has to be done by checking whether the overall score would improve or not. Also, if the report contains false information, an RDR will be initiated.

@Lavi I assume that an invalid report will be a very unlikely case, as this would require the Governor process in place to completely fail. However, I think it’s good to have this option even though it’s a very unlikely scenario. If a report is completely invalid, that would mean that most of the information is wrong and the quality of the report is so far off standard that it cannot be updated anymore, as it contains too much false information and is misleading.

Report Update Process (RUP)

I like the idea of having a report update process, as I personally feel that writing a report from scratch requires more research and effort than updating one with a previous version. It would also motivate raters to stay on top of updates to the protocols/projects they rated. If the community agrees, I am happy to post a new thread about this idea in detail where we can discuss it. I agree with @xm3van that this adds some more operational effort on Snapshot and so on, but in order to have updated reports I think we have to adopt such a process, as timing is super important in this space. The goal I personally would like to see is live data (still a far-fetched dream), but I think that is a goal we want to work towards, possibly in collaboration with other protocols that can help us achieve it. I particularly like an RUP process because raters who regularly update their own reports will become experts on certain protocols, and that will allow us, in the future, to also expand into providing consulting services, if this is a path we want to go down one day.


Sorry Lavi! Didn’t realize this had been proposed previously. Credit goes to you, lol.


Thanks, but no need to credit me :slight_smile: I think the initial request to enable updates came from the community. I just thought of a few ways to structure it.

Hi @degem2priceless, thanks for your support in implementing this process. The current proposal mentions:

Meaning that the off-score threshold is set at >10% (which reflects 25+ score points in an FA report). Let me know if you think it should be increased/decreased.

I think it should be at least 20+ points instead of 25+. That’s all. Thank you.