This thread is a continuation of a conversation regarding performance metrics & scores on Discord.
I am catching up on yesterday's conversation regarding the scores on https://www.validators.app/validators/mainnet?locale=en. Thanks to everyone who shared their opinions.
When I build things, I like to start with a Minimum Viable Product (MVP) and then iterate towards a better product. The original beta scores are my MVP. I agree that we need to measure the right things. I fully expect things to change, for the better, over time. Every journey starts with the first step.
Solana is not a trustless network – there is a significant amount of trust required for an investor to delegate stake to a validator. Transparency and accountability are essential components of that relationship. The metrics and scores will help build both transparency and accountability.
I don’t see the incentive for anyone to game the system here. Lying about software versions or intentionally submitting only ticks will destroy trust with investors. Tell lies to your investors, and you will lose those investors. It matters that validators have some “skin in the game.”
I’ve been watching ‘Skipped after %’ for a while now, and it seems that specific nodes are consistently having trouble. I believe that the topic requires further research to find the cause. In the end, we may find that it is not the validator’s fault, but I plan to keep a spotlight on the metric until we figure it out. Perhaps I will remove that as a scoring factor, but leave it on the page as a supplemental metric (like ping times).
We have all been using ‘Skipped Slot %’ as an essential measurement, so I’ll keep scoring it. However, I have noticed that the scores jump around when they reset after each new epoch. The scores finally settle down as the epoch progresses. I think that a trailing 2-day average will work better here.
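As a rough sketch of what I mean by a trailing 2-day average: instead of resetting at each epoch boundary, we'd keep recent observations and compute the skip rate over a sliding 48-hour window. The data shape here (timestamp, leader slots, skipped slots per observation) is just an assumption for illustration.

```python
from datetime import datetime, timedelta

def trailing_skip_rate(samples, now, window_hours=48):
    """Skip rate (%) over a trailing window.

    `samples` is a list of (timestamp, leader_slots, skipped_slots)
    tuples -- a hypothetical shape for per-observation skip data.
    """
    cutoff = now - timedelta(hours=window_hours)
    recent = [(slots, skipped) for ts, slots, skipped in samples if ts >= cutoff]
    total = sum(slots for slots, _ in recent)
    skipped = sum(skipped for _, skipped in recent)
    # Avoid division by zero for validators with no recent leader slots.
    return 100.0 * skipped / total if total else 0.0
```

Because the window slides rather than resets, a new epoch no longer wipes out the history that keeps the number stable.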
Several people here have mentioned the software version as a measure of a validator’s attentiveness to the Solana team’s communications. At this time, I am comparing major.minor.patch to the official release, as announced by the Solana team. In the future, that might change to reflect who is in-sync with the rest of the operators.
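The comparison itself is simple; a minimal sketch (version strings below are made up for illustration):

```python
def version_lag(installed: str, official: str) -> tuple:
    """Component-wise gap between an installed major.minor.patch
    version and the official release; positive values mean the
    validator is behind."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))
    inst, off = parse(installed), parse(official)
    return tuple(o - i for o, i in zip(off, inst))

def matches_official(installed: str, official: str) -> bool:
    """True only when all three components match exactly."""
    return version_lag(installed, official) == (0, 0, 0)
```

A future "in sync with the rest of the operators" score would swap the official release for, say, the modal version across the cluster, but the comparison logic stays the same.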
Mike, you make a good point on fast vs. slow metrics. In the future, we could expand to include both fast-moving averages and slow-moving averages. Like software version, some of the scores will adjust in real-time, and that's probably OK too.
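One simple way to get a fast and a slow view of the same metric is a pair of exponential moving averages with different smoothing factors, sketched below (the alpha values are arbitrary examples):

```python
def ema(values, alpha):
    """Exponential moving average over a sequence.

    A higher alpha weights recent values more heavily, so the
    average reacts faster; a lower alpha gives a slower,
    smoother signal.
    """
    avg = None
    for v in values:
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
    return avg

# The same skip-rate history read two ways:
#   fast = ema(history, alpha=0.5)   # reacts within a few samples
#   slow = ema(history, alpha=0.05)  # reflects long-run behavior
```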
Leo, excellent points about showing that each operator takes security seriously. There are some steps we can take now to head in that direction.
For starters, we can encourage each validator to post a public version of their security audit to the web. The freely available version of the security review should use a recognized template, like CIS, etc. (Even better if it is a blockchain-specific template.)
If it is a self-audit, the validator will post a PDF to their website. If a third-party audit, the auditor will post to their website instead. A signed “Proof of Publication” can then be posted to the Solana blockchain using the new memo feature. The memo will include a URI for the PDF plus a SHA256 hash of the document. The reader can then use the SHA256 to verify that they have the correct published version.
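The verification step above is just a hash comparison. A minimal sketch in Python, assuming the reader has fetched the PDF bytes from the URI and the hex digest from the on-chain memo:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA256 digest of a document's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_publication(pdf_bytes: bytes, memo_hash: str) -> bool:
    """Compare the downloaded PDF's digest to the SHA256 recorded
    in the on-chain memo; True means the reader has the same
    document the validator published."""
    return sha256_hex(pdf_bytes) == memo_hash.lower()
```

In practice you would read the bytes from the downloaded file (`open(path, "rb").read()`); any edit to the document, however small, changes the digest and fails the check.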
Send the memo to one of my addresses, and I will put the URI + SHA256 onto www.validators.app. I will also review each audit and assign a score of 1 point for a self-audit or 2 points for a third-party review. Is this a perfect solution? No. Is this a step in the right direction? Yes! It will allow investors to review some essential security documentation and initiate a conversation with a validator for further due diligence.
That’s it for now. Keep sending comments/feedback. Thanks!