Validator Performance Metrics

This thread is a continuation of a conversation regarding performance metrics & scores on Discord.

I am catching up on yesterday's conversation regarding the scores on the site. Thanks to everyone who shared their opinion.

When I build things, I like to start with a Minimum Viable Product (MVP) and then iterate towards a better product. The original beta scores are my MVP. I agree that we need to measure the right things. I fully expect things to change, for the better, over time. Every journey starts with the first step.

Solana is not a trustless network – there is a significant amount of trust required for an investor to delegate stake to a validator. Transparency and accountability are essential components of that relationship. The metrics and scores will help build both transparency and accountability.

I don’t see the incentive for anyone to game the system here. Lying about software versions or intentionally submitting only ticks will destroy trust with investors. Tell lies to your investors, and you will lose those investors. It matters that validators have some “skin in the game.”

I’ve been watching ‘Skipped After %’ for a while now, and it seems that specific nodes are consistently having trouble. I believe that the topic requires further research to find the cause. In the end, we may find that it is not the validator’s fault, but I plan to keep a spotlight on the metric until we figure it out. Perhaps I will remove that as a scoring factor, but leave it on the page as a supplemental metric (like ping times).

We have all been using ‘Skipped Slot %’ as an essential measurement, so I’ll keep scoring it. However, I have noticed that the scores jump around when they reset after each new epoch. The scores finally settle down as the epoch progresses. I think that a trailing 2-day average will work better here.
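The trailing-average idea above can be sketched in a few lines. This is a minimal illustration, not the site's actual implementation; the sample values and the 48-sample window (hourly samples over two days) are assumptions.

```python
# Sketch: smoothing 'Skipped Slot %' with a trailing window instead of
# letting the score reset and jump around at each new epoch.
from collections import deque

def trailing_average(samples, window):
    """Yield the mean of the most recent `window` samples at each step."""
    recent = deque(maxlen=window)
    for s in samples:
        recent.append(s)
        yield sum(recent) / len(recent)

# Hypothetical hourly skipped-slot percentages; 48 samples ~ 2 days.
hourly = [12.0, 3.0, 4.5, 5.0, 4.0, 6.0]
smoothed = list(trailing_average(hourly, window=48))
```

Early in the window the average is noisy (it only has a few samples), which mirrors the early-epoch jumpiness described above, but it settles as data accumulates and never resets to zero.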

Several people here have mentioned the software version as a measure of a validator’s attentiveness to the Solana team’s communications. At this time, I am comparing major.minor.patch to the official release, as announced by the Solana team. In the future, that might change to reflect who is in-sync with the rest of the operators.
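A version check of the kind described could look like the following sketch. The scoring value and version strings are placeholders, not the site's real scoring weights.

```python
# Sketch: compare a validator's reported major.minor.patch version to
# the official release announced by the Solana team.
def parse_version(v):
    """Split 'major.minor.patch' into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

def version_score(reported, official):
    """1 point if the validator matches the announced release exactly."""
    return 1 if parse_version(reported) == parse_version(official) else 0

version_score("1.2.0", "1.2.0")  # up to date
version_score("1.1.9", "1.2.0")  # lagging the announced release
```

Tuple comparison also makes "who is ahead or behind" cheap to compute later, if the scoring changes to reflect who is in sync with the rest of the operators.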

Mike, you make a good point on fast v. slow metrics. In the future, we could expand to include both fast-moving averages and slow-moving averages. Like software version, some of the scores will adjust in real-time, and that’s probably OK too.

Leo, excellent points about showing that each operator takes security seriously. There are some steps we can take now to head in that direction.

For starters, we can encourage each validator to post a public version of their security audit to the web. The freely available version of the security review should use a recognized template, like CIS, etc. (Even better if it is a blockchain-specific template.)

If it is a self-audit, the validator will post a PDF to their website. If a third-party audit, the auditor will post to their website instead. A signed “Proof of Publication” can then be posted to the Solana blockchain using the new memo feature. The memo will include a URI for the PDF plus a SHA256 hash of the document. The reader can then use the SHA256 to verify that they have the correct published version.
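A reader's verification step could be as simple as the sketch below. The file path and memo hash here are placeholders; only the hashing procedure itself is the point.

```python
# Sketch: verify a downloaded audit PDF against the SHA256 hash
# published in the on-chain memo.
import hashlib

def sha256_of_file(path):
    """Hash the file in chunks so large PDFs don't load into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_memo(path, memo_hash):
    """True if the local document matches the hash from the memo."""
    return sha256_of_file(path) == memo_hash.lower()
```

If the hashes match, the reader knows they hold the exact version that was published; any edit to the PDF after publication would change the digest.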

Send the memo to one of my addresses, and I will put the URI + SHA256 on the site. I will also review each audit and assign a score of 1 point for a self-audit and 2 points for a third-party review. Is this a perfect solution? No. Is this a step in the right direction? Yes! It will allow investors to review some essential security documentation and initiate a conversation with a validator for further due diligence.

That’s it for now. Keep sending comments/feedback. Thanks!


Thank you @brian.long for putting in the hard yards and creating the site.

There is so much to dig into with these numbers coming out of the app.

As validators, we will get the most out of it if/when we choose to look into why some numbers/metrics are the way they are, and find ways to improve them at the software level, node level, validator community level, network level, investor level, and finally, but not least, for the users and customers of this blazing-fast technology.

I, for one, have a keen interest in researching further how we can move the needle on operator effectiveness, productivity, and performance.

Still early days, but I am tinkering with a Helm chart to see how to build a wrapper around our Solana binaries, especially as we move to a more professional-grade Infrastructure-as-Code (IaC) method of managing our nodes.

Solana binaries:

├── solana
├── solana-bench-exchange
├── solana-bench-tps
├── solana-dos
├── solana-faucet
├── solana-genesis
├── solana-gossip
├── solana-install
├── solana-install-init
├── solana-keygen
├── solana-ledger-tool
├── solana-log-analyzer
├── solana-net-shaper
├── solana-stake-accounts
├── solana-stake-monitor
├── solana-stake-o-matic
├── solana-sys-tuner
├── solana-tokens
├── solana-validator
└── solana-watchtower

I have added some additional charts, but I am concerned about render performance. Let me know if you see any render problems.

I’ve added pagination & sorting. Render time doesn’t suck.


Request for comment:

  1. Remove the ‘Skipped After %’ metric from the score. The team is working on a fix for that problem and a leader’s performance should no longer have an impact on the next leader in line.

  2. Add a score to reward a validator for posting their info to the blockchain. 1/2 point each for name, keybase, website, & description. 2 points total. Investors want to know who controls a given validator. The contact information will allow the investor to reach out to the validator for a conversation.

  3. Add a score for public security audits. 1 point for a self-audit, 2 points for a third-party audit. Investors will be comforted to know that a validator has engaged in a security audit and published a public version of that audit. We should be able to use the new memo feature for “proof of publication” with a URI for the audit PDF and a SHA256 hash to verify the document. I will review each audit to determine if it is a self-audit or third-party audit (I won’t evaluate beyond that).
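Items 2 and 3 above could be scored with something like the following sketch. The field names and dictionary shape are my assumptions for illustration; the point values come straight from the proposal.

```python
# Sketch of the proposed scoring: 1/2 point per published validator-info
# field (2 points max), plus 1 point for a self-audit or 2 points for a
# third-party audit.
INFO_FIELDS = ("name", "keybase", "website", "description")

def info_score(published):
    """published: dict of validator-info fields; empty values earn nothing."""
    return sum(0.5 for field in INFO_FIELDS if published.get(field))

def audit_score(audit_type):
    """audit_type: None, 'self', or 'third-party'."""
    return {"self": 1, "third-party": 2}.get(audit_type, 0)

total = info_score({"name": "Example", "website": "https://example.com"}) \
        + audit_score("self")
```

In this example the validator published two of the four info fields and posted a self-audit, so the total is 1.0 + 1 = 2.0 out of a possible 4.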

Thoughts? Other suggestions for scoreable-items?

Last weekend, I made some UX enhancements to the site and updated the scores/data points shown there:

  • Removed the ‘Skipped After %’ score since I think the team will soon fix this problem.
  • Added a two-point score for the operator’s engagement with the validator-info feature on the blockchain. Each validator needs to publish information to be visible to investors/delegators. This score should encourage the operators to be visible to the community.
  • Added a one-point score if the operator sends me a link to their web page describing security & operating policies. The weblink also appears on the site. I am not evaluating those policies; I am merely confirming that the page exists and discusses security. It will be up to the investors/delegators to determine if the guidelines are adequate.
  • Enhanced the API to return data points that might be useful for stake-bots.

Please let me know if there are other scores/data points that I should add to the API. Other comments are welcome too!
