The Bia Journey: Beginning on Goerli Testnet

Today, we are launching a critical initiative to promote community involvement in the journey towards the adoption of distributed validators.

“Bia” feature image by Chloe McGann


On our journey towards the adoption of distributed validators, it is critical to get our community involved. To do that, today we are launching an important initiative: the Bia Testing Program.

This initiative allows members of our community to get hands-on in helping us test Obol and spread the word about the importance of Distributed Validator Technology to the broader ecosystem.

Let’s get to the details! 👀

Bia Testing on Görli Testnet (Full Details)

Bia is the second official testing effort for Obol. In our first testing program, Athena, our community created 200+ distributed validator clusters running across 3+ continents using diverse at-home and cloud-based setups on Ethereum's Görli testnet. To push the limits of the technology and to seed an important community with early DVT access, Athena targeted at-home and small validators, Ethereum's core node-running community. The performance we saw from those validators was comparable in most cases to that of larger institutional operators (overall, at-home validators in Athena averaged 93% uptime, compared to 87% uptime for operators of a top LSP during the same timeframe). That was an amazing thing to see, considering the majority were clusters run by people who had just met in a Discord community! This was the main purpose of Athena: to verify that DVT could enable better validator resiliency (and performance) while also improving operator decentralization, and we believe we achieved that with flying colors.

With Bia, we want to build on our learnings from Athena and test DVT at an even larger scale, with a more diverse variety of client combinations, cluster sizes, and deployment environments. To achieve this, we want to activate more DV clusters globally and see how self-sufficient people can be throughout the entire process. In early December, we officially launched the public beta of our Distributed Validator Launchpad to make it easier to set up new clusters. During this testing, we will be putting the DV Launchpad to the test by enabling the community to configure clusters through a web-based user flow!

Part 1 of DV Launchpad Walkthrough

We will also be stress-testing Charon, our DV middleware, to simulate as many different conditions as possible and gather data on how DVs perform in optimal and suboptimal situations. To do so, we will be encouraging groups to test with different node configurations in their DV clusters. These variables include different clients, different geographies, and different hosting environments (e.g. at-home, cloud, etc.). We want to encourage as much diversity as possible in configurations, as this contributes to the impact DV middleware can have on anti-correlation, resiliency, and decentralization.

The Bia Testing Program is officially live today (we’re already at >40k registrants!) and will run until the end of March. For full details, please visit our Bia Testing Handbook.

(Disclaimer: the Bia testing effort is an unincentivized testing program.)