
Testnet MVP #96

arnetheduck opened this Issue Feb 7, 2019 · 0 comments


The general idea is to switch the beacon_node backend to libp2p using the daemon, and to develop a simple protocol on top using SSZ for serialization.

Further, the idea is to introduce a management layer that handles request retries and coordinates peer scoring. Basically, the attestation and block pools signal the hashes they need, and a separate layer decides how to fetch them from the peer layer (how many concurrent requests, when to retry the same block). When blocks arrive, whether from broadcasts or from requests, they should flow into the pools the same way.

  • Switch to libp2p via daemon (postponed)
  • Specify simple SSZ-based messages for network operations
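The fetch-coordination idea above can be sketched roughly as follows (illustrative Python, not Nim; all names such as `FetchCoordinator`, `MAX_CONCURRENT`, and `MAX_RETRIES` are hypothetical, not the actual nim-beacon-chain design):

```python
from collections import deque

MAX_CONCURRENT = 4   # how many requests may be in flight at once (assumed cap)
MAX_RETRIES = 3      # give up on a hash after this many failed attempts

class FetchCoordinator:
    def __init__(self):
        self.wanted = deque()      # hashes the pools asked for, FIFO
        self.in_flight = set()     # hashes currently requested from peers
        self.retries = {}          # hash -> failed attempt count

    def want(self, block_hash):
        """Called by the attestation/block pools when they need a block."""
        if block_hash not in self.in_flight and block_hash not in self.wanted:
            self.wanted.append(block_hash)

    def next_requests(self):
        """Pick hashes to request now, respecting the concurrency cap."""
        out = []
        while self.wanted and len(self.in_flight) < MAX_CONCURRENT:
            h = self.wanted.popleft()
            self.in_flight.add(h)
            out.append(h)
        return out

    def on_failure(self, block_hash):
        """Retry the same hash until the retry budget is exhausted."""
        self.in_flight.discard(block_hash)
        n = self.retries.get(block_hash, 0) + 1
        self.retries[block_hash] = n
        if n < MAX_RETRIES:
            self.wanted.append(block_hash)

    def on_block(self, block_hash):
        """A block arrived (broadcast or response) - clear the bookkeeping."""
        self.in_flight.discard(block_hash)
        self.retries.pop(block_hash, None)
```

The point of keeping this in its own layer is that the pools stay declarative ("I need hash X") while policy knobs like concurrency and retry counts live in one place.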

State sync

When joining the testnet, the client will be behind. We will restart the testnet regularly in the beginning, so we primarily need the capability to catch up via "full sync" - downloading all blocks. The other case where blocks are needed is when an attestation or block is received but its dependent blocks are not (lost in transit, missing history, unknown fork, etc.).

  • Block request (request by hash or equivalent)
  • State recovery (low prio)
  • State diff / light client (low prio)
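The full-sync case above can be sketched as a backward walk over parent links followed by forward application (Python for illustration, not Nim; the `request_block` callable and block layout are assumptions):

```python
def full_sync(local_chain, request_block, head_hash):
    """Download all blocks between our chain and the network head.

    local_chain:   dict of hash -> block we already have
    request_block: callable(hash) -> block or None (peer request, assumed API)
    head_hash:     hash of the remote head we want to reach
    """
    missing = []
    h = head_hash
    # Walk parent links backwards until we hit a block we already know.
    while h is not None and h not in local_chain:
        blk = request_block(h)
        if blk is None:
            raise RuntimeError("peer could not supply block %s" % h)
        missing.append(blk)
        h = blk["parent"]
    # Apply oldest-first so every parent exists before its child.
    for blk in reversed(missing):
        local_chain[blk["hash"]] = blk
    return len(missing)
```

The same request-by-hash primitive also covers the second case: resolving a single missing dependency of an incoming attestation or block.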


After validating or proposing blocks, these will be (naively) gossiped to all other participants so they can count votes and decide on forks. The simplest implementation seems to be to publish attestations with a single signature, then aggregate lazily as needed (for example when proposing a block).

  • Attestations
  • Proposer blocks
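The lazy-aggregation idea can be sketched like this (illustrative Python, not Nim): each published attestation carries one signature and a participation bitfield with a single bit set, and aggregation just groups by the attested data and merges bitfields. Real clients aggregate BLS signatures; here a plain list stands in for that, since the cryptography is out of scope.

```python
from collections import defaultdict

def aggregate(attestations):
    """Group attestations by the data they vote for and merge bitfields."""
    groups = defaultdict(lambda: {"bits": 0, "sigs": []})
    for att in attestations:
        g = groups[att["data"]]       # same (slot, target, ...) payload
        g["bits"] |= att["bits"]      # OR the participation bitfields
        g["sigs"].append(att["sig"])  # placeholder for BLS signature aggregation
    return dict(groups)
```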

libp2p considerations

When switching to libp2p, make sure at a minimum that these issues are covered and handled correctly:

  • Peer/service discovery, including features - use private libp2p network or piggyback on ipfs?
  • Version negotiation - spec version, protocol features
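For the version-negotiation bullet, a hypothetical handshake check might look like this (field names and the exact-match policy on spec version are assumptions for illustration, not a defined protocol):

```python
def negotiate(ours, theirs):
    """Return the shared feature set, or None if spec versions differ."""
    if ours["spec_version"] != theirs["spec_version"]:
        return None  # incompatible peer: disconnect or score down
    return sorted(set(ours["features"]) & set(theirs["features"]))
```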

Fork management

Forks start with the latest finalized block and build a tree of possible futures from there. The idea is to manage known blocks and attestations as a collection, and take action to fill out that collection as needed.

There are many race conditions that all need to be handled gracefully, and the code should have room to modify the strategy for handling these:

  • attestation with unknown block
  • block with unknown parent
  • etc

One problem to consider is worst-case performance when malicious validators post blocks (for example, lots of unviable forks/blocks causing data-structure and network-traffic growth).

  • Attestation pool
  • Block pool
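One of the races listed above, a block arriving before its parent, can be handled by parking the orphan under its missing parent hash and attaching it once the parent shows up. A minimal sketch (Python, not Nim; class and field names are illustrative, not the actual pool design):

```python
class BlockPool:
    def __init__(self, genesis_hash):
        self.known = {genesis_hash}  # hashes attached to the fork tree
        self.orphans = {}            # missing parent hash -> waiting blocks

    def add(self, block):
        """Attach a block, or park it until its parent is known."""
        if block["parent"] not in self.known:
            self.orphans.setdefault(block["parent"], []).append(block)
            return False
        self._attach(block)
        return True

    def _attach(self, block):
        self.known.add(block["hash"])
        # A newly attached block may unblock parked children, recursively.
        for child in self.orphans.pop(block["hash"], []):
            self._attach(child)
```

Note that for the worst-case concern above, a real pool would also need bounds on `orphans` so an attacker cannot grow it without limit.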

Validator management

Adding and removing validators is somewhat in flux, so for now the plan is not to use the ETH1 contract for this feature. Initially, we'll just publish JSON files with validator data (private key, etc.) and manage overlap socially. The majority will likely be used in pre-configured beacon nodes running on a server, while some will be reserved for developers to play with.

Potential issues include two people running the same validator - this is actually a feature, as it will help us discover problems when it happens (for example, if we receive an attestation signed by our own key that we did not send, it's a warning sign that the private key is being reused).

  • Share validator JSON, let people manage socially
  • Web service to add/remove validators (low prio)
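The duplicate-key warning sign described above amounts to a simple check (Python sketch; the attestation fields and function name are made up for illustration): an attestation signed by one of our own validators that is absent from our own send log means someone else is running the same key.

```python
def check_duplicate_validator(att, my_pubkeys, sent_log):
    """Return True if this attestation suggests our key is reused elsewhere.

    att:        received attestation, with 'pubkey' and a unique 'id'
    my_pubkeys: set of public keys of validators this node runs
    sent_log:   set of attestation ids this node actually broadcast
    """
    return att["pubkey"] in my_pubkeys and att["id"] not in sent_log
```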


We'll initially deploy one or more boot nodes on a server, each hosting a number of validators. The general idea is to restart the testnet frequently. People wanting to connect will get genesis and validator information from the server via... whatever works (an HTTP listing, say).

  • Servers & automated deployment
  • Monitoring (logs etc)
  • Extra points for having a Grafana or similar deployment, with graylog or elasticsearch/logstash collecting logs from the nodes, to be able to monitor the network real-time


Don't wanna run a big network just yet 😄

  • Allow configuration of shard count etc, so as to create a smaller network
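A hedged sketch of what that configurability could look like: a config record whose defaults are scaled down for a small testnet. The field names echo spec constants such as SHARD_COUNT, but these particular values are just an example, not the spec's.

```python
from dataclasses import dataclass

@dataclass
class NetworkConfig:
    # Deliberately tiny defaults for a small testnet (illustrative values):
    shard_count: int = 8            # the mainnet spec uses far more shards
    target_committee_size: int = 4
    min_validator_count: int = 16
    slot_duration_seconds: int = 6

# Scale down even further for a local run:
small_net = NetworkConfig(shard_count=4, min_validator_count=8)
```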

Spec updates

The general idea is to follow spec releases by updating every time there's a new upstream release.

  • Verify / review v0.5.1 compatibility to reach a stable point
  • Version / release nim-beacon-chain according to spec version it supports

@mratsim pinned this issue Feb 20, 2019
