1D PeerDAS prototype: Data format and Distribution #5050
Conversation
# Conflicts:
#	beacon_node/beacon_processor/src/lib.rs
#	beacon_node/lighthouse_network/src/service/mod.rs
#	beacon_node/lighthouse_network/src/types/topics.rs
#	consensus/types/src/consts.rs
Hey @jimmygchen 👋
I spoke briefly with @AgeManning at Devconnect IST about some efforts to prototype DAS in Lighthouse, but I was not aware that there was work underway! I have started on some concurrent work here. The DAS prototype will be my main focus for the time being. I'm not sure how much bandwidth the Lighthouse team has allocated to this effort, but in any case, I would be happy to collaborate where it makes sense.
Yep. We have formed a small internal team to make progress towards a DAS implementation. The idea is to keep track of tasks and issues in a GitHub project and document it in this issue: #4983. As we progress we can schedule open calls or create a dedicated Discord channel for DAS to help communication around collaboration. @jimmygchen is currently away, but we can organize the best way to collaborate together when he returns.
self.log,
"Internal error when verifying blob column sidecar";
"error" => ?err,
)
Should this propagate a general ignore validation result here, or is it still early PoC?
Yep, still early PoC! The plan is to gradually and logically get changes onto the das branch.
blob_column_sidecar: Arc<BlobColumnSidecar<T::EthSpec>>,
subnet_id: u64,
) -> Result<GossipVerifiedBlobColumnSidecar<T>, GossipBlobError<T::EthSpec>> {
metrics::inc_counter(&metrics::BLOBS_COLUMN_SIDECAR_PROCESSING_REQUESTS);
All checks in https://github.com/ethereum/consensus-specs/pull/3574/files#diff-bacd55427cc3606e97b0f11dada895de104a8f9c532aca270efae967199bf261R123 can be implemented except for the crypto, by aliasing verify_data_column_sidecar_kzg_proof to a no-op.
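As a rough illustration of that suggestion, the sketch below performs the structural gossip checks with the KZG proof check stubbed out as a no-op. All names, constants, and the index-to-subnet mapping here are assumptions for illustration, not the Lighthouse implementation:

```rust
// Assumed values for illustration only.
const DATA_COLUMN_SIDECAR_SUBNET_COUNT: u64 = 32;
const NUMBER_OF_COLUMNS: u64 = 128;

struct DataColumnSidecar {
    index: u64,
    column: Vec<[u8; 32]>, // cells as opaque 32-byte chunks (placeholder)
}

/// No-op stand-in for `verify_data_column_sidecar_kzg_proof` until the
/// crypto is available: always accepts.
fn verify_data_column_sidecar_kzg_proof(_sidecar: &DataColumnSidecar) -> bool {
    true
}

fn validate_data_column_sidecar(
    sidecar: &DataColumnSidecar,
    subnet_id: u64,
) -> Result<(), String> {
    // [REJECT] The sidecar index must be within the column count.
    if sidecar.index >= NUMBER_OF_COLUMNS {
        return Err(format!("invalid column index {}", sidecar.index));
    }
    // [REJECT] The sidecar must arrive on the subnet derived from its index
    // (assumed mapping: index modulo subnet count).
    if sidecar.index % DATA_COLUMN_SIDECAR_SUBNET_COUNT != subnet_id {
        return Err("wrong subnet for column index".into());
    }
    // KZG proof check is aliased to a no-op for now.
    if !verify_data_column_sidecar_kzg_proof(sidecar) {
        return Err("invalid KZG proof".into());
    }
    Ok(())
}
```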
blob_column: GossipVerifiedBlobColumnSidecar<T>,
) {
let blob_column = blob_column.as_blob_column();
// TODO(das) send to DA checker
Until the crypto is ready, a basic implementation could just check that >50% of columns per row have been seen, without attempting reconstruction.
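A minimal sketch of that interim availability check, tracking seen column indices per block and declaring availability once strictly more than half have been observed (names and the column count are illustrative assumptions):

```rust
use std::collections::HashSet;

const NUMBER_OF_COLUMNS: usize = 128; // assumed column count

/// Tracks which column indices have been seen for a given block.
#[derive(Default)]
struct SeenColumns {
    seen: HashSet<usize>,
}

impl SeenColumns {
    fn observe(&mut self, column_index: usize) {
        self.seen.insert(column_index);
    }

    /// Until erasure-coding is wired up, treat the data as available once
    /// strictly more than 50% of columns have been observed, without
    /// attempting reconstruction.
    fn is_available(&self) -> bool {
        self.seen.len() * 2 > NUMBER_OF_COLUMNS
    }
}
```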
Skimmed this; agree to merge to the DAS branch and use this as a base to move forward. We can fix any missed bugs as we go.
Issue Addressed
This PR contains the initial groundwork of the 1D PeerDAS prototype based on this research post: From 4844 to Danksharding: a path to scaling Ethereum DA. It mostly covers changes mentioned under the Data format and Distribution sections.
EDIT: The PR has now been updated with changes from consensus-spec#3574.
This initial prototype does not currently build column data from erasure-coded blobs; instead it generates random column data based on the number of blobs per block, which means the size of the columns is the same as if they were built from erasure-coded blobs. This will allow us to experiment with distributing the DataColumnSidecar objects in a network and gather some initial measurements on bandwidth, feasibility and data availability.
Changes
- Added the DataColumnSidecar type. This is used as the sampling unit for columns.
- The proposer produces DataColumnSidecar objects, filled with dummy data, with size derived from the number of blobs associated with the block, and publishes them to NUM_COLUMN_SUBNETS (32) column subnets over gossip.
- Nodes subscribe to column subnets for custody (CUSTODY_REQUIREMENT = 1), with the subnet id determined using the NodeId (see the compute_subnets_for_data_column function).
Running a Network using Kurtosis
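To make the custody-subnet derivation concrete, here is a hedged sketch of what a `compute_subnets_for_data_column`-style function could look like. The real function operates on a 256-bit `NodeId`; a `u64` and a simple modular assignment stand in here purely for illustration:

```rust
// Assumed values for illustration.
const DATA_COLUMN_SUBNET_COUNT: u64 = 32;

/// Sketch: derive `custody_count` consecutive custody subnets from a node
/// id. Not the actual Lighthouse/spec algorithm.
fn compute_subnets_for_data_column(node_id: u64, custody_count: u64) -> Vec<u64> {
    (0..custody_count)
        .map(|i| (node_id + i) % DATA_COLUMN_SUBNET_COUNT)
        .collect()
}
```

With `CUSTODY_REQUIREMENT = 1`, each node would land on a single subnet determined by its id.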
I've got a little devnet running locally using Kurtosis, and it shows blob columns getting published and received by peers. Instructions can be found in this gist.
Below are logs from a node subscribed to 1 column subnet - they show a proposer publishing column sidecars to all subnets, and receiving column sidecars on the subnets that it's subscribed to (in this case columns 20, 21, 22 & 23).
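The columns visible in these logs are the dummy sidecars described above, sized from the blob count so the bandwidth measurements are realistic. A minimal sketch of that sizing (the per-cell constant and function name are illustrative assumptions, not spec values):

```rust
// Illustrative per-blob contribution to each column, not the spec value.
const BYTES_PER_CELL: usize = 2048;

/// Build a dummy column whose byte length scales with the number of blobs
/// in the block (one cell per blob row), so it occupies the same bandwidth
/// as a real erasure-coded column would.
fn dummy_column_bytes(num_blobs: usize) -> Vec<u8> {
    vec![0u8; num_blobs * BYTES_PER_CELL]
}
```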
Next Steps: Sampling
The next phase of the prototype would cover the sampling of peers, which includes:
- Sampling columns each slot (SAMPLES_PER_SLOT)
- Determining subnets to sample via DataColumnSubnetId::compute_subnets_for_data_column
- Increasing MAX_BLOBS to 32; note that changing MAX_BLOBS will require a hard fork
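The column-selection step could be sketched as picking `SAMPLES_PER_SLOT` distinct column indices per slot. The constants, the function name, and the xorshift stand-in for a real RNG are all assumptions for illustration:

```rust
// Assumed values for illustration.
const NUMBER_OF_COLUMNS: u64 = 128;
const SAMPLES_PER_SLOT: usize = 8;

/// Sketch: pick `SAMPLES_PER_SLOT` distinct column indices to sample,
/// derived deterministically from a per-slot seed. A real implementation
/// would use a proper RNG rather than this xorshift stand-in.
fn select_sample_columns(seed: u64) -> Vec<u64> {
    let mut columns = Vec::new();
    // xorshift64 requires a non-zero state.
    let mut x = if seed == 0 { 1 } else { seed };
    while columns.len() < SAMPLES_PER_SLOT {
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        let candidate = x % NUMBER_OF_COLUMNS;
        // Skip duplicates so every sampled column is distinct.
        if !columns.contains(&candidate) {
            columns.push(candidate);
        }
    }
    columns
}
```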