PeerDAS: Implement IncrementalDAS #14109
Conversation
beacon-chain/core/peerdas/helpers.go
Outdated
// hypergeomCDF computes the hypergeometric cumulative distribution function.
// https://en.wikipedia.org/wiki/Hypergeometric_distribution
func hypergeomCDF(k, M, n, N uint64) float64 {
Can we get some unit tests for the CDF?
We could, but this function is actually already 100% tested as part of ExtendedSampleCount. This function is not used outside of ExtendedSampleCount, and ExtendedSampleCount has its own tests.
Do you still think it’s worth adding tests?
They don't need to be extensive, but I think some basic unit tests would be useful, given that this is a statistical function we might use in other places in the future.
Fixed in 8a926f3.
@@ -116,7 +116,7 @@ var RPCTopicMappings = map[string]interface{}{
	// DataColumnSidecarsByRange v1 Message
	RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
	// DataColumnSidecarsByRoot v1 Message
-	RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
+	RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnSidecarsByRootReq),
Why a new type for this? Underneath, it is the same object type.
Because now we no longer use blob in place of dataColumn.
A solution could be:
- Remove the newly created type
- Rename the old type BlobSidecarsByRootReq into sidecarsByRootReq

What do you think?
Ah, OK. I think it's fine to leave it as you have done it. Our reusing of the blob types was previously more for code velocity than for any correctness reason.
beacon-chain/sync/service.go
Outdated
@@ -254,7 +254,7 @@ func (s *Service) Start() {
	// Run data column sampling
	if features.Get().EnablePeerDAS {
-		go s.dataColumnSampling(s.ctx)
+		go s.DataColumnSamplingLoop(s.ctx)
Maybe call it Routine instead of Loop, to be more in line with how we refer to background services. Another word such as Service works too.
Fixed in 37a4bba.
	"missingColumns": missingColumnsList,
}).Warning("Failed to sample some requested columns")

// Randomize all columns.
columns := randomSlice(columnsCount)
We want to only sample columns that we do not custody, correct? This function would do so for all of our columns.
That's actually not 100% clear to me.
If we do so, nodes custodying all the columns won't sample any columns.
A node custodying at least 50% of the columns will reconstruct and save the missing 50%.
==> A node custodying 50% of the columns will actually custody 100% of the columns. Should such nodes sample columns as well or not?
It should definitely be distinct; there is zero benefit in sampling a column you already know is available via custody. Sampling it again doesn't push anything forward in whether we know a particular block is available.

> If we do so, nodes custodying all the columns won't sample any columns. A node custodying at least 50% of the columns will reconstruct and save the missing 50%.

If you see all the columns from gossip, then there isn't any benefit to sampling them, no? The benefit I see is if you do not actually see them via gossip.
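To make the "sample only non-custody columns" point concrete, here is a small hedged Go sketch. randomNonCustodyColumns is a hypothetical helper name (the PR's randomSlice covers all columns); it shows how the custody set could be excluded before shuffling, so sampling never re-checks a column we already hold.

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomNonCustodyColumns returns all column indices in [0, columnsCount)
// that are NOT in our custody set, in random order. Hypothetical helper,
// not the PR's actual code.
func randomNonCustodyColumns(columnsCount uint64, custody map[uint64]bool) []uint64 {
	columns := make([]uint64, 0, columnsCount)
	for i := uint64(0); i < columnsCount; i++ {
		if !custody[i] {
			columns = append(columns, i)
		}
	}
	// Shuffle so sampling order is unpredictable.
	rand.Shuffle(len(columns), func(i, j int) {
		columns[i], columns[j] = columns[j], columns[i]
	})
	return columns
}

func main() {
	custody := map[uint64]bool{0: true, 1: true}
	cols := randomNonCustodyColumns(8, custody)
	fmt.Println(len(cols)) // 6: columns 0 and 1 are excluded
}
```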
New design:
- If we custody >= 64 columns ==> no need to sample. Details:
  - If we receive these columns, then we are able to reconstruct everything ==> no need to sample.
  - If some are missing, then the block won't be eligible for fork choice ==> no need to sample.
- If we custody < 64 columns:
  - As soon as we receive the block, we start to sample min(16, 64 - ourCustodyColumnsCount) columns we should NOT custody, using incrementalDAS.
Fixed in 92a1bfe.
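The rule from the design above can be sketched in Go. The constant and function names below are hypothetical, chosen only to illustrate the min(16, 64 - ourCustodyColumnsCount) rule; the PR's actual implementation may structure this differently.

```go
package main

import "fmt"

// halfExtendedColumns is an assumed name for half of the extended column
// count (64 in the design discussed above).
const halfExtendedColumns = uint64(64)

// samplesToRequest sketches the sampling rule: nodes custodying at least
// half of the columns skip sampling entirely; otherwise they sample
// min(16, 64 - custodyCount) columns they do NOT custody.
func samplesToRequest(custodyCount uint64) uint64 {
	if custodyCount >= halfExtendedColumns {
		return 0 // reconstruction (or fork-choice ineligibility) covers us
	}
	remaining := halfExtendedColumns - custodyCount
	if remaining < 16 {
		return remaining
	}
	return 16
}

func main() {
	fmt.Println(samplesToRequest(4))  // 16
	fmt.Println(samplesToRequest(60)) // 4
	fmt.Println(samplesToRequest(64)) // 0
}
```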
	}
}

log.Warning("CCCCCC")
This is not needed.
My bad. Fixed in 655f926.
Please read commit by commit.
This pull request implements IncrementalDAS.