ZCA-007 - Forever-growing nullifier set will end up being stored in nonvolatile memory #1390

Open
nathan-at-least opened this Issue Sep 13, 2016 · 3 comments

nathan-at-least commented Sep 13, 2016

No description provided.

nathan-at-least commented Sep 13, 2016

This is the primary known protocol scaling issue in our design. Fixing it is an open research question. With the current protocol, and assuming full blocks of JoinSplit transactions, we believe this won't be a concern for at least 2 years.

daira added the not in 1.0 label Sep 14, 2016

daira commented Sep 14, 2016

I honestly don't see how this is even a problem, in principle, at least for full nodes.

Each nullifier is 32 bytes, and there are two of them published for each JoinSplit. Suppose that we have 2 MB blocks full of JoinSplits for 10 years. A JoinSplit is 1802 bytes, so that is at most 1163 JoinSplits per 2.5 minutes. That makes at most 1163 × (60/2.5) × 24 × 365.25 × 10 × 2 × 32 bytes, which is about 156.6 GB, as the maximum size of the nullifier set after 10 years.

Of course, this makes a number of assumptions: e.g. that we don't reduce the size of JoinSplits, or increase the number of JoinSplit outputs or the maximum transaction size.

In any case, I think it will be relatively straightforward and cheap to store 156 GB ten years from now, or the set so far at any point before then. Indeed, under these (very conservative!) assumptions, I think it is extremely likely that the trend in storage capacity and price will mean that storing the nullifier set on secondary storage (properly indexed, which will add some but not much overhead) is never a serious problem within the life of Zcash.
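
For reference, the same back-of-envelope as a small Python sketch (reading the "2 MB" block size as 2 MiB, which is what the 1163 JoinSplits-per-block figure implies):

```python
# Back-of-envelope: maximum nullifier-set size after 10 years,
# under the assumptions above.
BLOCK_SIZE = 2 * 1024 * 1024      # bytes; "2 MB" read as MiB, matching 1163
JOINSPLIT_SIZE = 1802             # bytes per JoinSplit
BLOCKS_PER_DAY = (60 / 2.5) * 24  # 2.5-minute block interval
NULLIFIERS_PER_JS = 2
NULLIFIER_BYTES = 32
YEARS = 10

js_per_block = BLOCK_SIZE // JOINSPLIT_SIZE            # 1163
total_js = js_per_block * BLOCKS_PER_DAY * 365.25 * YEARS
total_bytes = total_js * NULLIFIERS_PER_JS * NULLIFIER_BYTES
print(f"{total_bytes / 1e9:.1f} GB")                   # -> 156.6 GB
```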

(The commitment tree would run out first, but for the sake of argument let's assume we expand that as necessary. The depth doesn't need to be expanded very much to accommodate the above assumptions, actually.)
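
To put the depth remark in numbers, a quick sketch; the 29-level baseline is my recollection of the current tree depth, so treat it as an assumption:

```python
import math

# Same assumptions as above: 2 output commitments per JoinSplit,
# 1163 JoinSplits per 2.5-minute block, for 10 years.
commitments = 1163 * (60 / 2.5) * 24 * 365.25 * 10 * 2  # ~4.89 billion
print(math.ceil(math.log2(commitments)))                 # -> 33 levels needed
```

So a depth of 33 would suffice, only a few levels beyond the assumed current depth of 29.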

SergioDemianLerner commented Sep 16, 2016

I agree the issue has low priority. But once the nullifier set grows larger than the available RAM (e.g. 4 GB), verifying a transaction will require at least two random disk accesses (roughly 30 ms on a non-SSD drive). Then transaction verification time will no longer depend solely on CPU speed.
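
For a rough sense of that ceiling, a sketch in Python (the 15 ms seek time is an assumed figure for a spinning disk, not a measurement):

```python
SEEK_MS = 15                  # assumed HDD random-access latency
LOOKUPS_PER_JS = 2            # two nullifiers checked per JoinSplit

io_ms_per_js = SEEK_MS * LOOKUPS_PER_JS   # ~30 ms of I/O per JoinSplit
disk_bound_rate = 1000 / io_ms_per_js     # ~33 JoinSplits/s, seek-bound
full_block_rate = 1163 / 150              # ~7.8 JoinSplits/s in full blocks
print(f"{disk_bound_rate:.0f} vs {full_block_rate:.1f} JoinSplits/s")
```

Under these assumptions a single disk could still keep pace with full blocks at the tip, but only by a factor of about four, and verification would indeed no longer be CPU-bound.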
