ZCA007 - Forever growing nullifier set will end up being stored in nonvolatile memory #1390
I honestly don't see how this is even a problem, in principle, at least for full nodes.
Each nullifier is 32 bytes, and there are two of them published for each JoinSplit. Suppose that we have 2 MiB blocks full of JoinSplits for 10 years. A JoinSplit is 1802 bytes, so that is at most ⌊2097152 / 1802⌋ = 1163 JoinSplits per 2.5-minute block. That makes at most 1163 × (60/2.5) × 24 × 365.25 × 10 × 2 × 32 bytes, which is about 156.6 GB, as the maximum size of the nullifier set after 10 years. Of course, this makes a number of assumptions: e.g. that we don't reduce the size of JoinSplits, or increase the number of JoinSplit outputs or the maximum transaction size. In any case, I think it will be relatively straightforward and cheap to store 156 GB ten years from now, or the set so far at any point before then. Indeed, under these (very conservative!) assumptions, I think it is extremely likely that the trend in storage capacity and price will mean that storing the nullifier set on secondary storage (properly indexed, which will add some but not much overhead) is never a serious problem within the life of Zcash.
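For concreteness, here is that arithmetic as a small Python sketch; the block size, JoinSplit size, and block interval are just the assumptions stated above, not protocol constants pulled from the code:

```python
# Back-of-the-envelope size of the nullifier set after 10 years,
# under the (very conservative) assumptions stated above.

BLOCK_SIZE = 2 * 1024 * 1024   # 2 MiB block, entirely full of JoinSplits
JOINSPLIT_SIZE = 1802          # bytes per JoinSplit
BLOCK_INTERVAL_MIN = 2.5       # target block interval in minutes
NULLIFIERS_PER_JS = 2          # two nullifiers published per JoinSplit
NULLIFIER_SIZE = 32            # bytes per nullifier
YEARS = 10

joinsplits_per_block = BLOCK_SIZE // JOINSPLIT_SIZE           # 1163
blocks_per_year = (60 / BLOCK_INTERVAL_MIN) * 24 * 365.25     # ~210,384
total_joinsplits = joinsplits_per_block * blocks_per_year * YEARS
set_size = total_joinsplits * NULLIFIERS_PER_JS * NULLIFIER_SIZE

print(f"{set_size / 1e9:.1f} GB")  # -> 156.6 GB
```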
(The commitment tree would run out of capacity first, but for the sake of argument let's assume we expand it as necessary. The depth doesn't actually need to be increased very much to accommodate the above assumptions.)
I agree the issue has low priority. But if the nullifier set grows larger than the available RAM (e.g. 4 GB), verifying a transaction will require at least two random disk accesses (roughly 30 ms on a non-SSD drive, at ~15 ms per seek). Transaction verification time would then no longer depend solely on CPU speed.
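To put that in perspective, here is a rough latency estimate for the block-full-of-JoinSplits scenario above; the ~15 ms seek figure is an assumed value typical of spinning disks, and the two-lookups-per-JoinSplit count follows from the two nullifiers each one publishes:

```python
# Rough seek-time cost of disk-resident nullifier lookups when
# verifying one completely full block (assumptions as stated above).

SEEK_MS = 15                 # assumed random-access latency on a non-SSD drive
LOOKUPS_PER_JS = 2           # one lookup per nullifier, two per JoinSplit
JOINSPLITS_PER_BLOCK = 1163  # full 2 MiB block, from the estimate above
BLOCK_INTERVAL_S = 150       # 2.5-minute target block interval

seek_s = JOINSPLITS_PER_BLOCK * LOOKUPS_PER_JS * SEEK_MS / 1000
print(f"~{seek_s:.0f} s of seek time per full block "
      f"(out of a {BLOCK_INTERVAL_S} s interval)")  # -> ~35 s
```

So under these assumptions a full node could still keep up with the chain, but verification latency would be dominated by disk seeks rather than by the CPU.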