Remove 0mq #43
Kevin and I noticed an issue syncing from full nodes that are behind the Sapling testnet activation height. I will add commits for this soon. However, I did add the logical commit to pull from Sapling height instead of 0, because the core lightwalletd code uses only Sapling. Pulling blocks prior to height 280,000 on testnet doesn't make sense for this case.
For anyone looking at using this PR to replace ZMQ, it is not ready for production. In particular, it does not handle reorgs yet (we need to manually implement functionality that we were previously relying on ZMQ to provide). Update from mdr0id: please note this is now addressed as of commit fec37df below.
When there's a reorg, transactions from reorg'd blocks aren't deleted, so I think lightwalletd clients will continue to be told those transactions exist even when they don't. I think you can double-spend like this:
- Send money to a lightwalletd client, hope there's an accidental small reorg (or make one happen).
- If there is, resend that money somewhere else on the new chain; the client still thinks they got paid.

There needs to be a clear contract between lightwalletd and its clients for who's responsible for handling what during a reorg. One option is for lightwalletd to withhold blocks long enough that clients "never" have to worry. Another is for lightwalletd to promise to keep all the data it returns consistent with the chain it's following (blocks/data get updated, transactions disappear, without any explicit notification to the client), and the client is then also responsible for detecting and handling reorgs. If it's the latter, that should be documented.
Edit: raised #52
if err != nil {
    log.WithFields(logrus.Fields{
        "error": err,
    }).Fatal("couldn't start rpc connection")
}
This might silence the fact that the user's zcash.conf
is messed up somehow. Is there a reason it can't be a fatal error when the zcash.conf
doesn't work?
Agreed, the error handling around loading NewRPCcreds (or whatever it's called) needs some TLC. I will address it soon.
yes := cfg.Section("").HasKey("rpcbind")
There are a few issues with syncing properly on start and stop of block streams from a full zcashd node. This change allows lightwalletd to gracefully start/stop streams of blocks via getblock RPC calls. Additionally, it allows lightwalletd to run against a full zcashd node without -rescan or power-cycling zcashd. Ultimately, this completely removes the 0mq logic.

An unsynced lightwalletd node will request blocks from the current height in its local db. If there is no existing database, the lightwalletd node will issue getblock from height 0. Once lightwalletd has reached the current height, it will poll every 60 seconds to check for new blocks from zcashd. There are a couple of upcoming improvements suited for mainnet and more stressful testnet cases, but for now this removes the majority of issues reported in the field.
To test, please ensure your zcash.conf specifies rpcuser and rpcpassword. (Important note: the code assumes your full node is at least at the current height of the local DB when you begin the getblock routine; I need to add logic for this case.)

On a side note, @str4d and I have looked into getblock performance improvements upstream, and PRs are already in place.

Should close #11