Sometimes, transactions disappear from txpool rather than being mined into the next block #14893
Geth version: Version: 1.6.7-stable
I submit several hundred transactions with geth. They are all visible in txpool and all have legitimate nonce values. There are too many to mine into one block, so they should be mined into consecutive blocks.
See the log below for an example where the first two blocks worked but subsequent txns were lost.
107 txns are selected for the first block (nr 104). The block is mined and I see 107 "Remove old pending transaction" log messages.
The txpool is now empty and the remaining txns are left in a limbo state.
From the log below you can see that the nonces for the created txns are from 1322 to 1821.
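As a sanity check on those numbers, an inclusive nonce range of 1322 to 1821 corresponds to 500 transactions:

```go
package main

import "fmt"

func main() {
	// Nonces observed in the log, inclusive on both ends.
	first, last := 1322, 1821
	fmt.Println(last - first + 1) // 500 transactions
}
```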
It looks like:
the txpool is empty, the first 214 txns (two blocks' worth) are mined, but the rest are in some kind of limbo.
Steps to reproduce the behaviour
Here is my genesis.json
I run geth with
(I use those flags to minimise TRACE logs, the symptoms are the same even if I don't set nodiscover and maxpeers).
And then run this script:
Example console log showing a successful block and then unsuccessful ones.
It seems a few other people are reporting similar symptoms on Ethereum Stack Exchange.
I've been doing some more digging, and it looks like the issue depends on the order in which the code responds to a ChainHeadEvent.
I've put some log.Info calls into my local copy of geth; see below for the three scenarios described above playing out:
My txpool contains 500 pending txns with nonce from 11675 to 12174 (some lines removed for brevity). In my env each block can fit about 150 txns, so 4 blocks should be sufficient.
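That block count is just ceiling division, which can be sketched as (the capacity of ~150 txns per block is an observation from my environment, not a geth constant):

```go
package main

import "fmt"

// blocksNeeded is a back-of-envelope check, not geth code: how many blocks
// it takes to drain n pending txns at roughly perBlock txns per block.
func blocksNeeded(n, perBlock int) int {
	return (n + perBlock - 1) / perBlock // ceiling division
}

func main() {
	fmt.Println(blocksNeeded(500, 150)) // 4 blocks
}
```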
Log results from mining these txns:
Scenarios highlighted in this example:
To prove that future blocks won't have any txns mined, see extract below:
Now if I exit geth and restart it, the txpool will be recreated.
And future mining can work as expected (assuming that ChainHeadEvent leads to demoteUnexecutables being completed before commitNewWork is called).
In this example, the ChainHeadEvent led to commitNewWork being called before resetState, so there were still some txns left in limbo.
To recap my understanding of what is happening here: ChainHeadEvent fires, and this triggers code in both core/tx_pool.go and miner/worker.go. If the code in core/tx_pool.go happens to execute first, all is well. If, however, the code in miner/worker.go executes before core/tx_pool.go has finished processing, you can end up with an inconsistent txpool that causes future mining to not work as expected.
I have the same issue.
Yes looks good on master, thanks: