
Chain rewind and failure to sync with err=missing parent after non-graceful restart of nodes #1117

Closed
vdamle opened this issue Jan 21, 2021 · 16 comments · Fixed by #1149
Contributor

vdamle commented Jan 21, 2021

System information

Geth version: Geth/v1.9.7-stable-c6ed3ed2(quorum-v20.10.0)

OS & Version: linux-amd64

Branch, Commit Hash or Release: quorum-v20.10.0

Expected behaviour

On a non-graceful shutdown/restart of a gcmode=full, syncmode=full node, it is expected that the local full block may rewind to a block in the past. However, the node must then be able to sync to the latest block by fetching the missing headers/blocks and rebuilding any missing state.

Actual behaviour

The node must be able to fetch the missing headers/blocks and rebuild state from the previous full block to catch up with the rest of the chain. Instead, we see that the node rewinds the chain, truncates the ancient store, and then fails block synchronisation with err="missing parent", as shown in the logs below:

WARN [01-19|13:46:48.982] Head state missing, repairing chain      number=2668720 hash=98b961…a8cf87
DEBUG[01-19|13:47:48.961] Ancient blocks frozen already            number=2668720 hash=98b961…a8cf87 frozen=2578720
DEBUG[01-19|13:48:44.456] Privacy metadata root                    hash=000000…000000
INFO [01-19|13:48:44.458] Rewound blockchain to past state         number=2457534 hash=e6aa7c…3019d5
DEBUG[01-19|13:48:44.462] Privacy metadata root                    hash=000000…000000
INFO [01-19|13:48:44.474] Loaded most recent local header          number=2668720 hash=98b961…a8cf87 td=2668721 age=2m13s
INFO [01-19|13:48:44.475] Loaded most recent local full block      number=2457534 hash=e6aa7c…3019d5 td=2457535 age=1w5d14h
INFO [01-19|13:48:44.475] Loaded most recent local fast block      number=2668720 hash=98b961…a8cf87 td=2668721 age=2m13s
DEBUG[01-19|13:48:48.985] Ancient blocks frozen already            number=2457534 hash=e6aa7c…3019d5 frozen=2578720
WARN [01-19|13:49:37.519] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=headers  items=2578720 limit=2457535
WARN [01-19|13:49:38.035] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=hashes   items=2578720 limit=2457535
WARN [01-19|13:49:38.349] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=bodies   items=2578720 limit=2457535
WARN [01-19|13:49:38.588] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=receipts items=2578720 limit=2457535
WARN [01-19|13:49:39.079] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=diffs    items=2578720 limit=2457535
INFO [01-19|13:49:39.497] Rewind ancient data                      number=2457534
WARN [01-19|13:49:39.497] Truncate ancient chain                   from=2668720 to=2457534

INFO [01-19|13:49:49.874] Block synchronisation started
DEBUG[01-19|13:49:49.874] Synchronising with the network           peer=167a43cb1b335504 eth=99 head=98b961…a8cf87 td=2668721 mode=full
INFO [01-19|13:49:49.874] Mining aborted due to sync
DEBUG[01-19|13:49:49.874] Retrieving remote chain height           peer=167a43cb1b335504
DEBUG[01-19|13:49:49.874] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=1 fromhash=98b961…a8cf87 skip=0 reverse=false
DEBUG[01-19|13:49:49.875] Remote head header identified            peer=167a43cb1b335504 number=2668720 hash=98b961…a8cf87
DEBUG[01-19|13:49:49.875] Looking for common ancestor              peer=167a43cb1b335504 local=2457534 remote=2668720
DEBUG[01-19|13:49:49.875] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=12 fromnum=2668543 skip=15 reverse=false
DEBUG[01-19|13:49:49.990] Found common ancestor                    peer=167a43cb1b335504 number=2668719 hash=29d96f…cc4d44
DEBUG[01-19|13:49:49.990] Downloading transaction receipts         origin=2668720
DEBUG[01-19|13:49:49.990] Directing header downloads               peer=167a43cb1b335504 origin=2668720
DEBUG[01-19|13:49:49.990] Downloading block bodies                 origin=2668720
DEBUG[01-19|13:49:49.990] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=128 fromnum=2668911 skip=191 reverse=false
DEBUG[01-19|13:49:49.991] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2668720 skip=0   reverse=false
DEBUG[01-19|13:49:49.991] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2668721 skip=0   reverse=false
DEBUG[01-19|13:49:49.991] Inserting downloaded chain               items=1       firstnum=2668720 firsthash=98b961…a8cf87 lastnum=2668720 lasthash=98b961…a8cf87
DEBUG[01-19|13:49:49.992] No more headers available                peer=167a43cb1b335504
DEBUG[01-19|13:49:49.992] Header download terminated               peer=167a43cb1b335504
DEBUG[01-19|13:49:49.992] Data fetching completed                  type=bodies
DEBUG[01-19|13:49:49.992] Block body download terminated           err=nil
DEBUG[01-19|13:49:49.992] Data fetching completed                  type=receipts
DEBUG[01-19|13:49:49.992] Transaction receipt download terminated  err=nil
DEBUG[01-19|13:49:49.997] Pruned ancestor, inserting as sidechain  number=2668720 hash=98b961…a8cf87

DEBUG[01-19|13:50:17.872] Downloaded item processing failed on sidechain import index=1 err="missing parent"
DEBUG[01-19|13:50:17.873] Reset ancient limit to zero
DEBUG[01-19|13:50:17.873] Synchronisation terminated               elapsed=27.998s
DEBUG[01-19|13:50:17.873] Reset ancient limit to zero

Another instance:

WARN [01-18|14:11:48.465] Head state missing, repairing chain      number=2658019 hash=a6a383…49463b
DEBUG[01-18|14:17:31.749] Privacy metadata root                    hash=000000…000000
INFO [01-18|14:17:31.751] Rewound blockchain to past state         number=1959935 hash=3fd03e…bfc4e2
DEBUG[01-18|14:17:31.762] Privacy metadata root                    hash=000000…000000

INFO [01-18|14:17:31.779] Loaded most recent local header          number=2658019 hash=a6a383…49463b td=2658020 age=5m49s
INFO [01-18|14:17:31.779] Loaded most recent local full block      number=1959935 hash=3fd03e…bfc4e2 td=1959936 age=1mo1w3d
INFO [01-18|14:17:31.779] Loaded most recent local fast block      number=2658019 hash=a6a383…49463b td=2658020 age=5m49s
DEBUG[01-18|14:17:48.485] Ancient blocks frozen already            number=1959935 hash=3fd03e…bfc4e2 frozen=2568019

WARN [01-18|14:19:40.982] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=diffs    items=2568019 limit=1959936
WARN [01-18|14:19:41.129] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=headers  items=2568019 limit=1959936
WARN [01-18|14:19:42.502] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=hashes   items=2568019 limit=1959936
WARN [01-18|14:19:42.670] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=bodies   items=2568019 limit=1959936
WARN [01-18|14:19:42.890] Truncating freezer table                 database=/qdata/ethereum/geth/chaindata/ancient table=receipts items=2568019 limit=1959936
INFO [01-18|14:19:43.215] Rewind ancient data                      number=1959935
WARN [01-18|14:19:43.215] Truncate ancient chain                   from=2658019 to=1959935

DEBUG[01-18|14:19:53.520] Synchronising with the network           peer=167a43cb1b335504 eth=99 head=0e17e2…2c9686 td=2658078 mode=full
INFO [01-18|14:19:53.520] Mining aborted due to sync
DEBUG[01-18|14:19:53.520] Retrieving remote chain height           peer=167a43cb1b335504
DEBUG[01-18|14:19:53.520] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=1 fromhash=0e17e2…2c9686 skip=0 reverse=false
DEBUG[01-18|14:19:53.520] Discarded propagated block, too far away peer=838f19a15fb6d8ef number=2658078 hash=77acc6…11dcf8 distance=698143
DEBUG[01-18|14:19:53.521] Remote head header identified            peer=167a43cb1b335504 number=2658077 hash=0e17e2…2c9686
DEBUG[01-18|14:19:53.521] Looking for common ancestor              peer=167a43cb1b335504 local=1959935 remote=2658077
DEBUG[01-18|14:19:53.521] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=12 fromnum=2657900 skip=15 reverse=false
DEBUG[01-18|14:19:53.550] Found common ancestor                    peer=167a43cb1b335504 number=2658012 hash=36c183…b224c5
DEBUG[01-18|14:19:53.550] Directing header downloads               peer=167a43cb1b335504 origin=2658013
DEBUG[01-18|14:19:53.550] Downloading block bodies                 origin=2658013
DEBUG[01-18|14:19:53.550] Downloading transaction receipts         origin=2658013
DEBUG[01-18|14:19:53.556] Inserting downloaded chain               items=64      firstnum=2658013 firsthash=a28680…c5e3de lastnum=2658076 lasthash=965844…d416f9
DEBUG[01-18|14:19:54.182] Pruned ancestor, inserting as sidechain  number=2658013 hash=a28680…c5e3de

DEBUG[01-18|14:19:54.216] Injected sidechain block                 number=2658075 hash=76283c…2e7a36 diff=1       elapsed=54.231µs  txs=0 gas=0 uncles=0 root=0bb28c…0becd1
DEBUG[01-18|14:19:54.217] Injected sidechain block                 number=2658076 hash=965844…d416f9 diff=1       elapsed=36.668µs  txs=0 gas=0 uncles=0 root=0bb28c…0becd1
DEBUG[01-18|14:19:56.555] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2658077 skip=0   reverse=false
DEBUG[01-18|14:19:59.560] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2658077 skip=0   reverse=false
DEBUG[01-18|14:20:02.563] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2658077 skip=0   reverse=false
DEBUG[01-18|14:20:05.567] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2658077 skip=0   reverse=false
DEBUG[01-18|14:20:08.568] Fetching batch of headers                id=167a43cb1b335504 conn=staticdial count=192 fromnum=2658077 skip=0   reverse=false

DEBUG[01-18|14:20:23.185] Downloaded item processing failed on sidechain import index=64 err="missing parent"
DEBUG[01-18|14:20:23.185] Reset ancient limit to zero
DEBUG[01-18|14:20:23.185] Synchronisation terminated               elapsed=29.665s
DEBUG[01-18|14:20:23.185] Reset ancient limit to zero
WARN [01-18|14:20:23.185] Synchronisation failed, retrying         err="no peers to keep download active"

Steps to reproduce the behaviour

Perform a non-graceful restart of a node with blocks in both Ancients/Freezer and Level DB on Quorum v20.10.0
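For illustration, a minimal sketch of the restart sequence that triggers this, assuming a single local geth process and the datadir seen in the logs above (other flags as in the node's normal command line, shown in a later comment):

# Simulate a non-graceful shutdown (same effect as an OOM kill from an orchestrator)
kill -9 <geth pid>

# Restart with the same flags; on startup geth logs "Head state missing, repairing chain"
# and rewinds the full block to the most recent block whose state is still in LevelDB
geth --datadir /qdata/ethereum --syncmode full --gcmode full --verbosity 4 <other flags>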

Backtrace

None

Contributor Author

vdamle commented Jan 22, 2021

In some instances where the node doesn't throw missing parent errors, we instead see:

ERROR[01-19|08:11:16.626] Block receipts missing, can't freeze     number=5123119 hash=226809…b04220

and the node is unable to push blocks into the ancients DB after this.

This issue appears to be fixed in Geth v1.9.10 and subsequently in Geth v1.9.14:

ethereum/go-ethereum#20287
ethereum/go-ethereum#21045

There's also a significant rewrite of the chain-repair functionality in Geth v1.9.20:

ethereum/go-ethereum#21409

@amalrajmani
Contributor

Hi @vdamle,
I tried to reproduce the issue on my end with both Istanbul and Raft consensus, and I am not getting any errors.

I did the following to reproduce the issue:

Raft:

  1. Brought up a 4-node Quorum network with Raft consensus and --immutabilitythreshold 200 so that old blocks start freezing once the block height goes beyond 200.
  2. Created many transactions (by deploying smart contracts) to let the block height reach ~250.
  3. Stopped node2 with the kill -9 <pid> command.
  4. Created many transactions (by deploying smart contracts) to let the block height reach ~850.
  5. Started node2.
  6. Checked the block height in node2. It was in sync with the network.
  7. Sent new transactions to node2. The transactions were accepted and blocks were getting mined.

Istanbul:

  1. Brought up a 7-node Quorum network with Istanbul consensus and --immutabilitythreshold 200 so that old blocks start freezing once the block height goes beyond 200.
  2. Waited for the block height to reach ~250.
  3. Stopped node2 with the kill -9 <pid> command.
  4. Waited for the block height to reach ~850.
  5. Started node2.
  6. Checked the block height in node2 (see the sketch after this list for one way to check this). It was in sync with the network.
  7. Sent new transactions to node2. The transactions were accepted and blocks were getting mined.
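For reference, one way to check the block height and sync status on a node via the geth console (IPC paths are illustrative):

# Current block number on node2
geth attach --exec 'eth.blockNumber' /path/to/node2/geth.ipc

# Sync status: returns false once the node has caught up with its peers
geth attach --exec 'eth.syncing' /path/to/node2/geth.ipc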

Can you give me the following details about your network so that I can analyse further:

  • Consensus engine
  • Genesis config
  • number of nodes in your network
  • Geth command used with command line arguments
  • Were you sending many transactions continuously to the nodes including the one you stopped?
  • How did you stop the node? What command?
  • When did you start the node again?
  • Did the node crash after the error?
  • Was the node in sync with the network?

amalrajmani self-assigned this Jan 23, 2021
Contributor Author

vdamle commented Jan 25, 2021

Hi @amalrajmani, thank you for taking a look at this. We have hit this error in multiple Kaleido environments running IBFT. I'm using one of the recent occurrences (from which the original logs were posted) as a reference for your questions:

  • Consensus Engine - IBFT
  • Genesis config:
"config":{
  "homesteadBlock":0,
  "eip150Block":0,
  "eip155Block":0,
  "eip158Block":0,
  "byzantiumBlock":0,
  "isQuorum":true,
  "istanbul":{"epoch":30000,"policy":0},
  "chainId":2104950325,
  "constantinopleBlock":3702732,
  "petersburgBlock":5122843,
  "istanbulBlock":5122843
}
  • Number of nodes: 4
  • Geth command used with command line arguments
--datadir <path> --nodiscover --nodekey <path>/nodekey --maxpeers 200 --txpool.pricelimit 0 --rpc --port 30303 --rpcport 8545 --rpcaddr 0.0.0.0 --ws --wsport 8546 --wsaddr 0.0.0.0 --unlock 0 --password <path>/passwords.txt --ipcpath=<path>/geth.ipc --permissioned --mine --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,istanbul --wsapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,istanbul --istanbul.blockperiod 5 --istanbul.requesttimeout 15000 --syncmode full --gcmode full --rpccorsdomain '*' --wsorigins '*' --rpcvhosts '*' --txpool.globalslots 4096 --txpool.accountslots 64 --txpool.globalqueue 1024 --txpool.accountqueue 256 --cache 64 --allow-insecure-unlock --miner.gastarget 804247552 --miner.gasprice 0 --immutabilitythreshold 0 --networkid 2104950325 --verbosity 4
  • Yes, transactions are being sent to the node (fairly low volume of txns, the tx queue probably had 8-10 transactions). We've also seen this occur on a node to which no transactions are sent during this time.

  • The node is running in Kubernetes with a specified resource limit on the geth container (memory/CPU). We've observed that the node hits the upper bound of the memory limit (around 2 GB) and Kubernetes terminates the pod as a result - k8s uses SIGKILL to terminate the monitored process (geth) - and then auto-restarts the pod. (A kubectl sketch for confirming this is included at the end of this comment.)

  • Most of the instances where we've seen this error have not resulted in a crash of the node, but it is stuck at a block that is far behind the chain HEAD. In one instance, we did see the following crash:

DEBUG[01-19|10:24:45.594] Dereferenced trie from memory database   nodes=3  size=980.00B   time=5.768µs   gcnodes=5942 gcsize=1.27MiB    gctime=10.122239ms livenodes=34 livesize=5.06KiB
DEBUG[01-19|10:24:45.594] Dereferenced trie from memory database   nodes=16 size=2.49KiB   time=14.018µs  gcnodes=5958 gcsize=1.28MiB    gctime=10.136154ms livenodes=18 livesize=2.57KiB
DEBUG[01-19|10:24:45.631] Privacy metadata root                    hash=000000…000000
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0xbe0de4]
goroutine 2118 [running]:
github.com/ethereum/go-ethereum/core/state.(*StateDB).Prepare(...)
	/work/build/_workspace/src/github.com/ethereum/go-ethereum/core/state/statedb.go:802
github.com/ethereum/go-ethereum/core.(*statePrefetcher).Prefetch(0xc0001f8fa0, 0xc00127a120, 0x0, 0xc00517c000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/work/build/_workspace/src/github.com/ethereum/go-ethereum/core/state_prefetcher.go:63 +0x1e4
github.com/ethereum/go-ethereum/core.(*BlockChain).insertChain.func1(0xc004ffe6c0, 0xc0010a6000, 0x1649858, 0xa655cc1b171fe856, 0x6ef8c092e64583ff, 0xc0ad6c991be0485b, 0x21b463e3b52f6201, 0xc00127a120, 0xc007bae1f0, 0xbff9cabb62b8319f, ...)
	/work/build/_workspace/src/github.com/ethereum/go-ethereum/core/blockchain.go:1753 +0x198
created by github.com/ethereum/go-ethereum/core.(*BlockChain).insertChain
	/work/build/_workspace/src/github.com/ethereum/go-ethereum/core/blockchain.go:1750 +0x33e2
DEBUG[01-19|10:25:02.037] Sanitizing Go's GC trigger               percent=100
INFO [01-19|10:25:02.038] Maximum peer count                       ETH=200 LES=0 total=200
  • No, the node fails to sync with the rest of the network
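Regarding the SIGKILL restarts mentioned above, a quick way to confirm that Kubernetes killed the container for exceeding its memory limit (pod name and namespace are illustrative):

# "Last State: Terminated / Reason: OOMKilled" indicates the container hit its memory limit
kubectl describe pod quorum-node-0 -n my-namespace | grep -A 5 'Last State'

# Or read the termination reason directly
kubectl get pod quorum-node-0 -n my-namespace \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'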

@amalrajmani
Contributor

@vdamle can you share the full geth log of the failed node? The log should have all the messages from the time it was restarted.

Contributor Author

vdamle commented Jan 25, 2021

Hi @amalrajmani, sorry for the confusion caused by my previous response - the missing parent error and the panic I reported are from two different chains (not the same block height, genesis config, etc.).

  1. For the missing parent error, here is the genesis config on the node. I have also attached the log from the node in the chain where the error was encountered:
"config":{
  "homesteadBlock":0,
  "eip150Block":0,
  "eip155Block":0,
  "eip158Block":0,
  "byzantiumBlock":0,
  "constantinopleBlock":0,
  "petersburgBlock":0,
  "istanbulBlock":0,
  "isQuorum":true,
  "maxCodeSizeConfig":[{"block":0,"size":128}],
  "istanbul":{"epoch":30000,"policy":0,"ceil2Nby3Block":0},
  "chainId":1730712451
}

geth-node1.log

Command line arguments are the same as earlier (same blockperiod/syncmode/gcmode, different chainId).

  2. For the panic, the node logs are attached below. The genesis config is what I had posted in my response earlier:
"config":{
  "homesteadBlock":0,
  "eip150Block":0,
  "eip155Block":0,
  "eip158Block":0,
  "byzantiumBlock":0,
  "isQuorum":true,
  "istanbul":{"epoch":30000,"policy":0},
  "chainId":2104950325,
  "constantinopleBlock":3702732,
  "petersburgBlock":5122843,
  "istanbulBlock":5122843
}

panic-node1.log.gz

Contributor

amalrajmani commented Jan 27, 2021

Hi @vdamle,
I am able to reproduce the issue on my end with Docker. I am analysing the cause of the issue now and will get back to you soon.

@amalrajmani
Contributor

Hi @vdamle,
This issue is due to a core bug in upstream geth. It has been fixed in geth 1.9.20.
It occurs if geth is restarted non-gracefully (kill -9 / start) while running with gcmode=full and with blocks in the freezer DB. When the node starts after the non-graceful stop, it ends up with a gap between LevelDB and the freezer DB, and it then fails with the "missing parent" error.

If you run geth with gcmode=archive, or do a graceful restart (docker stop/start - don't use docker kill), this error won't occur.
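For illustration, the difference between the two restart styles (container name is illustrative):

# Graceful: docker stop sends SIGTERM and lets geth shut down cleanly before restarting
docker stop quorum-node2 && docker start quorum-node2

# Non-graceful: docker kill sends SIGKILL, which is what leaves the gap between
# LevelDB and the freezer DB and leads to the "missing parent" error
docker kill quorum-node2 && docker start quorum-node2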

Contributor Author

vdamle commented Feb 1, 2021

Thanks for confirming, @amalrajmani. Correct, I'm aware of the fix in Geth 1.9.20 as noted in my earlier comment: #1117 (comment)

I've asked in Quorum Slack about plans for moving to a release of Geth >= 1.9.20 and did not receive any response. I see a PR opened for moving to 1.9.8.

As one would expect, most nodes do not run as archive nodes, which makes hitting this error fairly likely, especially in an environment where nodes run with enforced resource limits and an orchestrator such as k8s will restart nodes (non-gracefully) if those bounds are exceeded. After encountering the error, there is no way to recover the node other than to initiate a full resync, which takes almost a day on long-running chains (which is where this problem is most likely to occur).
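For context, a full resync in this situation roughly means wiping the local chain data and letting the node re-download the whole chain, e.g. (datadir is illustrative; geth removedb prompts for confirmation):

# Stop geth, then remove the local chain databases (keystore and nodekey are untouched)
geth removedb --datadir /qdata/ethereum

# Restart with the usual flags; the node re-downloads and re-executes every block
geth --datadir /qdata/ethereum --syncmode full --gcmode full <other flags>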

Do you have any estimate for when Quorum intends to incorporate a newer release of Geth that will address this issue?

@amalrajmani
Contributor

Hi @vdamle,
Quorum geth upgrade releases are managed by @nmvalera, and I will let him comment on that.

ricardolyn linked a pull request Mar 10, 2021 that will close this issue
@ricardolyn
Contributor

Re-opening, as we still need to validate on the latest GoQuorum master build with the upgrade to v1.9.20.

ricardolyn reopened this Mar 15, 2021
@ricardolyn
Contributor

Hi @vdamle. Can you try to reproduce this issue using the latest tag of the GoQuorum docker image? Thank you.

Contributor Author

vdamle commented Mar 17, 2021

@ricardolyn - Thanks for the PRs to move the Geth version forward! I will test this in the next day or so and let you know.

Contributor Author

vdamle commented Mar 18, 2021

@ricardolyn I've run into an unrelated issue using code from the latest master: https://go-quorum.slack.com/archives/C825QTQ1Z/p1616037777005000. I would really like to resolve that before testing this, so that I don't have to test again with private transactions enabled. I will keep you posted on my progress.

@ricardolyn
Contributor

@vdamle any update on this testing? Thank you.

Contributor Author

vdamle commented Apr 1, 2021

Hi @ricardolyn - Apologies for the delay. I attempted to reproduce the issue with the changes in master and have not been able to. It seems OK to resolve this issue and revisit with a new one if we hit anything of this nature again. Thank you for the updates!

@ricardolyn
Contributor

That's good news, @vdamle! Thank you.

We will be releasing this version soon, after we finalise some validation.
