could not verify slot claim VRF proof #2682

Closed
kishansagathiya opened this issue Jul 19, 2022 · 9 comments · Fixed by #2709
kishansagathiya (Contributor) commented Jul 19, 2022

Describe the bug

  • Run two substrate nodes (using the substrate node template) and one gossamer node
  • Wait for a few blocks, somewhere between 40 and 100. You sometimes see:

```
2022-07-19T16:36:37+05:30 EROR block data processing for block with hash 0xe5956255bd91183531280c4b69209787697ff3b09b6ec1098887b1326dda2c2d failed: failed to verify pre-runtime digest: could not verify slot claim VRF proof	chain_processor.go:L86	pkg=sync
```

Log output

https://gist.github.com/kishansagathiya/b951113719a5bc62748e7e622b5342e5

EclesioMeloJunior (Member) commented Jul 19, 2022

Could you run the same setup with BABE logs at trace level and check whether some peer produces a block in the wrong slot?

kishansagathiya (Contributor, Author) commented Jul 20, 2022

> Could you run the same setup with babe logs as trace and check if some peer produces a block in the wrong slot?

Look into the logs I have shared; all gossamer logs are already at trace level. Also, the logs don't suggest anything about a wrong slot. I will continue investigating when/if I assign this to myself.

danforbes added the BABE label Jul 26, 2022
kishansagathiya (Contributor, Author) commented:

Some interesting logs. In one gossamer node:

```
2022-07-27T15:00:46+05:30 INFO built block 31 with hash 0x9c0141a0595d6c3aa939323baa91d8bcd82dbf0800d4b4a3df89cce2a3846560, state root 0x8dabc5b92355e0411263b1d0d60c2fbe49e0b256f7b42bce8ad0115e23b254b2, epoch 0 and slot 414728561	babe.go:L541	pkg=babe
2022-07-27T15:00:46+05:30 EROR block data processing for block with hash 0x9c0141a0595d6c3aa939323baa91d8bcd82dbf0800d4b4a3df89cce2a3846560 failed: block producer equivocated	chain_processor.go:L86	pkg=sync
```

Some time later, in another gossamer node:

```
2022-07-27T15:01:34+05:30 INFO built block 36 with hash 0x066be3f029db5c5362e09a0fc1c3ea432fa9c199ffb62b69a88ed729a7b604b6, state root 0x6a8bcb7a0ce8a6714c8de87d007c32342095b25f12ce4db347f5f763836e6c4f, epoch 1 and slot 414728573	babe.go:L541	pkg=babe
2022-07-27T15:01:34+05:30 EROR block data processing for block with hash 0x066be3f029db5c5362e09a0fc1c3ea432fa9c199ffb62b69a88ed729a7b604b6 failed: failed to verify pre-runtime digest: could not verify slot claim VRF proof	chain_processor.go:L86	pkg=sync
```

kishansagathiya self-assigned this Jul 27, 2022
kishansagathiya (Contributor, Author) commented Jul 27, 2022

So, what is happening here is that two blocks with the same block number are being created by the network.

Say the latest block is the nth block, and you have only one nth block. handleSlot would consider this the best block and create a new block with the nth block as parent; we would be authoring the (n+1)th block.

Now, say that by the time the next slot comes up, you have received one more block which is also an nth block. Using the fork-choice rule we pick the best block, and that best block happens to be the new nth block we received. Now we would be authoring another (n+1)th block, but with a different parent.

Let's say both (n+1)th blocks were authored as a primary author. Since we authored two blocks with the same number and the same method, this would throw `block producer equivocated`, and we would eventually also see `could not verify slot claim VRF proof`.

However, I still have to check what the protocol expects here:

  • Should the authority ensure that they are authoring only one block? That would mean we consider such blocks with the same number equivocatory and throw an error, as we do now. Or
  • Should we not consider such blocks equivocatory, and let GRANDPA resolve forks?

EclesioMeloJunior (Member) commented Jul 27, 2022

But if we had already created a block (N+1) on top of the Nth block and then received another Nth block, we should pick (N+1) as the best block.

Given we only have the latest block N, we should create block N+1:

```
N -> N + 1
```

In the case where we receive one more block which is also an Nth block, we should not pick it as the best block, but keep (N+1):

```
   / -> N
...  -> N -> N + 1
```

Could the fork-choice rule be working incorrectly?

kishansagathiya (Contributor, Author) commented:

> But if we already had created a block (n+1)th in front of nth block and then received another nth block we should pick (n+1)th as the best block number

Oh, yeah.

> May the fork choice rule not work correctly?

Might be the case.

kishansagathiya (Contributor, Author) commented:

I just went through the way we select the best block header and did not really see any problem with that code.

But I can think of a scenario where this case happens even if BestBlockHeader works properly.

Say you are Alice, you have n blocks, and the nth block was authored by Bob as a secondary author. Currently this nth block is the best block. We handle the next slot and author the (n+1)th block as a secondary author.

During this time, you find that some other peer has authored an nth block as a primary author. Now the best block would be the new nth block, because that chain would have more primary blocks. And if we now run handleSlot, we would be authoring another (n+1)th block.

Does that make sense?

kishansagathiya (Contributor, Author) commented Jul 28, 2022

I have asked for spec clarification, but we could add one more condition, `currentHeader.ParentHash.Equal(header.ParentHash)`, to the check for equivocatory blocks and avoid this error.

EclesioMeloJunior (Member) commented Aug 2, 2022

> Say, you are alice and you have n blocks and nth block is authored by bob as a secondary author. Currently this nth block is the best block. We handle the next slot and author (n+1)th block as a secondary author.

```
0 -> 1 -> 2 ... -> N -> N + 1
```

> During this time, you find that some other peer has authored nth block as a primary author. Now best block would the new nth block, because that chain would have more primary blocks.

```
              / -> N
0 -> 1 -> 2 ... -> N -> N + 1
```

> And now if we handle slot, we would be authoring another (n+1)th block.

```
              / -> N -> N + 1 (CHAIN A)
0 -> 1 -> 2 ... -> N -> N + 1 (CHAIN B)
```

The process you described is not wrong: the blocks Alice created are on different chains, so it is possible for a BABE author to create blocks with the same number, since those blocks are on different chains. The main point is that the slot for every block should increase. Since block N+1 on chain B was created before block N+1 on chain A, the slot number for block N+1 on chain A should be greater than the slot number for block N+1 on chain B.

kishansagathiya added a commit that referenced this issue Nov 2, 2022
…more than once (#2709)

The earlier block equivocation check logic was wrong. Equivocation for block producers is defined as claiming the same slot more than once, but we were checking for different conditions. This commit rewrites the block equivocation logic to throw ErrProducerEquivocated only when we see an author produce multiple blocks in the same slot.

Fixes #2682