Allow grouping of validators #2601
Comments
I agree that automating the failover is a very useful feature. How do you propose distinguishing […]
Until any validator is actually responding, you can't currently distinguish 1 and 2 either. If […]
So if you have heard from […]
That's a problem in general: if these were independent, the same situation could arise.
I agree that this is a problem in general. However, for a single UNL with […] Granted, […]
Consensus is relatively fork-safe anyways; the bigger threat is not making forward progress. Also, individual servers don't have any way of even querying the global state necessary to calculate these thresholds (#1751 recently celebrated its 2nd birthday with no reaction whatsoever). Even if there was some information, it would be relatively easy to feed someone false information designed to interrupt them. Consensus doesn't really take the global state into account; I don't see how this proposal would change that or require stronger guarantees. If anything, it would help with network stability.
@ChronusZ is this worth pursuing? If not, let's close this issue.
I think Brad is concerned about a situation where […]
Even if all nodes share the same UNL and agree on the grouping of validators in that UNL, there are semi-practical attacks that a Byzantine validator group could execute, taking advantage of this mechanic to send contradictory validations without accountability. The consensus algorithm remains safe without the assumption of Byzantine accountability as long as the number of Byzantine validators does not go above 20%, but the current algorithm gives us a nice soft safeguard: even with >20% Byzantine validators, forking the ledger requires extremely careful control over the p2p network to avoid being immediately identified as faulty.
From the perspective of […] With nUNLs I would actually expect […]
My understanding of your suggestion was that all A validators are treated as having validated whatever ledger was validated by the lowest-index A-validator from whom we received a validation. Is the logic you're actually suggesting as follows: after the timeout, let […] If so, this scheme still has a similar exploit, although it's slightly harder to enact.
That's just "normal" Byzantine behavior though? From the perspective of […] The idea in general is that I would like to move from "an UNL contains a list (actually a set) of validators" to "an UNL contains a list (actually a set) of node-operating entities with their actual validators as sub-lists/sub-sets", with some easy-to-understand rules for resolving eventual conflicts within the nodes of a single operator (e.g. "take the first one in the list", "take the first one that actually arrives at my node", "take the majority within that operator", or even "take a random one").
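To make the proposed conflict-resolution rules concrete, here is a minimal sketch in Python. Everything here (`resolve_group`, the rule names, the data shapes) is invented for illustration and is not rippled's actual code or API; it only shows how "first one in the list" and "majority within that operator" could pick a single counted validation per operator.

```python
from collections import Counter

def resolve_group(group, received, rule="first-in-list"):
    """Resolve one operator's validator group to a single counted validation.

    group:    ordered validator keys for one operator, e.g. ["A1", "A2", "A3"]
    received: dict of validator key -> hash of the ledger it validated
    rule:     one of the conflict-resolution rules proposed above
    """
    votes = {v: received[v] for v in group if v in received}
    if not votes:
        return None  # nothing heard from this operator yet
    if rule == "first-in-list":
        # "take the first one in the list" that we actually heard from
        for v in group:
            if v in votes:
                return votes[v]
    elif rule == "majority":
        # "take the majority within that operator" -- here read as a strict
        # majority of the *configured* group, not just of the responders
        ledger, count = Counter(votes.values()).most_common(1)[0]
        return ledger if count * 2 > len(group) else None
    else:
        raise ValueError(f"unknown rule: {rule}")
```

With `rule="first-in-list"`, a validation from A1 overrides anything A2 or A3 said; with `rule="majority"`, a lone responder in a 3-validator group resolves to nothing, which foreshadows the "proper majority" refinement discussed later in the thread.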
It's way worse with validator grouping. Without grouping, […]
I still fail to see how this would present any issue or be a safety threat; maybe the example is too simple, or the explanation of what happens is unclear? With the validators from the original issue: Alice operates 3 validators (A1, A2 and A3) […] Now 2 validators or nodes, D1 and D2, have 2 different UNLs: D1 is grouped (3 operators: […]
Ok, then in that case the UNL overlap between […] Let's consider an example where there is a single UNL […] Thus with your proposal, even if all nodes agree on the UNL and its grouping structure, the network can fork in the event of (1) an extremely rare accident even with all nodes behaving honestly, (2) an adversary with strong control over the p2p network even with all validators on the UNL behaving honestly, or (3) an adversary controlling 60% of the UNL (namely the A, B, C validators) with no significant control over the p2p network. In (2) and (3) the attack can be executed while maintaining plausible deniability for the attacker. Note that in the current algorithm, even an adversary controlling 100% of the validators cannot fork the network while maintaining plausible deniability.
Thanks, that example is much clearer to me. Of course this can be pushed further towards case 1 with various methods, but that just makes it harder or take longer to exploit, not impossible. 🤔
One option might be to require all/most configured validators to actually cast a vote for something and then just drop some when calculating the outcome. One could also require validators from an operator to have a (simple? super?) majority among themselves, with the current option of a single validator being the trivial case. Still sounds a bit too hand-wavy for my liking, but I still think the problem is a relevant one, unless there is a global agreement between validator operators to always operate at least a certain number of validators and add the same number per operator to recommended UNLs.
True, I guess if you count a validator group as unresponsive until you receive validations from a proper majority (i.e., strictly greater than 50%), then you at least avoid the issue of deniable Byzantine behavior when everyone agrees on the same UNL with the same grouping structure. Actually, this new functionality can be achieved without making any direct changes to the consensus mechanism. Say there is an entity […] I guess we would still need to modify the p2p code to give nodes a way to combine the threshold signatures to produce a single validation for the group. But not all p2p nodes would need to have that amendment enabled; the validation shares would be passed around like ordinary validations until at least […]
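The "count a group as unresponsive until a proper majority agrees" rule can be sketched as plain quorum counting. This is an illustrative toy, not the consensus implementation: `grouped_quorum`, the data shapes, and the 80% default are assumptions made for the example (80% echoes the quorum figure commonly associated with XRP Ledger validation, but nothing here is rippled code).

```python
from collections import Counter

def grouped_quorum(grouped_unl, received, quorum=0.8):
    """Treat each operator group as one unit that only counts as having
    validated a ledger once a strict majority (>50%) of its *configured*
    members validated that same ledger.

    grouped_unl: list of groups, each an ordered list of validator keys
    received:    dict of validator key -> validated ledger hash
    Returns the ledger hash reaching `quorum` of the group units, or None.
    """
    unit_votes = Counter()
    for group in grouped_unl:
        member_votes = Counter(received[v] for v in group if v in received)
        if not member_votes:
            continue  # whole group treated as unresponsive
        ledger, count = member_votes.most_common(1)[0]
        if count * 2 > len(group):  # strict majority of configured members
            unit_votes[ledger] += 1
    for ledger, units in unit_votes.items():
        if units >= quorum * len(grouped_unl):
            return ledger
    return None
```

Under this rule a single contradictory validator inside a group can no longer flip the group's vote for different observers, which is exactly the deniable-Byzantine scenario the strict-majority requirement is meant to close off.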
I'd like to propose the following feature:
Instead of weighing each single validator on a UNL by the same weight, I'd like to be able to give a whole list of validators one shared weight.
Example:
Alice operates 3 validators (A1, A2 and A3)
Bob runs 2 of them (B1, B2)
Charlie runs only one (C)
Currently I can only add one of the A's, one of the B's, and the C validator to an UNL, e.g.
[A1, B2, C]
I'd like to be able to have an UNL like this:
[[A1, A2, A3], [B1, B2], C]
From each sub-list the first validator would be considered (in this example, even if A2 and A3 disagree with A1, as long as a validation from A1 reaches my node, it would count). A different approach might be to weigh all sub-validators the same, but in relation to the global UNL (all the `A` validators are weighed 1/9, the `B`s 1/6 and the `C` one 1/3). That would probably be closer to what might be expected, but might lead to more churn/work.

Anyways: grouping validators together for failover, or just because some entity might choose to run more than one, is a useful feature to have and would also be helpful for decentralization efforts.
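The second weighting variant is just arithmetic, and a short sketch makes the 1/9, 1/6, 1/3 figures explicit. `validator_weights` is a made-up name for illustration, not an actual rippled function; it assumes the ungrouped validator C is written as a one-element group.

```python
from fractions import Fraction

def validator_weights(grouped_unl):
    """Give every operator group an equal share of the total weight, then
    split that share evenly among the group's validators (the 'weigh all
    sub-validators in relation to the global UNL' variant)."""
    group_share = Fraction(1, len(grouped_unl))
    return {v: group_share / len(group)
            for group in grouped_unl
            for v in group}

# [[A1, A2, A3], [B1, B2], [C]] -> A's get 1/9, B's 1/6, C 1/3
weights = validator_weights([["A1", "A2", "A3"], ["B1", "B2"], ["C"]])
```

Using exact fractions keeps each group's total at exactly 1/3, so the whole UNL still sums to 1 regardless of how many validators each operator runs.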