Review SCP nomination protocol in order to keep it safe while opening the network #236

Closed
bpalaggi opened this issue Aug 23, 2019 · 10 comments
Labels: C. Stellar Consensus Protocol, type-feature

@bpalaggi commented Aug 23, 2019

Currently the nomination protocol is handled by SCP, but we have to assess the changes required (if any) that come with making the network open.

One obvious approach would be to build a transaction filter, so that we could ultimately guarantee liveness. See #407 for more information.

@Geod24 Geod24 added this to the 2. Validator milestone Sep 4, 2019
@Geod24 Geod24 changed the title from "Nomination protocol" to "Review SCP nomination protocol in order to keep it safe while opening the network" Sep 5, 2019
@Geod24 Geod24 added type-feature status-blocked labels Sep 5, 2019
@Geod24 (Contributor) commented Sep 5, 2019

Depends on #209

@AndrejMitrovic AndrejMitrovic removed the status-blocked label Jun 26, 2020
@Geod24 Geod24 added the C. Stellar Consensus Protocol label Jul 5, 2020
@ferencdg ferencdg self-assigned this Apr 5, 2021
@ferencdg (Contributor) commented Apr 6, 2021

Regarding #407:

If I understand correctly, we would like to guard against flood attacks, which have two parts: network flooding and block-size swelling. The proposed solution is to use a pseudo-random transaction filter to solve both problems. It sounds good, but I don't think we could come up with such a function easily.

Reasons why we should not use transaction filters:

  1. The filter function is part of our consensus parameters: if we ever change it, we still have to retain the older version so that a catching-up validator can use it for older blocks.
  2. The function is supposed to take the fee into account, but because the filter is probabilistic, a low-fee transaction could be prioritized over a high-fee one, which is neither fair nor predictable.
  3. Because the filter function is known, an attacker could generate low-fee transactions that pass the filter. To counter this, we would need to chain transaction hashes together when validating a block. This and many other things just increase the complexity of the filtering solution.

An alternative suggestion would be this: let's separate the two problems of flooding the network and block-size swelling.

  1. For flooding the network: we could use a simple rate-limiting algorithm where transactions with lower fees have a lower rate of propagation (the same idea that worked for Bitcoin); see the sketch after this list.

  2. For block-size swelling:

    1. Voting on the block size:
      1. We limit the block size for a particular height and let nodes automatically vote on whether the block size should be increased or stay the same, based on the contention of the previous block. If, say, only 70% of the total transaction fee in the pool went into the current block, then a validator automatically votes YES on the 'increase' proposal. The result of the vote becomes part of the blockchain.

      2. There is a risk in the above algorithm: the SCP voting phase can get stuck if 50% of the nodes vote YES and 50% vote NO on the 'increase' proposal, and we might end up with zero confirmed candidates. As a solution, the more time passes since the start of the (possibly stuck) nomination, the more likely nodes become to vote YES, so they eventually converge. From the SCP code it seems that during the nomination process, a node that previously answered NO can change its answer to YES (but changing from YES to NO is forbidden).

    2. Nodes will choose the highest-fee transactions (adjusted for their size) when proposing.

This solution would be fairer, more robust, and easier to implement.
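A minimal sketch of the rate-limiting idea in point 1, assuming a per-gossip-round relay decision; the function names and the reference fee rate are illustrative, not Agora's actual API:

```python
import random

# Hypothetical fee-proportional relay throttle: a tx's chance of being
# forwarded on each gossip round scales with its fee rate, so low-fee
# floods propagate slowly while well-paying txs spread at full speed.

FULL_SPEED_FEE_RATE = 100.0  # assumed fee-per-byte that earns certain relay

def relay_probability(fee: int, size: int) -> float:
    """Probability of forwarding a tx on one gossip round."""
    fee_rate = fee / max(size, 1)
    return min(1.0, fee_rate / FULL_SPEED_FEE_RATE)

def maybe_relay(fee: int, size: int) -> bool:
    """Decide whether to forward this tx to peers this round."""
    return random.random() < relay_probability(fee, size)
```

This mirrors Bitcoin's old rate-limited relay of low-fee transactions: the tx is not rejected outright, it just spreads more slowly.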

Any thoughts, especially @Geod24, @AndrejMitrovic, @omerfirmak?

@omerfirmak (Member) commented Apr 6, 2021

The filter function is part of our consensus parameters: if we ever change it, we still have to retain the older version so that a catching-up validator can use it for older blocks.

I don't get this. The filter should be local to the validator. Even if we put a filter in place for foundation nodes, people can run their own modified agora which implements a completely different filter.

The function is supposed to take the fee into account, but because the filter is probabilistic, a low-fee transaction could be prioritized over a high-fee one, which is neither fair nor predictable.

Fairness aside, predictability is not essential IMO. Some nodes can nominate completely randomly selected TXs. IIRC what matters for consensus is how nodes combine these nominations; a comment somewhere in SCP mentions that this process should be deterministic.
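A toy illustration of what such a deterministic combination could look like (not stellar-core's actual combineCandidates; the tx representation and names are made up):

```python
# Toy deterministic candidate combination: union all nominated tx sets and
# resolve double spends by a fixed (fee, hash) rule, so every node that
# sees the same candidate sets derives the same combined set.

def combine_candidates(candidate_sets):
    best_by_input = {}  # spent input -> winning tx for that input
    for tx_set in candidate_sets:
        for tx in tx_set:
            for spent in tx["inputs"]:
                rival = best_by_input.get(spent)
                # Higher fee wins; ties broken by hash so the rule is total.
                if rival is None or (tx["fee"], tx["hash"]) > (rival["fee"], rival["hash"]):
                    best_by_input[spent] = tx
    # A real implementation must also evict txs that lost on any one of
    # their inputs; this sketch skips that for brevity.
    unique = {tx["hash"]: tx for tx in best_by_input.values()}
    return sorted(unique.values(), key=lambda t: t["hash"])
```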

Because the filter function is known, an attacker could generate low fee transactions that pass the filter

I am not sure if that is a good filter then :) Is there a specific scenario you have in mind?

for block size swelling

I think 2.1 and 2.2 are easy to reason about. But 2.3 seems like it needs some thought put into it. Do we really want to vote against perfectly good candidates? combineCandidates is probably what should play the biggest role in how we optimize the nominations nodes externalize.

@ferencdg (Contributor) commented Apr 8, 2021

The filter function is part of our consensus parameters: if we ever change it, we still have to retain the older version so that a catching-up validator can use it for older blocks.

I don't get this. The filter should be local to the validator. Even if we put a filter in place for foundation nodes, people can run their own modified agora which implements a completely different filter.

If you read #407 closely, at some point it reads like this:
"Blocks that have filtered-out transactions have their validators slashed;"
This makes me think that the original intention was to make the filter part of the consensus parameters. I agree that we shouldn't do that.

Because the filter function is known, an attacker could generate low fee transactions that pass the filter

I am not sure if that is a good filter then :) Is there a specific scenario you have in mind?

My assumption was that the filter is part of the consensus parameters, which would make the attacker's job easy. Let's say the filter requires that the last four digits of a transaction's hash be '8888' for every transaction with a fee of less than 100 BOA; in that case it is easy for an attacker to make small changes to the transaction until its hash ends in '8888' (the solution to this problem would be to use the hash output from tx1 as an input to tx2's filter check, since an attacker cannot guess the order of the txs in the externalized block). If we have per-node filters then this is not a problem. However, a 'filter' for the whole network can just be the rate-limiting algorithm that I mentioned.
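To make the grinding concrete, here is a sketch under the assumption of a publicly known hash-suffix filter; the suffix, field layout, and function names are hypothetical:

```python
import hashlib

def tx_hash(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# The attack: append a throwaway nonce and grind until the hash carries the
# magic suffix, which takes only ~16^4 attempts for a 4-hex-digit suffix.
def grind_to_pass(base_payload: bytes, suffix: str = "8888") -> bytes:
    nonce = 0
    while True:
        candidate = base_payload + nonce.to_bytes(8, "big")
        if tx_hash(candidate).endswith(suffix):
            return candidate
        nonce += 1

# The chaining counter-measure: the filter input for tx N includes tx N-1's
# hash in block order, which the attacker cannot predict in advance, so
# grinding each tx in isolation no longer helps.
def chained_filter_passes(prev_hash: str, payload: bytes, suffix: str = "8888") -> bool:
    return tx_hash(prev_hash.encode() + payload).endswith(suffix)
```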

for block size swelling

I think 2.1 and 2.2 are easy to reason about. But 2.3 seems like it needs some thought put into it. Do we really want to vote against perfectly good candidates? combineCandidates is probably what should play the biggest role in how we optimize the nominations nodes externalize.

  1. Is combineCandidates used at all during nomination? From the code, combineCandidates seems to only have an effect if we already have an mCurrentBallot. I thought the candidate tx set can only change during nomination. Or can the ballot and nomination protocols overlap? We could certainly use combineCandidates to learn about transactions we don't know during nomination. I just wonder if that is related to #407.

  2. Regarding voting against perfectly good candidates: if Node A suggests a transaction set with a very small total fee (and possibly very few transactions) compared to what Node B knows, it might be an attacker doing DoS, hoping his proposal gets through and denying as many transactions as possible from being finalized. I would not say his proposal is perfectly good. :) A possible heuristic is sketched below.
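A hypothetical version of that heuristic, reusing the 70% figure from the earlier comment; the names and threshold are illustrative only:

```python
# Vote NO on a proposed tx set when it captures too small a share of the
# fees this node already knows about, since that pattern matches the DoS
# described above. Honest proposals built from a similar pool pass easily.

def acceptable_proposal(proposed_txs, local_pool, threshold: float = 0.7) -> bool:
    proposed_fees = sum(tx["fee"] for tx in proposed_txs)
    known_fees = sum(tx["fee"] for tx in local_pool)
    if known_fees == 0:
        return True  # nothing to compare against; don't block consensus
    return proposed_fees / known_fees >= threshold
```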

@ferencdg (Contributor) commented Apr 8, 2021

Is anyone against #236 (comment), or does anyone have a better idea?

@Geod24 (Contributor) commented Apr 21, 2021

The proposed solution is to use a pseudo random transaction filter to solve both problems.

Actually, that was not so much to limit the size of the block, but to ensure that nodes would have a greater chance of selecting the same transactions when faced with a double spend. It would also provide a conflict-resolution mechanism when such a situation arises nonetheless.

The filter function is part of our consensus parameters: if we ever change it, we still have to retain the older version so that a catching-up validator can use it for older blocks.

This applies to all consensus parameters, though, e.g. minimum fees. That's why most consensus changes are enabled at a certain height. Look up btcd to see how they handle it.

The function is supposed to take the fee into account, but because the filter is probabilistic, a low-fee transaction could be prioritized over a high-fee one, which is neither fair nor predictable.

Well, that is completely dependent on the implementation of such a filter, isn't it? I don't think it's impossible to design it in a way that gives more priority, if not absolute priority, to a tx with a higher fee.

Because the filter function is known, an attacker could generate low-fee transactions that pass the filter. To counter this, we would need to chain transaction hashes together when validating a block. This and many other things just increase the complexity of the filtering solution.

Yeah the filter approach is undoubtedly not trivial, but I still think it's worth exploring. We have a few tools at our disposal to make this harder for attackers. For example, we have a source of randomness that cannot be tampered with...

For flooding the network: we could use a simple rate-limiting algorithm where transactions with lower fees have a lower rate of propagation (the same idea that worked for Bitcoin).

I think this should be there regardless of whether there is a protocol-level solution.

voting on the block size

Anything that relies on voting is, by definition, more fragile than something that is part of the protocol. I'm really not convinced this is a good idea, so maybe let's whiteboard this.

@omerfirmak:

I don't get this. The filter should be local to the validator.

We could have a protocol-level filter that, based on various parameters on the chain, restricts what can get into a block at a certain height. It would be deterministic and would help nodes converge towards the same tx set in the end. That's something I mentioned a while back, but the idea would need to be fleshed out to make it secure and free of DoS vectors. A sketch of the idea follows.
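One way such a deterministic, protocol-level filter could look, assuming the tamper-proof randomness source mentioned above is available as a per-height seed (all names and constants are hypothetical):

```python
import hashlib

def eligible_at_height(tx_hash: bytes, fee: int, height: int, seed: bytes) -> bool:
    """Same inputs on every node -> same verdict, so honest nodes converge."""
    digest = hashlib.sha256(seed + height.to_bytes(8, "big") + tx_hash).digest()
    score = int.from_bytes(digest[:4], "big")  # uniform over [0, 2^32)
    window = min(2**32 - 1, fee * 2**20)       # higher fee, wider admission window
    return score < window
```

Because the seed is not known in advance, a low-fee tx cannot be ground into eligibility ahead of time, while a sufficiently high fee always clears the window.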

@ferencdg (Contributor) commented Apr 22, 2021

Actually, that was not so much to limit the size of the block, but to ensure that nodes would have a greater chance of selecting the same transactions when faced with a double spend. It would also provide a conflict-resolution mechanism when such a situation arises nonetheless.

I think we now have a separate mechanism to increase the chance of choosing the same double spend (sketched after the list):

  1. a node only forwards a replacement tx with a 20% fee increase
  2. the pool orders double spends by increasing tx fee
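A sketch of those two rules, assuming a UTXO-style pool keyed by spent inputs (the class and field names are made up):

```python
FEE_BUMP = 1.2  # a replacement must pay at least 20% more than the best rival

class TxPool:
    """Toy pool that keeps conflicting (double-spend) txs ordered by fee."""

    def __init__(self):
        self.by_input = {}  # spent input -> conflicting txs, fee-ascending

    def try_add(self, tx) -> bool:
        # Rule 1: only accept (and thus forward) a conflicting tx if it
        # bumps the fee by at least 20% over the best rival on every input.
        for spent in tx["inputs"]:
            rivals = self.by_input.get(spent, [])
            if rivals and tx["fee"] < rivals[-1]["fee"] * FEE_BUMP:
                return False
        # Rule 2: keep double spends sorted by fee so all nodes agree on
        # which version currently wins (the last element of each list).
        for spent in tx["inputs"]:
            self.by_input.setdefault(spent, []).append(tx)
            self.by_input[spent].sort(key=lambda t: t["fee"])
        return True
```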

Well, that is completely dependent on the implementation of such a filter, isn't it? I don't think it's impossible to design it in a way that gives more priority, if not absolute priority, to a tx with a higher fee.

Giving more priority is not hard to define, but isn't 'giving absolute priority' the same as 'sorting by fee' (which I think we should do)?

Anything that relies on voting is, by definition, more fragile than something that is part of the protocol. I'm really not convinced this is a good idea, so maybe let's whiteboard this.

How about this protocol-level solution for the block-size problem? We start with a target size of 10MB and a maximum size of 20MB. If we don't have a protocol-level minimum fee, then the block would probably always be full, so we introduce a minimum fee. After the current block is externalized, we look at its size, and if it is above the target size, we increase our target and maximum block sizes for the next block. This requires no voting; see the sketch below.
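A sketch of that adjustment rule; the growth step is an assumption, since the comment doesn't specify one:

```python
TARGET_SIZE = 10 * 1024 * 1024  # 10MB initial target
MAX_SIZE = 20 * 1024 * 1024     # 20MB initial hard cap
GROWTH = 1.1                    # assumed 10% step per adjustment

def adjust_limits(last_block_size: int, target: int, maximum: int):
    """Deterministic, vote-free update applied by every node after
    externalizing a block: grow both limits if the target was exceeded."""
    if last_block_size > target:
        return int(target * GROWTH), int(maximum * GROWTH)
    return target, maximum
```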

@Geod24 (Contributor) commented Apr 22, 2021

a node only forwards a replacement tx with a 20% fee increase

But if two nodes have conflicting txs with the exact same fee, which one wins?

@ferencdg (Contributor) commented Apr 22, 2021

a node only forwards a replacement tx with a 20% fee increase

But if two nodes have conflicting txs with the exact same fee, which one wins?

If your question is about which one will be externalized, then I think either of them could be. None of the nodes will vote against another node's candidate just because it has the same or even a lower fee.

@ferencdg (Contributor) commented May 3, 2021

Done by #1917 and #1918.

@ferencdg ferencdg closed this as completed May 3, 2021