
SIP-001 Burn Mining and Leader Election #888

Open · wants to merge 18 commits into base: develop

@kantai (Member) commented Nov 14, 2018

First draft for SIP-001 -- high-level discussion of burn-mining design as it applies to (1) leader election and (2) chain selection.

_cannot_ be the exact preceding chain tip, because a key advantage of
using leader election is that a block may be _streamed_ -- namely, a
leader is continuously confirming new transactions as long as their
tenure lasts. So at a minimum, the commitment will be 1-block old.

@jcnelson (Member) commented Nov 14, 2018

I was thinking the leader could do both. They commit to a chain tip when they announce their burn in the sliding window, and they commit to a descendant of that chain tip when they start their tenure. The former commitment stops the leader from committing to any earlier microblock no matter how much they burn. If this chain tip is stale, then the leader who burned for it must pay a fork penalty for it to be accepted. The latter commitment is used to decide (1) where in the microblock stream the new leader will begin its tenure on its fork, and (2) how much of a fork penalty to assign to the entire chain they built off of.

In other words, there are two kinds of fork penalties. First, there's a penalty a leader pays on their burn, determined by how far back in the block stream they're committing (the penalty can be assessed by decreasing the likelihood that they will be selected as a leader). For example, if leader L commits at block N to a microblock from block N - 1, then the weight of L's burn decreases by O(1/2). If it's N - 2, then the burn weight decreases by O(1/4), and so on. Second, there's a penalty that the coalition of leaders on the same fork must eventually pay off in order for their fork to become the main fork. These penalties are added whenever a leader commits to a stale microblock when their tenure begins. They can be assessed by increasing the amount the chain must burn, beyond what it takes to reach 50% of the burns, in order to be considered the main chain.
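The geometric weighting of the first penalty can be sketched as follows. This is only an illustration of the idea; the function name and the fixed base of 2 are assumptions on my part (the comment gives O(1/2), O(1/4) as examples, not a specification):

```python
def effective_burn(burn: float, commit_height: int, tip_height: int) -> float:
    """Weight a leader's burn by the staleness of its commitment.

    A commitment to the current tip (depth 0) keeps full weight; each
    additional block of staleness halves it, so a commitment k blocks
    back is worth burn / 2**k in the sortition.
    """
    depth = tip_height - commit_height
    if depth < 0:
        raise ValueError("cannot commit ahead of the chain tip")
    return burn / (2 ** depth)
```

Under this rule, an 8 BTC burn committing three blocks back carries the same sortition weight as a 1 BTC burn committing to the tip, so out-burning the penalty gets exponentially expensive.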

@kantai (Member) commented Nov 14, 2018

True -- but even in this case, there is a period between when the leader is known to be elected, and when they actually begin their block. The shorter that window, the less opportunity for malicious behavior.

@kantai (Member) commented Nov 14, 2018

Thoughts on calling this a "lame-duck period"?

@jcnelson (Member) commented Nov 14, 2018

I had a comment below on this. Can this be resolved if we make it so that there's a delay before a new fork takes effect?

double-spend attacks. It is important to note that this trade-off
exists regardless of fork selection algorithm--- any fork selection
strategy will allow for a leader to know _a priori_ whether a
malicious fork is likely to be successful.

@jcnelson (Member) commented Nov 14, 2018

Hmm. What if we made it so that the act of paying off the fork penalties didn't take effect until an amount of time equal to the gap between block N' and the start of the tenure had passed? For example, if the burn window is 144 blocks behind the chain tip, can we make it so the result of paying all outstanding fork penalties doesn't take effect until 288 blocks later?

@kantai (Member) commented Nov 15, 2018

We need to think about the remediation techniques here a little more carefully -- I'm still unsure about how a delayed-forking scheme would work.

# Fork Selection

Fork selection in the Stacks blockchain requires a metric to determine
which chain, between two candidates, is the "heaviest" or "longested"

@jcnelson (Member) commented Nov 14, 2018

What does "longested" mean? ;)

@moxiegirl (Contributor) commented Nov 16, 2018

@kantai Have you thought about maybe inviting one or two people from the community to review this?

@kantai (Member) commented Nov 16, 2018

Community member feedback on this would be great, which is why we're trying to get this discussion into public PRs.

@moxiegirl (Contributor) commented Nov 16, 2018

@kantai Thought so...just that GitHub notifications are pretty noisy...so maybe we should have a list of "key PRs" underway...or we have that already? ...let me go look.

kantai added some commits Nov 28, 2018

@kantai kantai changed the title Draft of stip-001 SIP 001 - Burn Mining and Leader Election Nov 30, 2018

@kantai kantai changed the title SIP 001 - Burn Mining and Leader Election SIP-001 Burn Mining and Leader Election Nov 30, 2018

The above discussion outlined a few major open questions:

1. How do we set the _election window?_ This is the time between when
an election is finalized, and the leader begins broadcasting.

@jcnelson (Member) commented Dec 4, 2018

The election window is variable-sized, and can grow if a minimum burn threshold is not met.

The Stacks blockchain needs a notion of "burn difficulty" in order to safely tolerate sudden drops in miner activity. A sudden drop in burns can make it very cheap for an attacker to initiate and execute a reorg, so the decrease must happen over a long time period, but without giving up too much liveness.

Our strategy is to implement a "sawtooth" negative feedback loop for responding to increases and decreases of miner activity. Miners have to meet an additively-increasing burn threshold to make progress, but the protocol multiplicatively reduces the burn threshold in response to miners dropping off. The key idea is that if the burn threshold gets too high, the protocol should quickly adjust it back to an acceptable (but non-zero) level that entices would-be miners back to mining.

To achieve this, the election window has a well-defined "minimum burn" that must be met within the window for a leader to be elected in the next epoch. This minimum burn is simply a fraction of the election window's observed burn (e.g. 80% of the sum of the burns over the window). This minimum burn increases in response to the observed burn as more and more Bitcoin blocks arrive that contain more and more burns. However, it does not decrease in this manner. Instead, the election window grows to include more and more blocks so that the minimum burn is met by its observed burns. Once it is met, another leader can be elected, the election window "snaps back" to its original size, and the minimum burn is multiplicatively decreased by some fraction (e.g. multiplied by 0.8).

The observed burn in the election window may decrease if Bitcoin blocks arrive that have fewer and fewer burns, or no burns at all. If a Bitcoin block arrives that causes the election window's observed burn rate to go beneath the minimum burn, then (1) no leader is selected for this block's epoch, and (2) the election window expands to additively include this block's burns instead of shifting. Over time, the election window expands to include Bitcoin blocks until the minimum burn is met -- at this point, a leader is chosen from the enlarged election window, and the window "snaps back" to its original size. The minimum burn is decreased multiplicatively if the resulting election window has a lower observed burn -- for example, it could decrease by 20%. If the resulting election window has an equal or higher observed burn, the minimum burn is set to the fraction of the window's new observed burn.

For example, suppose the election window is 2 blocks, the sum of the observed burns is 1 BTC, and the minimum burn is 0.8 BTC. Suppose the two blocks in the window, B1 and B2, have 0.4 BTC and 0.6 BTC, respectively. If a block B3 arrives with zero burns, then the window would slide from [B1, B2] to [B2, B3] and would have an observed burn of 0.6 + 0 = 0.6 BTC. This is lower than the minimum burn, so no leader is elected for block B3. If block B4 arrived after B3 with 0.1 BTC of burns, then the window would expand to [B2, B3, B4] with an observed burn of 0.6 + 0 + 0.1 = 0.7 BTC. No leader would be elected for block B4. If block B5 arrived after B4 with 0.25 BTC of burns, then the window would grow to [B2, B3, B4, B5] with 0.6 + 0 + 0.1 + 0.25 = 0.95 BTC. This exceeds the minimum burn of 0.8 BTC, and the window would snap to [B4, B5] with an observed burn of 0.1 + 0.25 = 0.35 BTC. A leader would be elected by sampling the burns in [B2, B3, B4, B5] and mining would resume. Since this new observed window [B4, B5] has a lower amount of burn than the previous minimum burn of 0.8 BTC, the minimum burn decreases multiplicatively by 20% to 0.64 BTC. The process then repeats -- if the next block B6 has over 0.39 BTC of burns (so that the slid window [B5, B6] meets the 0.64 BTC minimum), then the window slides down per usual, and if it does not, the window expands to include it as described.
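As a concrete reading of this mechanism, here is a minimal sketch of the sawtooth election window. The class shape and the exact update rules (when the minimum ratchets up, what "equal or higher" compares against) are my interpretation of the comment rather than a specification:

```python
class ElectionWindow:
    """Sketch of the sawtooth election window described above.

    The window normally slides one burn-chain block at a time. If the
    observed burn in the slid window falls below the minimum burn, no
    leader is elected and the window expands with each new block until
    the minimum is met again; it then snaps back to its base size and
    the minimum burn is adjusted.
    """

    def __init__(self, initial_burns, base_size=2, min_burn=0.8, frac=0.8):
        self.burns = list(initial_burns)   # burn observed per burn-chain block
        self.base_size = base_size
        self.min_burn = min_burn
        self.frac = frac                   # e.g. 80% of observed burn
        self.start = len(self.burns) - base_size
        self.expanded = False

    def add_block(self, burn):
        """Process one new burn-chain block; return True iff a leader is elected."""
        self.burns.append(burn)
        if not self.expanded:
            # Normal operation: slide the window to the last base_size blocks.
            self.start = len(self.burns) - self.base_size
            observed = sum(self.burns[self.start:])
            if observed >= self.min_burn:
                # The minimum burn ratchets up with observed burn, never down here.
                self.min_burn = max(self.min_burn, self.frac * observed)
                return True
            self.expanded = True           # too little burn: start expanding
            return False
        # Expanded operation: the window grows until the minimum burn is met.
        observed = sum(self.burns[self.start:])
        if observed < self.min_burn:
            return False
        # Leader elected from the enlarged window; snap back to base size.
        self.start = len(self.burns) - self.base_size
        snapped = sum(self.burns[self.start:])
        if snapped < self.min_burn:
            self.min_burn *= self.frac     # multiplicative decrease, e.g. -20%
        else:
            self.min_burn = self.frac * snapped
        self.expanded = False
        return True
```

Replaying the worked example ([B1, B2] = [0.4, 0.6] BTC): B3 = 0 and B4 = 0.1 elect no leader, B5 = 0.25 triggers an election and snaps the window back, leaving a minimum burn of 0.64 BTC.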

3. How do we weight burns over time? Does an exponential weighting
function achieve desired properties (and what is the trade-off we
are making here?) How should we tune these parameters?

@jcnelson (Member) commented Dec 5, 2018

The longest valid chain wins, where "longest" means "has the most blocks." Over long enough time scales, this will also be the chain that has the most burns.

The problem we are trying to solve is to decide which of a set of valid chains is the "best." The behavior we want is for the "best" chain to be the chain that an overwhelming majority of miners compete to produce blocks on -- we want miners to reap the rewards of making progress on the already-best chain whenever possible. We achieve this end by rewarding miners for building off of the chain with the most blocks, since blocks are a proxy for Stacks economic activity.

An alternative is to consider a chain to be "best" only if it is valid and has the most cumulative burns. But this would not encourage miners to build off of the most widely-used chain: it would allow anyone to make a large burn and produce a shorter chain that is considered "better," invalidating a lot of already-confirmed blocks. Such chains should not be considered "better" precisely because doing so would enable this unintuitive behavior, and would allow malicious whales to reduce the system's liveness and hurt honest miners.

Nevertheless, the longest valid chain is also guaranteed to be the chain with the most burns over time, since an honest majority coalition of miners will all compete to be selected as the next miner by burning more BTC than their peers. The side-effect of competing to mine on the longest chain is that the sum of the burns on that chain will also (over time) exceed the sums of burns on minority forks.

The reason we can get away with only considering chain length for quality is because PoB miners enjoy two advantages over PoW miners: access to a global clock (via the PoW chain's block arrivals), and an arbitrarily-low barrier to entry for participating in the protocol. In PoW chains, the act of mining serves two purposes: to start new epochs, and to enhance chain quality. As such, a PoW chain's fork-choice rule considers the chain with the most valid proof-of-work to be the best chain.

In PoB chains, the act of mining only enhances chain quality, since the mechanism that starts and ends epochs is outside the PoB miners' control. This means that chain quality is not directly dependent on the amount of tokens burned; it's only dependent on the number of blocks produced on the same fork. The only way for a PoB minority fork to overtake the majority fork is for the majority of miners (by burn) to produce blocks on the minority fork until its length exceeds the majority fork. Since miners are competing in each epoch to be selected as the next miner (meaning that they burn a non-zero amount of tokens), over time the new majority fork will have more burns than the old majority fork.

It's important to remember that using chain length as the quality metric still allows the system to tolerate deep forks (such as to recover from a catastrophic network crash). However, this enables two nice properties of chain reorganizations: (1) anyone can "see a reorg coming" because all blocks produced are anchored to the same history on the burn chain, and (2) deep forks take a long time to carry out. These are particularly useful properties for a young PoB chain, because it increases the barrier to entry for a powerful malicious miner who can afford to out-burn a fledgling chain's miners. I think this will prove useful for app-chains, for example.
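A minimal sketch of the longest-valid-fork rule being argued for here could look like the following. The dictionary shape and the use of total burn as a length tie-breaker are assumptions on my part (the comment leaves tie-breaking unspecified):

```python
def best_fork(forks):
    """Longest-valid-fork rule: a fork's quality is its number of
    blocks (epochs), not its cumulative burn. `forks` maps a fork id
    to {'length', 'total_burn', 'valid'}; total burn breaks length
    ties only (the tie-breaker is an assumption of this sketch).
    """
    candidates = {k: f for k, f in forks.items() if f["valid"]}
    return max(candidates,
               key=lambda k: (candidates[k]["length"],
                              candidates[k]["total_burn"]))

forks = {
    "majority": {"length": 120, "total_burn": 40.0, "valid": True},
    "whale":    {"length": 90,  "total_burn": 95.0, "valid": True},
}
best_fork(forks)   # the longer fork wins despite having fewer burns
```

This captures the whale argument above: the heavily-burning short fork loses to the longer fork, so out-burning the network is not by itself enough to reorganize it.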

jcnelson added some commits Dec 13, 2018

@jcnelson (Member) commented Dec 18, 2018

Hey @kantai, can I get your review on the current version of SIP 001?

@kantai (Member) commented Dec 18, 2018

Yes -- I plan to read it today.

jcnelson added some commits Dec 18, 2018

@kantai (Member) commented Dec 19, 2018

Awesome, thanks for updating this Jude! I pushed a couple of nitpicks.

I had a thought regarding the burn commitments while reading this -- I believe the important thing to commit to is the chain tip that the would-be leader builds off of (that determines whether or not the leader is attempting to reorg/self-orphan), not necessarily the contents of the block being appended. This means that committing to the chain tip alone would be sufficient to punish a leader who attempts, but fails, to reorganize the chain, and that we could support something like the streaming model, which would reduce latency and the bandwidth requirement of propagating runner-ups' blocks. Does that seem correct to you @jcnelson ?

kantai and others added some commits Dec 19, 2018

@jcnelson (Member) commented Dec 20, 2018

I think you're right on the point about leaders having to commit specifically to chain tips instead of chain tips and data. But to be clear, there's exactly one block created per epoch in both the batch and streaming models, and nodes will still be required to relay blocks that won sortition in a prior epoch but may not be on the canonical chain. I don't think either model requires propagating runner-up blocks unless Bitcoin itself re-orgs and a runner-up becomes the winner (in which case, both approaches need to relay runner-up blocks).

I would very much prefer to use the streaming model over the batch model if they have equivalent incentive models, since it offers a much better user experience. I think the fact that we use a reward window and a 40/60 transaction fee split makes them equivalent enough. But, unlike the batch model, the following extra degrees of freedom are available to leaders in the streaming model:

  • Because a streaming leader has to stay online for its tenure, it becomes a DDoS target once it comes online. This isn't true for the batch model -- the leaders in this model only have to be online long enough to send their block. However, I think the fact that all leaders share their block rewards means that leaders at least have an incentive to help active leaders stay online (including helping them relay microblocks and offering them anti-DDoS services).

  • Because a streaming leader decides which microblock to build off of, a streaming leader basically decides which data the previous leader actually sent. This isn't true in the batch model -- the batch leader has to build off of an entire block no matter what. The 40/60 transaction fee split helps encourage rational leaders to build off the latest microblock, but a dishonest leader now has a variable price-point at which someone could bribe them to partially orphan the last block (i.e. someone could bribe at a per-transaction rate). The batch model raises this price point to the whole block reward. The effectiveness of bribing could be reduced by honest leaders mining high-value transactions later in their tenure, but this in turn could encourage bribers to DDoS the leader towards the end of their tenure in order to make their bribe cost lower (and this would hurt users who are willing to pay the most for the miner's services).

What about a hybrid strategy, where the leader commits to some transactions (specifically, high-value ones) and streams low-value transactions opportunistically?

  • When not active, a leader monitors the mempool for high-value transactions.
  • When a leader submits its candidacy, it commits to two things -- a chain tip, and the Merkle root of the sequence of highest-value transactions they must mine.
  • When selected, a leader streams microblocks per usual, but it also tags the transactions it committed to in the Merkle root so other nodes can validate the stream against the commitment.
  • If the leader successfully streams all of the transactions it committed to, then the block is valid. If it's missing one, then the entire block is invalid and all leaders lose out on the block rewards for this epoch.

I think this hybrid strategy fixes the two problems above. Because a leader agrees to unconditionally mine certain transactions before its tenure starts, the leader is able to prevent a future leader from orphaning them without also orphaning the entire block. This raises the bribe price for orphaning a block to the coinbase plus a large-ish fraction of the transaction fees. It also gives users a way to increase the certainty that their transaction will not get orphaned by paying a higher fee -- i.e. committing to a transaction would be a value-add service for miners to sell to users. What do you think?

@kantai (Member) commented Dec 20, 2018

I like this hybrid approach! It basically says "if you want to include my epoch in your chain, you must at least include these transactions." I think honest leaders would be incentivized to include as many mempool transactions as they are aware of in that "committed" set. Then, if new transactions enter the mempool during their epoch, they'd stream those.

talk about a hybrid approach to streaming and batching transactions; talk more about fork selection rules and how the sortition algorithm does high-pass and low-pass filtering on burns
@kantai (Member) commented Jan 4, 2019

If the burn rate increases too quickly, then a few rich leaders can quickly dominate the sortition process (and rewards) and effectively take over the chain before other participants have had a chance to react. A set of rich leaders could burn a large amount of burn tokens to produce an alternative fork in only a few rounds (on the order of the depth of the fork) and maintain it as long as they have at least 51% of the burn capacity. Also, if there are too many commitment transactions or burn transactions in the burn chain mempool, a "burn collapse" event can result whereby a legitimate commitment transaction is prevented from being mined and the Stacks blockchain stalls for the epoch.

I'm not sure how setting an upper limit on total burns solves this situation--- in the above situation, the rich leader can simply split their burns among multiple keys and crowd out other users' burns.

@jcnelson (Member) commented Jan 12, 2019

I'm not sure how setting an upper limit on total burns solves this situation--- in the above situation, the rich leader can simply split their burns among multiple keys and crowd out other users' burns.

The wording here could be improved. The overall point I'm trying to make in this section is that we want a deep fork that arises from a sudden change in burn rate to take O(h + d) epochs to take effect, where the fork length is h blocks and d is a function of the burn rate change. The key idea is that we want to give the rest of the network an extra d epochs to react to the fork if the burn rate suddenly increases or decreases drastically. I agree with your point that leaders can simply spread out their burns.

elaborate more on the purposes for burn quotas -- they make deep forks take longer in the event of wild burn rate fluctuations
@kantai (Member) commented Jan 14, 2019

The wording here could be improved. The overall point I'm trying to make in this section is that we want a deep fork that arises from a sudden change in burn rate to take O(h + d) epochs to take effect, where the fork length is h blocks and d is a function of the burn rate change. The key idea is that we want to give the rest of the network an extra d epochs to react to the fork if the burn rate suddenly increases or decreases drastically. I agree with your point that leaders can simply spread out their burns.

But wouldn't this no longer be a problem, because we're weighting competing chains by number of epochs rather than burn weight? You'd still want a minimum burn rate (to prevent easy forks), but a maximum would no longer be necessary, because the fork depth would be rate-limited by Bitcoin.

@jcnelson (Member) commented Jan 14, 2019

But wouldn't this no longer be a problem, because we're weighting competing chains by number of epochs rather than burn weight? You'd still want a minimum burn rate (to prevent easy forks), but a maximum would no longer be necessary, because the fork depth would be rate-limited by Bitcoin.

My thinking was that we'd want to make it hard for deep forks to emerge by imposing a maximum burn rate on non-canonical forks. In particular, we would cap the probability that a non-canonical fork would be selected no matter how much it burns. Not sure this is necessarily a good idea though, since it also makes it hard for an honest coalition to regain control of the chain.

@kantai (Member) commented Jan 14, 2019

My thinking was that we'd want to make it hard for deep forks to emerge by imposing a maximum burn rate on non-canonical forks. In particular, we would cap the probability that a non-canonical fork would be selected no matter how much it burns. Not sure this is necessarily a good idea though, since it also makes it hard for an honest coalition to regain control of the chain.

Right, but my question was maybe due to a misunderstanding. I thought that we had changed the fork selection criteria to be "select the fork with the most epochs" instead of "select the fork with the most burn weight".

@jcnelson (Member) commented Jan 14, 2019

Yes -- the fork selection criterion is the longest valid fork by number of epochs (not burn weight). However, a fork's quality is still linked to the burn rate it receives -- a fork can only become the canonical fork if its leaders win sortition more often than any other fork (with high probability).

Now, suppose a rich "whale attacker" comes online and starts trying to execute a deep fork off of a chain tip that is (currently) h blocks in the past. Let's say the whale attacker burns enough Bitcoin that it has a 90% chance of winning sortition each time. Then the attack chain can overtake the main chain after about 1.25h epochs in expectation if burn rates hold steady, since the gap closes by 0.9 - 0.1 = 0.8 blocks per epoch (the main chain keeps growing whenever the attacker loses). But if h is small, then the attacker has a good chance of overtaking the main chain before the rest of the network participants can even get a block in. This is particularly bad if honest actors use the chain itself to propagate warnings about pending network disruption (recall that users can see the attack coming), or if the chain's miners are in the process of voting to upgrade the system.

The case for capping the maximum burn on non-canonical forks is that it basically prevents the rich attacker's chain from quickly overtaking the main chain when h is small. The attacker would not be able to start the attack with a 90% chance of winning sortition -- they would instead start with e.g. a 50% chance at most. They would eventually overtake the main chain, but capping their burn rate buys the current leaders more time to react.
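For intuition on the overtake time, here is a quick Monte Carlo of the race sketched above, under the simplifying assumption that exactly one fork gains a block per epoch: the attacker wins each sortition with probability p, and the main chain gains the block otherwise. The gap then closes at 2p - 1 blocks per epoch, so a fork h blocks deep takes roughly h / (2p - 1) epochs to overtake:

```python
import random

def epochs_to_overtake(h: int, p_attacker: float, rng: random.Random) -> int:
    """Count epochs until an attacker starting h blocks behind holds
    the longer fork, assuming they win each sortition with probability
    p_attacker and the main chain gains the block otherwise."""
    gap, epochs = h, 0
    while gap >= 0:              # gap < 0 means the attack fork is now longer
        epochs += 1
        gap += -1 if rng.random() < p_attacker else 1
    return epochs

rng = random.Random(1)
h, trials = 40, 5000
mean = sum(epochs_to_overtake(h, 0.9, rng) for _ in range(trials)) / trials
# mean comes out near (h + 1) / (2 * 0.9 - 1) = 51.25 epochs, i.e. ~1.25h
```

With p = 0.55 instead (a bare majority of burns), the same fork takes about 10h epochs, which illustrates why fork depth is effectively rate-limited by the burn chain.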

@kantai (Member) commented Jan 14, 2019

The case for capping the maximum burn on non-canonical forks is that it basically prevents the rich attacker's chain from quickly overtaking the main chain when h is small. The attacker would not be able to start the attack with a 90% chance of winning sortition -- they would instead start with e.g. a 50% chance at most. They would eventually overtake the main chain, but capping their burn rate buys the current leaders more time to react.

How is that enforceable though? Doesn't it just change the game from "burn the most" to "crowd out others' transactions the most", in which case, whales are just as (if not even more) able to virtually guarantee success by spamming high fee transactions.

@jcnelson (Member) commented Jan 14, 2019

How is that enforceable though? Doesn't it just change the game from "burn the most" to "crowd out others' transactions the most", in which case, whales are just as (if not even more) able to virtually guarantee success by spamming high fee transactions.

Yes, but this would be true no matter what we do.

What I'm trying to figure out is a way to slow down a rich attacker working on a non-canonical fork, no matter what the activity is on the canonical fork. Maybe this isn't a problem worth working on beyond requiring a deep fork of h blocks to take O(h) epochs to produce (which the fork choice rule already guarantees).

jcnelson added some commits Jan 16, 2019

to help realize a close approximation of this payout distribution.

## Sharing the rewards among winners

@kantai (Member) commented Jan 18, 2019

Is this example a correct interpretation of this reward sharing scheme: if there are three epochs, and a "reward window" of size 2, at time t = n + 1, the first window rewards:

  • the leader of Epoch 1 proportional to their burns over Epoch 1 and Epoch 2
  • the leader of Epoch 2 proportional to their burns over Epoch 1 and Epoch 2

And the amount being rewarded here is the coinbase and the transaction fees? I think for this to work with the way that the 40/60 split applies to streamed transactions, we'd need a more specific scheme. Because if a miner derives equal-ish benefit from fees in the next block, they're incentivized to try to reorg a microfork for heavy transactions, so that they get both the 40% and the 60%. I think we need it to be the case that a leader of epoch N receives no reward for the following epochs. A window could still be rewarded, but it'd be something like: the rewards for Epoch N are distributed amongst the leaders of Epoch N, Epoch N+1, ..., Epoch N+k.
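The proposal in the last sentence can be sketched as follows. The equal split among the k + 1 eligible leaders is an assumption of this sketch -- the comment leaves the weighting open (it could, for instance, be proportional to burns instead):

```python
def distribute_rewards(epoch_rewards, k):
    """Split the reward for epoch N among the leaders of epochs N
    through N+k, so a leader collects shares only from their own epoch
    and the k epochs before it. Epochs near the chain tip split among
    however many of those leaders exist yet.
    """
    n = len(epoch_rewards)
    payouts = [0.0] * n
    for epoch, reward in enumerate(epoch_rewards):
        recipients = range(epoch, min(epoch + k + 1, n))
        share = reward / len(recipients)
        for leader in recipients:
            payouts[leader] += share
    return payouts

# Three epochs, window k = 1, 10 units of reward per epoch: the leader
# of the first epoch gets half of its reward; the leader of the last
# epoch gets half of the middle epoch's reward plus (so far) all of its own.
distribute_rewards([10.0, 10.0, 10.0], k=1)   # -> [5.0, 10.0, 15.0]
```

Note that under this split a leader's payout never includes fees from epochs after their own, which is the property being asked for: there is no benefit to reorging a microfork to capture the next block's fees.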

@jcnelson (Member) commented Jan 18, 2019

I like that -- the rewards for epoch n are distributed amongst leaders of epochs n..n+k. However, I think it might be safer if transaction fees for streamed transactions are not distributed via a reward window, but are instead given directly to the leader in a 40/60 split. What do you think?

security method for the blockchain implies a direct metric: the total sum of
burns in the election blocks for a candidate chain. In particular, **the Stacks
blockchain measures a fork's quality by the total amount of burns which _confirms_ block _N_** (as
opposed to the amount of burn required for the _election_ of block _N_).

@kantai (Member) commented Jan 18, 2019

I don't think the above paragraph matches a "longest valid fork" rule --- my understanding of the longest valid fork rule was to simply count epochs, regardless of how much was burned in any particular epoch. That's why we wanted to have a minimal burn amount for epochs.

@jcnelson (Member) commented Jan 18, 2019

Yeah, that's a typo. We want "longest valid fork" not "fork with the most burns."

jcnelson added some commits Jan 18, 2019

break down the reward distribution better; talk more about how leaders can get rewarded for committing to a chain tip even if they don't win sortition; have the leader block commit indicate its intended epoch number