Reduce minimum fee variability #70

UkoeHB opened this issue Feb 16, 2020 · 46 comments

STATUS: The solution in this comment appears to resolve the concerns raised by this issue.

NOTE: Starting at this comment the discussion shifted from the original topic to questions about scaling and network stability.

As many people know, the fee algorithm has gone through various iterations. At one point, JollyMort performed an extensive analysis which led to our dynamic fee algorithm based on a 'reference' transaction, and since then long term block weights have been added to mitigate the big bang attack. I believe there is still room for improvement.

Overview of current algorithms

Please see Zero to Monero: Second Edition, sections 7.3.2, 7.3.3, and 7.3.4. As I understand it, @ArticMine is the architect of those algorithms.

Problem to be solved

Imagine adoption is humming along and the long term median gets up really high, say 100x the current default penalty free zone (300kB, so we'd be at 30MB blocks [sic]). Suddenly catastrophe hits, and several major sources of transaction volume are taken offline (maybe some darknet markets were attacked by law enforcement, or perhaps it's world war 4 and some legal marketplaces get hit by EMPs). Short term transaction volume collapses to 5x the default penalty free zone, and now minimum fees are abruptly 20x higher. This is because per byte fees are divided by the minimum of short and long term medians.

Transactions built before the short term median falls will be using the older fees, which will now be inadequate, and those tx will get stranded, even if they are using the 'normal' fee priority which is 5x the minimum fee.

A similar thing could happen with much smaller volume shocks, to any transaction using the lowest fee priority (e.g. 1x the minimum).

It's not a problem if transaction volume never really exceeds the default penalty free zone, but the design of the fee algorithm has always been forward-looking, so I don't see any reason to hold back.

Solution (OUTDATED, go to this comment)

Simply, make the minimum fee median FM = max(300kB, long_term_median). Now it will only gradually change as the long term median changes.

Of course, we must also change the penalty median since the fee is intended to afford one reference transaction's worth of penalty cost. Use the long term median ML = max(300kB, median_100000_long_term_weights) for penalty calculations instead of the cumulative median. But wait, this would seem to retract the upgrade for surge environments (we want some short term flexibility in block weights). The key insight here is making the 'penalty free zone' scale with the short term median, and the 'penalty zone' scale with long term median. Let's look at the penalty calculation (as it is done now; CM is the cumulative median CM = max(300kB, min(median_100_blocks_weights, 50*ML))).

P = B*(block_weight/CM - 1)^2

Note that the penalty only activates when block_weight > CM. We can reorganize it:

P = B*([block_weight - CM]/CM)^2

We want the 'activation' to happen when block weights are larger than the cumulative median, while the penalty itself scales with the long term median.

P = B*([block_weight - CM]/ML)^2

And the maximum block weight is MAX = CM + ML (instead of MAX = 2*CM which we have now).

The penalty to add one transaction above the cumulative median will slowly change as the long term median changes, but the penalty free zone is still free to fluctuate relatively rapidly. This also greatly increases the cost to set up a spam attack, as attackers will pay a fixed rate per byte to increase the short term median, rather than a fixed rate per percentage of the penalty zone (which becomes equivalent to more and more bytes as the short term median rises, with our current algorithms). It also ingrains this anti-spam mechanism directly into the penalty calculation, rather than the fee algorithm which is technically optional for nodes (it is network consensus, not protocol consensus, to enforce the minimum fee).

Changing long term median to long term average

The long term median is meant to reflect a large volume of transaction history, yet it can also change quite abruptly if short term volume shocks turn into long term volume shocks. Suppose transaction volume is chugging along at a high rate for quite a while, then after our catastrophe it halves semi-permanently. 35 days later the long term median will suffer a rapid adjustment, which seems to defy its supposed 'long term' nature. A similar thing could happen in the opposite direction with a sudden 40% rise in transaction volume. By making it a long term average, adjustments to changing overall transaction volume would be 'priced in' much more gradually.

It makes sense to keep our short term median, a median, since it works quite well for ignoring outliers. There are no meaningful outliers over 100000 blocks.

Reducing granularity: the construction-publish time delta heuristic

Transaction authors always base their tx fees off the current blockchain state, yet it can take some time before the tx actually gets mined into a block. Observers can compare the included fee to the expected fee for a tx's resident block to estimate the time delta between tx creation and being mined (image credit: @Isthmus). That heuristic may be useful to fingerprint different kinds of users. Note that this heuristic also applies to the age of the youngest ring member, so rounding fees like this doesn't completely solve the problem.

We can do simple rounding on the minimum fee, which already changes only slowly over the course of a single day (assuming my updated fee using long term average), to reduce the granularity of the time delta heuristic. Fewer significant digits equates to less granularity.

Suppose we want to round up to N significant digits, F is the explicitly calculated fee, and we are doing integer arithmetic. Find the order of magnitude of F (i.e. number of decimal digits), N_f, then calculate

throw_away = F % 10^(N_f - N);   // here ^ denotes integer exponentiation, not C's XOR

if (throw_away > 0)
{
    F -= throw_away;
    F += 10^(N_f - N);   // round F up to the next multiple of 10^(N_f - N)
}
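For reference, a self-contained C++ sketch of this rounding, including the digit-count step described above (the function name and types are mine, not from the codebase):

#include <cstdint>

// Round F up so that it keeps at most N significant decimal digits.
uint64_t round_fee_up(uint64_t F, unsigned N)
{
    // N_f: number of decimal digits in F
    unsigned N_f = 0;
    for (uint64_t x = F; x > 0; x /= 10)
        ++N_f;
    if (N_f <= N)
        return F;  // already coarse enough

    // quantum = 10^(N_f - N)
    uint64_t quantum = 1;
    for (unsigned i = 0; i < N_f - N; ++i)
        quantum *= 10;

    const uint64_t throw_away = F % quantum;
    return throw_away > 0 ? F - throw_away + quantum : F;
}

// e.g. round_fee_up(123456789, 2) == 130000000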
@UkoeHB commented Feb 16, 2020

Also, our fee algorithm sort of assumes transactions are added to the blockchain soon after being constructed. Multisig requires multiple rounds of communication, so many multisig tx are doomed to be mined into a block a while after being initiated. This is especially the case for an escrowed market, where for the sake of UX partial tx need to be created at fulfillment, and only signed after delivery, which may take weeks. In a healthy economy a large part of tx volume would be marketplace oriented.

@UkoeHB commented Feb 21, 2020

I don't believe the solution I offered is entirely adequate in the case of a surge environment. Block weights need to rapidly adjust when transaction volume suddenly changes, which means pushing up the penalty zone shouldn't be linear, but rather accelerate given constant pressure.

There are three important block weight environments to consider: sudden surge of honest transaction volume, sudden collapse in volume, and an attacker who wants to bloat the chain. I assume we will keep using the long term median (or average) and short term median as basic building blocks, since they seem quite useful for capturing different kinds of volume changes.

My solution is at the bottom :) @ArticMine

Environment 1: honest volume rises rapidly

Transaction volume is going at a steady pace for most of the year (short term median MS = long term median ML), then it's the holidays and many more tx are available than can fit in a block. MS rises, and it needs to rise quickly to fit in all these tx. However if allowed to rise exponentially for too long, then it will simply explode out of control. Our current cumulative median (CM) is well designed for this scenario, and penalties are directly calculated from it. Usually the penalty median is just MS, except it won't go higher than 50*ML. In other words, the penalty zone scales directly from recent block weights.

CM = min(MS, 50*ML)
P = B*([block_weight - CM]/CM)^2

Environment 2: honest volume falls rapidly

A world-shaking calamity knocks off a ton of tx volume. MS falls, while ML is still quite high. In this case our penalty zone would shrink by a lot, and while the fee required to add a reference transaction equal in size to 1% of the penalty zone will stay the same, that 1% will be much smaller, so a normal transaction will be a larger proportion of it, and hence fees will rise. On the surface this doesn't seem like a problem since transactions can just use the new fee levels. However, any transaction constructed BEFORE MS fell will be stranded, since it has lower fees from the earlier fee levels.

Clearly this is the problem to solve here, but I'm including all the other points for context. We don't want to break anything.

Environment 3: blockchain spammer

Since we do have a dynamic block weight, obviously it's impossible to prevent someone from pretending to be honest and pushing up the medians (or a benign person from carelessly making transactions willy-nilly). The solution is making that kind of behavior more expensive. Our primary mechanism for this is preventing the fee from going lower when the short term median rises. In other words, basing the minimum fee on the long term median, so an attacker has to shoulder all the cost of both raising block weights and then holding them above that long term median.

smallest_median = min(MS, ML)
F = B*(1/smallest_median)*0.002
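As a worked example of this formula (using the numbers from the next paragraph: block reward B = 2 XMR and both medians at the 300kB default penalty free zone):

F = 2*(1/300000)*0.002 ≈ 0.000000013 XMR per byte
fee to fill the penalty free zone = F*300000 = B*0.002 = 0.004 XMR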

In our current situation an attacker can spend just enough in fees for the short term median to hit 50 x long-term-median. With current (as of this writing) block rewards at 2 XMR, an optimized attacker can increase the short term median by 17% every 50 blocks, and reach the upper bound after about 1300 blocks (about 43 hours), spending 0.39*2 XMR per block, for a total setup cost of about 1000 XMR (or around 90k USD at current valuations), and then go back to the minimum fee. When the fee median equals the penalty free zone, then the minimum total fee to fill the penalty free zone is 0.004 XMR (about 0.26 USD at current valuations). If the fee median equals the long term median, it would in the spam scenario be 1/50th the penalty free zone. Therefore it would just be 50x the short-median case, for 0.2 XMR per block (13 USD per block). This comes out to 2.88 XMR per day (when long term = short term) vs 144 XMR per day (for 69 days, until the long term median rises by 40%) to maintain every block with 50 x long-term-median block weight. This will reduce to 300 XMR setup, and 43 XMR daily maintenance, at the emission tail. My suggestions here will not change this analysis.

New fee and penalty calculations (OUTDATED)

The fee should always be enough to pay for adding a 'reference' transaction to the penalty zone (e.g. a tx equal in size to 1% of the penalty zone). This prerequisite follows from JollyMort's analysis. We also want to maintain accelerating block weights in response to surge, and the cost of a spammer holding block weights above ML.

The penalty zone should rise above ML in surge, but not fall below ML in collapse. However, the penalty free zone MAY fall with MS, so there is always a cost to short-term fluctuations in block weight. And, fees should always depend on ML so lifting weights above it is a burden for spammers and so they are unaffected by short term events.

MS = short term median of normal block weights (100 blocks)
ML = long term median of long term weights (100000 blocks)

CM = min(max(MS,ML),50*ML)
Penalty free zone = MS
Max block weight = MS + CM
Penalty (P) = B*([block_weight - MS]/CM)^2

Minimum Fee per byte (F) = B*(0.002/ML)

CM, MS, and ML may never go below the default penalty free zone of 300kB.

Note: my ideas for long term average and reduced granularity still stand.

EDIT1: fixed some typos

@Gingeropolous

I would recommend going the same route Jolly Mort did - make a simulation. There are some that can keep everything about a dynamical system in their head - but I am not one of them, and thus I assume there are others. Or I could just be tired and want to stare at pretty math pictures.

For instance, in your first scenario above, the way that I understand the extant algorithm indicates that it would kinda resolve itself. You're saying we're at a steady state of 30 MB blocks. So that means that the long term and short term medians are 30 MB. Once the event occurs, the number of transactions lowers to 5 MB a block. Until the short term median triggers (let's call it 720 blocks), the min transaction fee would be the same. On rollover of that median, .... hrmm, yeah. I guess there would be a spike in min fee all of a sudden.

I remember thinking about this too. I think I called it the "soft tail".

However, let's stick with the extant system... so now there's an increase in min fee. It costs more to send money and to use the chain. Why? Well, there's less demand. So now either the blockchain stays at 5 MB a block and people continue paying the high fees, or it lowers even more to 3 MB because people are being more conservative with their monero.

What's this do? Well, less money velocity does what to an economy. If using money is expensive, I'll do it less. So there's less demand for goods, so the prices of goods start falling. With the prices of goods falling, people are selling fewer goods, so production decreases. With production decreasing, there is less work to be done. With less work to be done, there are fewer jobs.

However, how is the network doing? The "event", in addition to disrupting the economy, disrupted the network. Major interlinks gone. Network is fragile.

I hate to say it but this might be why it's designed this way. Monero is a network, and if its backbone is damaged, it would need to scale to the functioning part of the backbone. A backbone with 30% of its capacity attempting to process 100% of where it was at steady state before the event will be fragile and insecure. Probably massive/constant reorgs as miners leverage their newfound latency for selfish mining. Though, if that's the only thing, maybe it's not so bad. Perhaps there are others, I dunno.

And how would it all wrap together? You've got an event, it's wiped out economic activity, so it's probably wiped out a lot more. Presumably there is a desire to rebuild. How does lower velocity money affect a recovery? But, the miners are making more. And who are the miners? The people.

So the people are getting more monero to rebuild after the event, in combination with some lack of demand due to increased cost of money.

Someone call an economist!

Hrmmmmm.

@UkoeHB commented Feb 21, 2020

The problem I'm describing doesn't have to be calamitous, I just say that to really emphasize what's going on. Minor falls in tx volume of 5% could have the same effect on anyone using the minimum fee (note there is a 2% leeway around the minimum fee to account for integer rollover). Note the problem is transactions getting stranded because they offer fees that used to be enough, but are now too little after tx volume fell. Rather than suggest people not use the minimum fee (in which case is it really the minimum?), it seems better to make it more reliable.

The long term average can fall by no more than ~1/3 over the course of a month even if tx volume goes to zero. This means default fee priority will always be enough for tx that don't get added to blocks for quite a long period of time after being constructed, even when volume changes are absurd.

The economic consequences of calamitous events are not anything we can really address. Also, the velocity of money seems like a red herring except as an esoteric curiosity, but this isn't really the place for that argument :p.

@Gingeropolous

Also, the velocity of money seems like a red herring except as an esoteric curiosity, but this isn't really the place for that argument :p.

IMO, ignoring meat world realities is a dangerous slope. The protocol is designed to respond to meatworld in such a way to keep the network / blockchain stable.

Note the problem is transactions getting stranded because they offer fees that used to be enough, but are now too little after tx volume fell.

Indeed. However, these transactions have already been broadcast and are in the txpool of the miner. Thus, it is the miner's prerogative to include them in a block if the opportunity exists. I.e., any chance they have of nabbing that fee, they will take. The stranded-forever scenario assumes that blocks are entirely full after the event, afaiui.

@UkoeHB commented Feb 21, 2020

However, these transactions have already been broadcast and are in the txpool of the miner. Thus, it is the miners prerogative to include them in a block if the opportunity exists

Another way to think about it is long-term multisig transactions that start getting constructed weeks before they are signed (e.g. in an online marketplace this would be commonplace). If the minimum fee changes too much by the time they get submitted, then the transaction will just get rejected.

Going back to the discussion around the extra field, which has a similar 'it's up to the implementer' atmosphere, I do feel it might be worthwhile to consider enforcing the minimum fee at the protocol level. It's a controversial idea though.

@UkoeHB commented Feb 24, 2020

The previous solution had a slight mistake. Using MS directly leaves room for the big bang attack. Instead, just use the current CM.

MS = short term median of normal block weights (100 blocks)
ML = long term median of long term weights (100000 blocks)

CM_free (current CM) = cumulative median for penalty free zone = min(MS,50*ML)
CM_penalty = cumulative median for penalty zone = max(CM_free, ML)
Penalty free zone = CM_free
Max block weight = CM_free + CM_penalty
Penalty (P) = B*([block_weight - CM_free]/CM_penalty)^2

Minimum Fee per byte (F) = B*(0.002/ML)

CM_free, CM_penalty, MS, and ML may never go below the default penalty free zone of 300kB.

Note: my ideas for long term average and reduced granularity still stand (see original post up above).
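For concreteness, a minimal C++ sketch of these calculations (the names and types are illustrative, not actual Monero code):

#include <algorithm>
#include <cstdint>

constexpr uint64_t ZM = 300000;  // default penalty free zone, in bytes

// MS: 100-block median of block weights; ML: 100000-block median of long term weights.
struct Medians { uint64_t MS; uint64_t ML; };

uint64_t cm_free(const Medians &m)    { return std::max(ZM, std::min(m.MS, 50 * m.ML)); }
uint64_t cm_penalty(const Medians &m) { return std::max(cm_free(m), std::max(ZM, m.ML)); }
uint64_t max_block_weight(const Medians &m) { return cm_free(m) + cm_penalty(m); }

// Penalty on the block reward: zero inside the penalty free zone,
// the full base reward at the maximum block weight.
double penalty(uint64_t block_weight, double base_reward, const Medians &m)
{
    if (block_weight <= cm_free(m))
        return 0.0;
    const double excess = double(block_weight - cm_free(m)) / double(cm_penalty(m));
    return base_reward * excess * excess;
}

// Minimum fee per byte: pays the penalty for one reference tx sized at 1% of the penalty zone.
double min_fee_per_byte(double base_reward, const Medians &m)
{
    return base_reward * 0.002 / double(std::max(ZM, m.ML));
}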

ArticMine added a commit to ArticMine/Monero-Documents that referenced this issue Aug 5, 2020
@ArticMine

First, I confirm the validity and seriousness of this issue. At the most basic level, an external event can cause the short term median to fall after 51 blocks to the level of the penalty free zone, currently at 300,000 bytes. Once this happens fees will rise very sharply. For example, if before the event both the long term and short term medians were at say 30,000,000 bytes, fees will rise by 100x; if both the long term and short term medians were at 300,000,000 bytes, then fees will rise by 1000x, etc. Furthermore, after 50,001 blocks the long term median could also fall to the level of the penalty free zone, currently at 300,000 bytes. With the current maximum scaling of the long term median (1.4x), a recovery of the long term median from 300,000 bytes to 300,000,000 bytes would take 21 cycles of the long term median (1.4^21), or over 1 million blocks at a minimum. This corresponds to close to 4 years under very unrealistic ideal circumstances.

Furthermore, there is also a related issue. The current rate of growth of the long term median is very close to being maxed out, or even exceeded, by the recent growth in Monero transactions. Consider for example the period from March 2019 to March 2020, https://bitinfocharts.com/comparison/monero-transactions.html#log, and compare this to the maximum annual growth in the long term median of just over 5x. This can leave very little room for a recovery in the long term median from an external event. This scenario can actually be considered a Big Bang Attack in reverse. The current COVID-19 pandemic could very well have triggered such an external event were it not for the fact that Monero's transaction rate is producing blocks below the penalty free zone of 300,000 bytes.

In 2019 the long term median was introduced in Monero in order to mitigate against a big bang attack. In a similar fashion the long term median can be used to address this issue. Furthermore, we can adjust the long term median to address the recent rate of growth in Monero transactions and allow for a recovery after an external event. The proposed changes can be summarized as follows:

  1. Consensus Changes:

a) Increase the rate of growth for the long term median from 1.4x to 2x.
b) Limit the rate of decay for the long term median to 0.5x
c) Set the penalty free zone to the maximum of the minimum penalty free zone (currently at 300,000 bytes) and 0.25x the long term median. This effectively places a floor under the short term median.

  2. Minimum Fee For Node Relay Changes:

a) Base the minimum fee for node relay on the actual reference transaction size and add the safety margin of 10% over the Jolly Mort simulation to the wallet fees. The lowest wallet fee, equivalent to the current normal fee, would be 10% above the minimum node relay fee. This is based upon @UkoeHB's recommendation.
b) I have also included an option for CLSAG where the reference transaction size is 2700 bytes, both as a close comparison to the current situation and also for the case where it is desired to increase the mixin above 11 as opposed to lower the fee per byte after CLSAG.

  3. Wallet Fee Changes:

a) The 10 empty block adjustment for calculating wallet fees is now applied to both the long term and short term medians.
b) The normal fee is now the lowest fee. It protects against a change in the medians for 10 blocks.
c) The low fee is 4x the normal fee and protects against a change in the medians, in addition to the normal fee protection, for the balance of the long term median.
d) The medium fee is 4x the low fee and protects against a change in the medians, in addition to the low fee protection, for an additional 100,000 blocks.
e) The high fee is at least 4x the medium fee and protects against a change in the medians, in addition to the medium fee protection, for an additional 100,000 blocks. This is 200,000 blocks over the low fee. It also provides, as a minimum, the maximum penalty rate fee.

One further consideration is that to ensure that transactions that pay the high fee meet the minimum node relay fee for 200,000 blocks it would also be necessary to grandparent transactions during hard forks for at least an equivalent number of blocks. This would be needed where a hard fork changes the validity of transactions.

The specification document link is: https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2020.pdf

@UkoeHB commented Aug 5, 2020

Hi @ArticMine, thanks for your work!

Comments on the specification document (PART 1; saving part 2 for later).

  1. ML: this is the long term median, i.e. the median of 100k blocks' long term weights. It's not clear what you are proposing. Are you suggesting to change the long term block weight, or just how the effective long term median is computed?
//long term block weight change?
LBW = max(min(BW, 2*ML), ML/2)
//effective long term median change (keeping long term block weight as it is currently)?
ML = max(100000_LBW_median, ZM, ML/2)

If the latter is true, then I think it would have no impact at all. After 10 blocks your ML would be ML_original*(1/2)^10. If the former is true, then I agree it should limit long term median change to [+2x, -0.5x] over 50k block intervals. It's an important innovation that lessens the chances of the long term median getting trapped.

  2. MS: there seems to be a mixup between the 'short term median' and the 'effective short term median'
//current 'effective short term median'
MS_effective = max(ZM, 100_BW_median)
//what you described
MS_effective = 100block_median{max(ZM, BW)}

While those equations are functionally equivalent, it makes me uncertain about the proposed changes. Specifically for the 'cumulative weights median' which you called MN (penalty calculation term).

//current 'cumulative weights median' aka MN
MN = max(ZM, min(MS_effective, 50*ML))
//what I estimate your proposal says
MN = 100block_median{max(max(ZM, ML/4), min(MB, 50*ML))}

Are you saying to compute MN recursively? In other words, store a 'short term block weight' like SBW = max(max(ZM, ML/4), min(MB, 50*ML)) and then MN = 100block_median{SBW}? Or should it be more like this:

MN = max(ZM, ML/4, min(100_BW_median, 50*ML))

I am tentatively on board with this. My only concern is it makes room for some transaction spam in a collapse scenario. If volume goes below ML/4 then you effectively have a higher 'default penalty free zone' that attackers can spam in. That's the main reason I suggested splitting up the zones, allowing the penalty free zone to more fluidly fluctuate. It would force attackers to buy space beyond the natural transaction volume.
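To make the question about MN concrete, here is a small C++ sketch of the two readings as I understand them (the names are mine):

#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t median(std::vector<uint64_t> v)
{
    std::sort(v.begin(), v.end());
    return v[v.size() / 2];
}

// Reading 1 (recursive): clamp each block weight into a 'short term block weight'
// SBW = max(max(ZM, ML/4), min(BW, 50*ML)), then take the 100-block median of the SBWs.
uint64_t mn_reading_1(const std::vector<uint64_t> &last_100_bw, uint64_t ZM, uint64_t ML)
{
    std::vector<uint64_t> sbw;
    for (const uint64_t bw : last_100_bw)
        sbw.push_back(std::max(std::max(ZM, ML / 4), std::min(bw, 50 * ML)));
    return median(sbw);
}

// Reading 2 (direct): take the raw 100-block median, then clamp the result once.
uint64_t mn_reading_2(const std::vector<uint64_t> &last_100_bw, uint64_t ZM, uint64_t ML)
{
    return std::max({ZM, ML / 4, std::min(median(last_100_bw), 50 * ML)});
}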

@UkoeHB commented Aug 6, 2020

Comments part 2
3. Minimum fee; Originally the minimum fee was 1/5th the fee required to make the reference transaction, and in your specification it is instead equal to the reference tx fee. There are two sides to this change. It ensures the block size will automatically rise merely from an abundance of transactions, rather than a systemic change in how tx are constructed (e.g. by all/most participants intentionally choosing a higher-than-minimum fee). It also increases the cost of maintaining a spam attack by 5x. However, it sort of disallows users from choosing 'very low priority' for their transactions.

Currently using the minimum fee is like saying "if the blocks get full, I don't care if my tx doesn't get mined; my priority is a tiny fee". I feel uneasy about increasing the cost to users in the name of denying their motives. To be clear, saying tx fees must be high enough to push up block weights is a rejection of users who don't care about pushing up block weights. It's not even a subsidy to high-priority transactions which will get mined either way, so the benefits are really unclear to me.

  4. Minimum fee again; The fee median is currently MF = min(MN, ML). You recommend changing MN (cumulative median) so it won't go lower than ML/4. In a collapse scenario, therefore, the minimum fee can be increased by up to 4x (from ML -> ML/4, and the minimum fee scales with 1/MF). Actually, it could be increased by up to 8x if the long term median suddenly drops by half at the same time (a much more unlikely situation, but still a real one).

This seems to only partially address the original problem of transactions getting stranded in a volume collapse situation. After all, transactions with the minimum fee will still get stranded. Letting minimum fee transactions get stranded means using a minimum fee is like signing up to the idea that 'if lots of blocks are empty, then our transaction won't get mined'! A complete reversal of the expectation of minimum fee users, who are looking for empty blocks to put their transactions in.

Taking for granted the updates already recommended, what could we do to fix the issue of min fee tx getting stranded?

Attempt 1: Fix the fee median on the long term median.

MF = max(ZM, ML)

Benefit: No matter what the short term median does, the minimum fee will remain consistent.
Note: When the short term median rises, all fee levels will effectively obtain a higher priority, which is safe for users.
Problem: When the cumulative median falls, the cost for a reference transaction will increase. However, the min fee, which was calibrated for the steady state when MN == ML, won't also increase. This means fees will effectively obtain a lower than default priority just from the cumulative median falling. Falling priority is a potential death trap that fee design has always aimed to avoid.

Attempt 2: Fix the fee median on the lower bound long term median.

MF = max(ZM, ML/4)

Benefit: No matter what the short term median does, the minimum fee will remain consistent.
Note: When the short term median rises, all fee levels will effectively obtain a higher priority, which is safe for users.
Problem: Fees and priority will be calibrated for a situation that is likely to never happen, which just doesn't make sense since the 'default' should be the 'typical'.

Callback: To get a clean solution for the minimum fee I originally recommended splitting up the penalty and penalty free zones, and to use a long term average instead of long term median.

Split up the zones, fix the fee to the long term median (or average):

CM_free (current CM) = cumulative median for penalty free zone = min(100block_median, 50*ML)
CM_penalty = cumulative median for penalty zone = max(CM_free, ML)
MF (fee median) = max(ZM, ML)

Use a long term average instead of long term median. That way ML has no way to abruptly change. To limit growth/decay of the long term average to +2x, -.5x over 50k blocks, change the long term block weight and average like this:

LBW (long term block weight) = max(min(BW, 1.45*ML), 0.65*ML)
EAL (effective long term average) = max(ZM, 50000_average{LBW})

A long term average controlled like this can only drop 0.5% over the course of a day, or 4% over the course of a week, or 16% over a month.

  5. Wallet fees: no comment here, wallet fees depend on the minimum fee and penalty calculations.
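Going back to the long term average idea above, here is a minimal C++ sketch of how the clamped average could be tracked (illustrative names, not actual Monero code):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>

// Each new block weight is clamped to [0.65*ML, 1.45*ML] before entering a
// 50000-block rolling average, so the effective long term average cannot move
// faster than roughly +/- 0.5% per day.
struct LongTermAverage
{
    static constexpr uint64_t ZM = 300000;    // default penalty free zone, bytes
    static constexpr std::size_t WINDOW = 50000;

    std::deque<uint64_t> lbw;  // clamped long term block weights
    uint64_t sum = 0;

    uint64_t effective() const  // EAL = max(ZM, 50000_average{LBW})
    {
        return lbw.empty() ? ZM : std::max<uint64_t>(ZM, sum / lbw.size());
    }

    void add_block(uint64_t block_weight)
    {
        const uint64_t ML = effective();
        // LBW = max(min(BW, 1.45*ML), 0.65*ML)
        const uint64_t clamped = std::max(std::min(block_weight, ML * 145 / 100), ML * 65 / 100);
        lbw.push_back(clamped);
        sum += clamped;
        if (lbw.size() > WINDOW)
        {
            sum -= lbw.front();
            lbw.pop_front();
        }
    }
};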

@ArticMine

Hi @UkoeHB, thanks so much for your comments and feedback.

If the former is true, then I agree it should limit long term median change to [+2x, -0.5x] over 50k block intervals. It's an important innovation that lessens the chances of the long term median getting trapped.

It is the former, but with ZM also as a floor since ML/2 can fall below ZM: LBW = max(min(BW, 2*ML), ML/2, ZM)

Or should it be more like this: MN = max(ZM, ML/4, min(100_BW_median, 50*ML))

Yes. In summary, ML is calculated recursively in terms of ML, and MN = MS is not. The calculation for MS is performed using the final recursive value for ML. Clarifying this is very important.

I am tentatively on board with this. My only concern is it makes room for some transaction spam in a collapse scenario. If volume goes below ML/4 then you effectively have a higher 'default penalty free zone' that attackers can spam in. That's the main reason I suggested splitting up the zones, allowing the penalty free zone to more fluidly fluctuate. It would force attackers to buy space beyond the natural transaction volume.

Splitting the penalty zone into a zone where MS < ML and a zone where MS is between ML and 50*ML is actually very insightful, since ML is a critical point. I am going to analyze further the spam risk of the case ZE = ML (ML becomes a floor for MS). This covers both your suggestion of a different penalty formula for the below-ML zone, and my previous proposal. The other aspect I will be considering is the complexity of the solution. A simple solution that does the job is by far preferable to a more complex one. If the spam risk for ZE = ML can be mitigated then I will be modifying my previous proposal to ZE = ML.

Minimum fee; Originally the minimum fee was 1/5th the fee required to make the reference transaction, and in your specification it is instead equal to the reference tx fee. There are two sides to this change. It ensures the block size will automatically rise merely from an abundance of transactions, rather than a systemic change in how tx are constructed (e.g. by all/most participants intentionally choosing a higher-than-minimum fee). It also increases the cost of maintaining a spam attack by 5x. However, it sort of disallows users from choosing 'very low priority' for their transactions.

This is a valid point. One thing to keep in mind is that in the current implementation, once MS is over 5x ZM, the minimum fee also does make the reference transaction, but at a much reduced rate of growth. I did also look at the idea of having the minimum fee fall with MF^2 as opposed to MF. I did find significant risk with the MF^2 approach, not only with spam but also with the minimum fee falling below the typical variance in MS. This being said, the minimum node relay fee does have a very significant impact on the spam risk, and consequently will need to be considered as part of the spam risk analysis.

@ArticMine commented Jan 27, 2021

The proposed specification Document is https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021.pdf

Summary of the Proposal

  1. Consensus Changes:

a) Increase the rate of growth for the long term median from 1.4x to 2x.
b) Limit the rate of decay for the long term median to 0.5x
c) Set the penalty free zone to the long term median making it dynamic. This effectively places a floor under the short term median

  2. Minimum Fee For Node Relay Changes and Wallet Fees

a) Base the minimum fee for node relay on the actual reference transaction size and add the safety margin of 5% over the Jolly Mort simulation to the wallet fees. Scale the minimum fee with the inverse square of the long term median.
b) Scale the low and normal fees with the inverse square of the long term median, and the medium and high fees with the inverse of the long term median. All fees, including the minimum node relay fee, with the exception of the max penalty portion of the high fee, only depend on the long term median. The 4x factor in fees allows for a transaction that pays the normal fee to meet the minimum fee for a period at least equal to the long term median, with further extensions in time for the medium and high fees.

  3. Reference Transaction, Minimum Penalty Zone and Median Calculation After a Fork

This is included to provide an indication of the possible issues that can arise with a fork that takes place after Monero is out of the minimum penalty free zone. If the upcoming fork takes place while Monero is in the minimum penalty free zone and the ring size is below 26 I do not see any issue here.

  4. Wallet Fee Rounding

The key point here is to always round up and never down.

Discussion:

With the perspective of close to a year since this issue was raised, I will first consider the two most significant events.

a) The COVID-19 Pandemic

The COVID-19 pandemic has actually had relatively little impact on the Monero network overall. One can surmise that this is due to the preponderance of Monero economic activity being on-line as opposed to in person “bricks and mortar”, the latter being the most impacted by COVID-19. Nevertheless, it is important to recognize that had there been a significant “bricks and mortar” component to the Monero economy, the impact would have been profound and COVID-19 could have very easily triggered the problems with fees raised by this issue. It is also important to recognize that during this pandemic we saw an increase in Monero transactions from the low in February to the high in December of ~5.6x. https://bitinfocharts.com/comparison/monero-transactions.html#log. For the long term median scaling at the rate of 1.4x this would require at least 6 iterations, or 300,000 blocks. That would require a period of over 400 days at a minimum. The fact that the long term median would not have been able to keep up with this change further supports the need to increase the maximum scaling rate for the long term median from 1.4 to 2.

b) The Recent Network Attacks.

The scenario proposed in this issue involved external events triggering the drop in the transaction rate. What the recent attacks have demonstrated is that a relatively short term malicious attack against the nodes can trigger this issue and cause very substantial damage to the Monero network for a very small cost. In order to cause the short term median to fall, triggering a potentially major increase in fees, the attack only needs to be sustained for a period of 50 blocks or about 100 min. The only reason the recent attacks did not create a major increase in fees is because the Monero network was well within the penalty free zone, so a drop of 65% in network activity, which actually occurred, had no impact on fees. The second element here was the time frame of the Monero community response, which was of the order of a couple of weeks and well below the length of the long term median. Stabilizing fees to the long term median is a robust way of preventing such an attack from causing a sharp increase in fees.

The next matter I will consider is spam analysis. I will consider the following:

a) An Ongoing Big Bang or Flood XMR attack

In 2019 the long term median was introduced in Monero in order to mitigate against such an attack. The key impact here is that the attacker has to maintain the short term median at the elevated level for at least 50000 blocks, or about 69 days, in order to cause the long term median to move. Once the long term median has moved, the attacker in theory would then proceed to further increase the short term median and continue the attack. The scenario of the short term median being below the long term median, and the presence or lack thereof of a penalty in this scenario, has no impact on such an ongoing attack. The long term median deters this attack in two ways. The first and most obvious is that the attacker has to pay at least the minimum node relay fee during this 69 day period, in order to maintain the inflated short term median. The second and potentially more significant is that the community now has a significant period of time to respond to the attack. The latter in itself is a significant deterrent, since the attacker has no way of knowing in advance what the Monero community might do in response. Then there is the deterrence of simply wearing the attacker out by forcing the attacker to drag out the attack over months or even years.

b) The minimum node relay fee

The minimum node relay fee was first established in Monero in 2014. The key principle here is that the nodes will not relay a transaction with a fee that is so low that the transaction is unlikely to be mined. An attacker could otherwise launch a denial of service attack against the nodes at no cost by flooding the network with transactions with fees so low that the miners will not mine them. Since the transactions are not mined there is no cost to the attacker. This type of attack does in fact happen on the Bitcoin network, since because of the fixed blocksize it can be cost effective for a miner to flood the Bitcoin network with transactions in order to cause Bitcoin fees to rise. If one sets the fee too low, and the transactions are not economic to mine, then the above attack will still occur. If on the other hand the fee is set too high, then it can create perverse incentives where the miners will be motivated to accept transactions directly, say via a website, and in the process can capture personal information and destroy Monero's privacy. The most prudent level for the node relay fee is the minimum fee that can be economic to mine at a given long term median and still scale the Monero network. In the proposal this means that the minimum fee scales as the inverse square of the long term median.

This of course raises the critical question of the cost in real terms of the minimum fee and the amount of overall spam deterrence it provides. To understand this we need to answer the following question: if the Monero network activity increases by N and the market capitalization of Monero increases by N^y, then what is a reasonable estimate for y? Metcalfe's law, https://en.wikipedia.org/wiki/Metcalfe%27s_law, proposes y=2 for small networks and then a value approximately between 1 and 2 as networks become mature. One can get an estimate of y from historical data for various crypto currencies over a 5 year period. The figures I obtained for my talk for Greyhat in October 2020 were as follows:

For Monero (XMR), Bitcoin (BTC), Litecoin (LTC), Dogecoin (DOGE) and Dash (DASH) I used the 90 day average ending on October 27, 2015 and October 27, 2020. For Bitcoin also the period ending on October 27, 2010 to October 27, 2015. For the data source I used https://coinmetrics.io

The figures were as follows: 5 year period ending on October 27, 2020

Monero: y = 1.59
Litecoin: y = 0.89
Dash: y = 1.06
Dogecoin: y = 2.30
Bitcoin: y = 4.20

The figures were as follows: 5 year period ending on October 27, 2015

Bitcoin: y = 1.28

I would consider the 2015-2020 figure for Bitcoin to be an outlier, since that network is artificially constrained by its fixed blocksize (weight), while the other networks, and Bitcoin before 2015, were not. Choosing a different time frame will produce different results. The main take away is that values between 1 and 2 for y are reasonable over time. The spam cost will increase with N for y > 1 and remain constant for y = 1. For y < 2 the ham cost will drop with N. If over time Monero can maintain a value of y > 1, then the cost to spam the Monero network will continue to rise with N. There is of course an important safety built into the proposal in that until the Monero network reaches a long term median of 1425000 bytes, or approximately 5 tx / sec, the proposed minimum node relay fee will be higher than under the current formula. The advantage is that unlike the current formula, scaling will be possible at the minimum node relay fee. Furthermore this would avoid the situation where the low fee is too low and the normal fee too high for scaling, which would be the situation with the current formula. One must remember that fees, including the minimum node relay fee, are not consensus and can always be changed if spam was still a concern with the long term median above 1425000 bytes and in addition Monero were to under perform in the market relative to its actual use, like Litecoin did from October 2015 to October 2020.
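To make the dependence on y concrete (a back-of-the-envelope reading, assuming the minimum fee per byte scales as 1/ML^2, ML tracks network activity N, and the XMR price tracks the market capitalization N^y):

spam cost in real terms ∝ (bytes to fill) * (fee per byte) * (price) ∝ N * (1/N^2) * N^y = N^(y-1)
ham cost in real terms ∝ (fixed tx size) * (fee per byte) * (price) ∝ (1/N^2) * N^y = N^(y-2)

So the cost of sustained spam rises with N when y > 1 and stays flat at y = 1, while the per-transaction cost for ordinary users falls with N whenever y < 2.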

c) The Current Situation

Monero has for the most part of its history functioned very well deep within the penalty free zone. The key difference here is that the new dynamic penalty free zone must first be vetted by the market by increasing the long term median. This is unlike the current situation where the penalty free zone was a set parameter. This also indicates that the spam risk in the situation where the actual block weight were to fall below the long term median is lower than the historical situation in Monero.

The next item is the principle of simplicity in code.

The proposal creates a much simpler solution than the previously discussed solutions of either creating a weakened penalty or setting the dynamic penalty free zone to only 25% of the long term median. It is also much simpler than the current situation of applying the full penalty when the short term median is below the long term median. This simplification is very evident in the calculation of wallet fees, where the fee median is now equal to the long term median. A bug in the fee median that was addressed after the implementation of the long term median would now become moot. The principle is that simpler code is less prone to errors and has less of an attack surface.

I will recommend that we proceed with these changes before it is likely that Monero moves into the adaptive blockweight. This is to remove the motivation for an attacker to deliberately trigger this issue as a way of disrupting the Monero network by causing a sharp increase in fees. A sharp move upward in the price of Monero, such as what occurred in 2016, could easily on its own cause enough of an increase in activity on the Monero blockchain to push Monero into the adaptive blockweight. Other factors that could cause increased blockchain activity include a continued increase in retail adoption, and regulatory changes such as the ones proposed in the United States.

@garlicgambit

Is this issue more or less pressing than the ringsize increase and bulletproofs+ upgrade? Suppose that there will be no ringsize increase and no bulletproofs+ upgrade for whatever reason. Would this issue on itself warrant a hardfork within a 12 month period?

@SamsungGalaxyPlayer

@ArticMine the increase of the cap from 1.4 to 2 seems relatively arbitrary to me.

From the start of 2021 to today (according to CoinMetrics), Monero has about 6% the number of transactions as Bitcoin. This means that with a value of 2, Monero could scale to have twice as many transactions as Bitcoin in one year, and continue the growth rate at 32x annually after that. That seems excessively accommodating to me.

At 1.4, we allow for a 5x increase per year, which would carry Monero from its current ratio with Bitcoin to about 1/3.

I'm okay increasing this number up to 1.7. This gets us roughly to Bitcoin's transaction count in one year if there is enough demand for it, with the capacity to keep scaling if necessary at a maximum rate of a still-astonishing 14x/year, which already seems unsustainable to me. If we set this threshold this high, my understanding is that we would need to adjust it downward again with consensus if Monero grows to be a large network.

I also want to note that hitting this limit will not cause the Monero network to "break." It means that block size will be at a premium, and fees will be higher at that point in time until the huge max growth rate we have allowed for catches up. I think that is okay.

@UkoeHB commented Mar 15, 2021

@ArticMine thank you for your work

Re: Proposed Scaling Definitions (January 2021)

EDITED

  1. This line threw me off quite a bit: 'M_L; Penalty free zone. This is now dynamic.' It looks like in fact the penalty free zone and penalty zone are the same, which seems acceptable to me.

However, I still believe a long term average instead of median would be better. If a shock event causes transaction volume to fall, then 69 days later the long term median will abruptly plummet. Tx made with low fees before that drop-off will get stranded. It means tx authors have to be aware that minimum tx fees will rise, which is sub-optimal from a UX perspective (compared to tx authors being more-or-less oblivious to the state of the network).

  2. Minimum fee changes look fine to me.

  3. It is not clear what the 10 block buffer for wallet fees is for.

  4. I recommend a default policy of two significant digits for fee rounding.

Overall I believe the proposed changes resolve the concerns raised by this issue.

@ArticMine commented May 25, 2021

First many thanks to @UkoeHB for your work in reviewing this proposal. Also many thanks to all those who commented and made suggestions.

I updated the definition to clarify the calculation of the wallet fees. There is no need for a second recursion calculation for the wallet fee medians. https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021-02.pdf

The 10 block buffer is designed to address a change in fees between the time a transaction intended for immediate broadcast is assembled, broadcast, and relayed.

With respect to the cap I considered 1.6 / 0.625, 2 / 0.5 and 2.5 / 0.4. I considered transaction rate changes not only in Monero but also Bitcoin (before the limit was reached), Litecoin, Dogecoin, Dash, and Bitcoin Cash. Here is a chart for Monero, Bitcoin and Litecoin: https://bitinfocharts.com/comparison/transactions-btc-ltc-xmr.html#log. Increases of 8x over a six month period, and even 80x over a 1 year period, have occurred in the past and can occur in the future. This is especially the case if there is a disruption in a competing coin, as occurred in 2017 with Bitcoin reaching a hard limit, triggering an 80x increase in Litecoin transactions. Setting this cap below 2 does run the real risk of triggering this issue if one allows the short term median to remain above the long term median for many months. On the other hand, increasing the limit to say 2.5 / 0.4 would lead to a much larger spread in fees. There is also little evidence for uncontrolled growth over multiple years, even in chains with no penalty to increase the blocksize below the hard cap, such as Litecoin. One can take a look at Litecoin transactions in 2018 or Monero transactions in early 2017, for example. Also it is best to use figures where the reciprocal can be expressed as an exact decimal: 2 / 0.5 vs say 1.7 / 0.58823529... .

@UkoeHB commented Jan 29, 2022

At the dev meeting today, @carrington1859 brought up his concern that @SamsungGalaxyPlayer 's comment has not been adequately addressed. I also have some concerns about scaling.

To summarize:

  1. Spam cost: If the dynamic block size can expand at a fast rate, then it could theoretically balloon to massive levels due to a spam attack. Higher rates of expansion mean lower cost to the spammer (and more rapid results). @ArticMine 's fee changes would increase the maximum rate of long-term-median growth from 1.4x over 69 days to 2x, which is a maximal change of 5x vs 32x over 1 year (see the arithmetic sketch after this list). Note that the short-term median can rise to 50x over the long-term median, which means the absolute upper limit on block size after 1 year of maximal long-term growth would be 250x vs 1600x.
    a. If we start at 300kB blocks, this means 1.5MB vs 9.6MB long-term median-sized blocks (1k vs 6.5k tx per block [for ~1.4kB 2-in/2-out tx], or 8.3 vs 54.2 tx/sec), and 75MB vs 480MB maximal short-term-median blocks (50k vs 325k tx per block, or 415 vs 2710 tx/sec).
    b. TODO: what is the actual cost of a spam attack with these numbers?
  2. Network stability: Nodes are resource-constrained, they have bandwidth limits (kB/sec) and CPU-time limits (tx verified per second).
    a. If a node can't download or verify new blocks as fast as they appear, then they will fall behind the network. Users would only be able to rely on fast nodes in order to get a fresh view of the blockchain.
    b. A related problem is block-verification-times will start to factor into mining profitability. It would give mining pools a significant advantage over solo-miners.
    c. If a node wants to re-validate the blockchain, but their computer isn't able to catch up to the top of the chain within a reasonable time-span (because the chain tip is moving too fast relative to their verification speed), then it is basically infeasible to re-validate the chain for many users. That point will be reached eventually regardless of chain tip speed, because new blocks always increase the static cost of re-validation. However, a fast moving chain tip makes the problem even worse (and makes it occur far sooner).
    d. According to my performance research into Seraphis, one thread on a high-end CPU can verify about 167 CLSAG 2-in/2-out 11-ring-size txs per second. A mid-range CPU with a busy thread scheduler can probably do half that, at best. Moving to 16-ring-size CLSAG or 128-ring-size Seraphis would further reduce that number by 30-50%.
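For reference, the compounding behind the 5x vs 32x figures in item 1, assuming roughly five 50000-block long-term-median updates per year:

1.4^5 ≈ 5.4, so 300kB * ~5 ≈ 1.5MB long term median, or 75MB with the 50x short-term cap
2^5 = 32, so 300kB * 32 = 9.6MB long term median, or 480MB with the 50x short-term cap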

@UkoeHB commented Feb 2, 2022

@ArticMine, his table has these lines:

Time to verify a block single-thread, batch size of 25 (s) | 336s | 931s
Time to verify a block 8 threads, batch size of 25 (s) | 42.0 | 116.4

Pretty sure these times are 'avg time per block if blocks are batch-verified'.

@ArticMine

@UkoeHB

So a factor of 8. This is at the very high end of my estimate.

@j-berman commented Feb 2, 2022

59,093 transactions per block
5.7ms to verify a transaction inside batches of 25 txs/batch
Time to verify a block single threaded = 59,093 txs * (5.7ms / 1000) = 336s
Time to verify a block 8 threads = 336s / 8 threads = 42s

@SamsungGalaxyPlayer commented Feb 2, 2022

@ArticMine I don't really see a 14x max annual increase as a compromise; it's still quite aggressive.

I'm okay with it because that approaches bitcoin's use (last I checked) in a short time period, but I honestly don't want to open the flood gates that far. Even 1.4 allows for >5x max increase. However, we can do a hard fork and adjust it down later if needed of course.

If I could waive a magic wand and implement something, it would allow for scaling up to about Bitcoin's transaction throughput quite easily, and then capping at something like a 25-50% growth rate per year following that. Not 5-30x.

@moneromooo-monero

If I could waive a magic wand and implement something

Well, you waived that opportunity...

I'll get my coat...

@paulshapiro

How does the conversation change if we use the space to increase fees to improve fungibility for non-2-count outputs by padding to 16 outs (and ins?), similarly to bulletproof construction already padding its representation of outs? There've been a couple research issues beginning to discuss this.

@SamsungGalaxyPlayer

@paulshapiro I recommend leaving that component aside for now, since there is no meaningful push to take on that much inefficiency for output or input homogeneity.

@paulshapiro

No meaningful push says who? That's bonkers.

@j-berman commented Feb 2, 2022

@SamsungGalaxyPlayer to be clear, the max annual increase for 1.7 is 813x, the max annual increase for 1.4 is 293x

1.7:
300kB start
244MB end
244MB / 0.3MB = 813x

1.4:
300kB start
88MB end
88MB / 0.3MB = 293x

@SamsungGalaxyPlayer

@j-berman I think we were comparing different numbers, but I take your point that those allowances are extraordinarily generous. We need to be willing to eat some inefficiencies to allow transaction throughput to at least equal Bitcoin in my opinion, but beyond that, we can afford to be more conservative (though less conservative than Bitcoin). Bitcoin has proven in my view that it can work, though non-ideal of course.

It's completely infeasible to try and avoid a fee market however. Scaling the block size up should be thought of as a downward pressure on fees, not as a means of keeping fees near-zero.

@UkoeHB commented Feb 2, 2022

How does the conversation change if we use the space to increase fees to improve fungibility for non-2-count outputs by padding to 16 outs (and ins?), similarly to bulletproof construction already padding its representation of outs? There've been a couple research issues beginning to discuss this.

Increasing the average cost of txs by 8x will reduce the max tx throughput of the network to 1/8.

@ArticMine commented Feb 2, 2022

We can easily have a fee market simply by having nodes prioritize transaction relay based on the fee paid. Furthermore this does not require a consensus change. Many internet connections have a factor of 30x between downstream and upstream speeds, especially lower end cable and DSL. When one considers that Monero nodes need 12x the upstream speed to relay transactions to say 12 other nodes, there is an effective 360x difference between the downstream and upstream speeds.

When we talk verification we need to seriously look at parallel processing, not only on traditional GPUs (on a PCI-e or AGP slot), but also on integrated graphics, graphics chips, system on a chip etc. This covers not only desktop computers but also laptops and mobile devices. Again this does not require consensus.

As for a long term 50% growth rate, this is basically Nielsen's Law of Internet Bandwidth, and it makes a lot of sense. Now when we come to using Bitcoin as the starting point, that is another matter. When the Bitcoin genesis block was mined in 2009, bandwidth was 200x slower. So 7 tps then is equivalent to 1400 tps today. We should not be baking over 12 years of Bitcoin inaction and conflict over scaling into the Monero blockchain.

@UkoeHB commented Feb 2, 2022

We can easily have a fee market simply by having nodes prioritize transaction relay based on the fee paid. Furthermore this does not require a consensus change. Many internet connections have a factor of 30x between downstream and upstream speeds, especially lower end cable and DSL. When one considers that Monero nodes need 12x the upstream speed to relay transactions to say 12 other nodes there is an effective 360x difference between the downstream and upstream speeds.

In practice, won't users just make direct connections with pools, and won't pools make direct connections with each other? Rather than a fee market, you will just have a more centralized tx distribution.

@j-berman commented Feb 2, 2022

When we talk verification we need to seriously look at parallel processing not only on traditional GPUs (on a PCI-e or AGP slot), but also on integrated graphics, graphics chips, system on a chip etc. This covers not only desktops computers but also laptops and mobile devices. Again this does not require consensus.

Here is a GPU-optimized library for curve25519 that explicitly says:

Intel GPUs are not supported, as in most cases running the code on the integrated GPU is no faster than running it on the CPU.

I think the default assumption in the decision framework should be that this is not possible (edit: until proven otherwise), not that it is.

@ArticMine

@UkoeHB

In practice, won't users just make direct connections with pools, and won't pools make direct connections with each other? Rather than a fee market, you will just have a more centralized tx distribution.

The same argument can be made for node relay fees. This is a danger if the restriction is not supported by the reality of bandwidth and actual real demand, i.e. if the node relay fees are set too high.

In some of the scenarios that have been proposed, the miners would also be under stress, and furthermore would have little or no financial incentive to go to the trouble of circumventing the nodes.

@j-berman

I think the default assumption in the decision framework should be that this is not possible (edit: until proven otherwise), not that it is.

So it does not work on a particular brand of GPUs, but it works on other brands. I see the relevant question here as: why does it not work on Intel? I cannot see why this justifies the assumption that it is not possible in general, rather than that it is.

@j-berman commented Feb 2, 2022

From @hyc:

that's been my experience with iGPUs in general. the fact that they're on the same main memory bus as the CPU certainly puts an upper bound on their perf, anyway

@UkoeHB commented Feb 3, 2022

The same argument can be made for node relay fees.

  1. minimum node relay fees are set algorithmically
  2. minimum node relay fees are minuscule (fractions of a penny); if they were higher (e.g. multiple dollars per tx), then nodes/pools might have a financial incentive to chop out or reduce the relay algorithm

If pools see that a non-trivial volume of txs (and tx fees) are being lost due to slow nodes in the network, then they will have a clear incentive to offer/encourage users to directly upload their txs to them.

@ArticMine commented Feb 3, 2022

Here is an article, by Peter Rizun, on a fee market caused by orphan blocks in a stressed network: A Transaction Fee Market Exists Without a Block Size Limit, https://www.bitcoinunlimited.info/resources/feemarket.pdf

I would pay close attention to the result on page 12:

Not unexpectedly, we showed that the cost of block space was proportional to both Bitcoin’s inflation rate, 𝑅/𝑇, and the amount of time it takes per uncompressed megabyte to propagate block solutions to the other miners, 1/𝛾𝐶.

In my view this works in Monero, but not in Bitcoin or Bitcoin Cash because of Monero's tail emission. One can think of the Rizun term as an additional term added to the penalty in Monero.

Edit 1: The Rizun term would only come into play if the blocksize has grown so large as to impact the probability of orphan blocks

Edit 2:

If pools see that a non-trivial volume of txs (and tx fees) are being lost due to slow nodes in the network, then they will have a clear incentive to offer/encourage users to directly upload their txs to them.

If the node relay fee is significantly above the real fee market (Monero penalty plus Rizun term) then I do agree with the above assessment. My point is that if the network is stressed to the point that a significant number of nodes cannot upload all the transactions, then there would be an impact on orphan blocks, and the Rizun term would be triggered. There would be little incentive for miners to circumvent the nodes, and instead the miners could well demand additional fees. One situation where the above assessment would apply is if the ML is artificially suppressed, causing fees to rise, which is one of the reasons why I have argued for 2 over 1.7 and even more over 1.4.

@j-berman commented Feb 3, 2022

My point is that if the network is stressed to the point that a significant number of nodes cannot upload all the transactions, then there would be an impact on orphan blocks, and the Rizun term would be triggered. There would be little incentive for miners to circumvent the nodes, and instead the miners could well demand additional fees. One situation where the above assessment would apply is if the ML is artificially suppressed, causing fees to rise, which is one of the reasons why I have argued for 2 over 1.7 and even more over 1.4.

Allowing the block size to grow more instead of less incentivizes miners not to actually increase the block size, because if miners increase the block size, it would cause a higher orphan rate, and therefore lost revenue to miners? I don't follow. I understand the point the paper is making (although it explicitly says on page 13 how it ignores the incentive created for well-connected miners), and I understand the penalty fee and its function to disincentivize increasing the block size, but the point here isn't clicking for me.

Additionally, as I understand the formula for the penalty fee, doesn't allowing the block size to grow more (i.e. allow for a higher Mn) lead to a smaller penalty (higher Mn == lower B == lower PB)? In other words, it seems like allowing for a higher ML directly incentivizes larger blocks, so I'm not understanding the relation of how allowing block sizes to grow more would disincentivize pools from circumventing nodes in the circumstance @UkoeHB described.

EDIT: realized I mixed up variables

@moneromooo-monero commented Feb 3, 2022

That's a good point, and it makes me think that, as it is with 1.7, the penalty can't ever reach 100% of the block reward, which is still reached at twice the median. That was probably unintended.

nvm, I just confused two things
