This repository has been archived by the owner on Oct 9, 2019. It is now read-only.

2M base block size increase #11

Merged
merged 7 commits into from
Jun 16, 2017

Conversation

jgarzik

@jgarzik jgarzik commented Jun 5, 2017

BIP 91 w/ bit-4 activation, also triggering a base block size increase (BIP 102).

See NOTES.segwit2x for caveats and WIP.

@opetruzel

Nice work, Jeff! One question after a quick scan through the changes:

Was leaving MAX_BLOCK_WEIGHT = 4000000 in consensus.h intentional? Shouldn't that change to 8000000 when the 2MB hardfork activates? If I missed it elsewhere in the code, please slap me accordingly.

@kek-coin

kek-coin commented Jun 5, 2017

Due to the planned "signalling start" on July 21 (which assumes no delays in the schedule), I would argue for expedited activation logic for BIP91; @jameshilliard suggested a 100-block rolling IsSuperMajority()-style activation lock-in. Otherwise, with 10 days left before the BIP148 deadline, activating BIP91 in time to avoid a chainsplit seems unlikely.

@juscamarena

@opetruzel seems it's in his notes: 'witness scale factor' TBD

@opetruzel

@juscamarena. That makes sense then, thank you!

}

/** The maximum allowed number of transactions per block */
static const unsigned int MAX_BLOCK_VTX_SIZE = 1000000;


I do not think MAX_BLOCK_VTX_SIZE is needed? Comment is the same as line 62.


@jgarzik jgarzik Jun 5, 2017


Correct, MAX_BLOCK_VTX_SIZE is dead code and can be removed (or updated then vice versa)

@opetruzel

opetruzel commented Jun 5, 2017

@AllanDoensen. Standard BIP141 with the current 1MB Base Size uses a fixed 3:1 ratio for MAX_BLOCK_WEIGHT.

So, with just BIP141 and 1MB base size, we get 1000000 bytes for Base Size and 3000000 bytes for witness data, Max Weight = 4000000 bytes.

With an increase of the Base Size to 2MB, while maintaining the same SegWit ratio/discount, we'd get 2000000 bytes for Base Size and 6000000 bytes for witness data, Max Weight = 8000000 bytes.

In testing, we saw that standard BIP141 SegWit with 1MB Base Size results in ~2-2.1MB blocks during optimal usage. Using the numbers I described above for the 2MB base size, the expected results would become ~4-4.2MB blocks during optimal usage -- which is what I believe most in the community expect from the "SW+2MB" compromise we've been asking for all this time.
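The arithmetic in this comment can be sketched with a small helper (a hypothetical function, not the actual consensus code): under BIP141, base bytes count 4x toward weight and witness bytes 1x, so the weight cap is the base-size cap times the scale factor.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the BIP141 weight arithmetic described above (hypothetical
// helper, not the actual consensus code). Each base byte counts 4 toward
// weight and each witness byte 1, so the weight cap is the base-size cap
// times the scale factor.
static const int64_t WITNESS_SCALE_FACTOR = 4;

int64_t MaxBlockWeightFor(int64_t maxBaseSize)
{
    // 1 MB base -> 4,000,000 weight; 2 MB base -> 8,000,000 weight.
    return maxBaseSize * WITNESS_SCALE_FACTOR;
}
```

With a 1MB base this reproduces BIP141's 4,000,000 weight cap, and with the 2MB base of this PR it yields the 8,000,000 figure discussed above.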

@kek-coin

kek-coin commented Jun 5, 2017

Agree, messing with the segwit weight ratio seems like a massive bikeshed point, best to avoid it.

@maxweisspoker

+1 to making sure the block weight is 8MB. Segwit2x is supposed to provide double the on-chain capacity that segwit alone would have. Don't fall for Luke's bullshit redefining of terms. "2x" means 2MB base size and 8MB weight. That's what it's always meant, that's what everybody understands it to mean, and that's what the New York agreement was understood to mean.

Jeff, thank you for putting in so much effort to get this out so quickly. We all very much appreciate your efforts.

@hmsln

hmsln commented Jun 5, 2017

Due to the planned "signalling start" on July 21 (which assumes no delays in the schedule), I would argue for expedited activation logic for BIP91; @jameshilliard suggested a 100-block rolling IsSuperMajority()-style activation lock-in. Otherwise, with 10 days left before the BIP148 deadline, activating BIP91 in time to avoid a chainsplit seems unlikely.

I absolutely agree with this; it is of paramount importance that the final implementation of the Segwit2x compromise be given enough time to activate before the scheduled UASF, if it has the support of the signatories.

If more than 80% of hashrate were to signal in favour of the Segwit2x compromise, only to have a minority-hashrate UASF take place and split the network a few days before the end of the signalling period, the ensuing political mayhem would make rebuilding consensus extremely difficult afterwards.

@kek-coin

kek-coin commented Jun 5, 2017

@maxweisspoker under the btc1 principle of assume good faith I will assume you are simply misinformed and not maliciously trying to strawman Luke to give him a bad name. Luke's 2MB proposal seems to have been in protest against the misleading name of the original Segwit2MB proposal, which the current Segwit2X is based on, as well as against the terms of the DCG agreement.

  • Activate Segregated Witness at an 80% threshold, signaling at bit 4
  • Activate a 2 MB hard fork within six months

@luke-jr

luke-jr commented Jun 5, 2017

Indeed, it seems trying to push for 8 MB blocks is a clear violation of the DCG agreement.

@tomasvdw

tomasvdw commented Jun 5, 2017

If I read it correctly, the parameters for activation of the HF are:

  • If SegWit2X activates before ~sep 20, activate HF on ~sep 20.
  • Otherwise, activate directly when SegWit2X activates.

Is this correct and definitive?

@hoffmabc

hoffmabc commented Jun 5, 2017

I think the word you're looking for @kek-coin is trolling. Playing semantics with the wording 2MB is not going to change the reality. It is my understanding that 2MB refers to base block size.

@drunkenclam

drunkenclam commented Jun 5, 2017

The 3:1 Ratio for the witnesses (=75% discount) is based on a 1MB base blocksize. With a new 2MB base blocksize after the HF, the witness ratio should be reduced to 2:1 (50% discount).

That would give us ~3MB blocks and a maxweight of 6MB.
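The ratio arithmetic here generalizes: with an N:1 witness-to-base ratio, the weight cap is base x (1 + N). A minimal sketch (hypothetical helper name, purely for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper illustrating the ratio arithmetic above: an N:1
// witness ratio on top of the base size gives a weight cap of base * (1 + N).
int64_t WeightCapForRatio(int64_t baseSize, int64_t witnessRatio)
{
    return baseSize * (1 + witnessRatio);
}
```

A 2MB base with a 2:1 ratio gives the 6,000,000 weight this comment proposes; keeping BIP141's 3:1 ratio gives 8,000,000.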

@kek-coin

kek-coin commented Jun 5, 2017

@hoffmabc please abide by the principles as set out here. Accusing someone of trolling does not strike me as particularly friendly, patient, welcoming, professional or in assumption of good faith. I am not sure if you would like to be treated that way, but let's try not to turn this PR into another battle of rhetoric, it is counter-productive.

@hoffmabc

hoffmabc commented Jun 5, 2017

Based on behavior elsewhere it is not assumed but known. If you or Luke-jr really care to have reasonable discourse (I'll assume you do) then I think not instigating semantic battles is a good start. If you truly think that the point of this was to deliver what segwit already does with no base block size increase and that the folks here are clueless that segwit provides an overall increase please just state it so we can move on from that incorrect assumption.

I really don't think the witness data discount should be decreased personally.

@magmahindenburg

The excessive SegWit discount at 75% has been one of the things SegWit critics have complained about. Reducing or removing the discount would satisfy many of the SegWit critics.

@kek-coin

kek-coin commented Jun 5, 2017

@hoffmabc I'll point out that neither me nor Luke brought this topic up, people brought their grudges here of their own accord. All I've stated is that I personally don't see a reason to change the blockweight metric as it has a clear and defensible rationale behind it, so logically it follows that this translates into a maximum block weight of 8MWU given a 2X increase over BIP141. Just to clarify; I assume you were not intentionally strawmanning my opinion.

If we are going to go beyond the "minimal changes to satisfy the DCG agreement" then I think bikeshedding the blockweight metric is a far less interesting use of our time than other improvements such as Spoonnet, perhaps make the difficulty retarget logic a moving average rather than a fixed window, etc. etc.

@drunkenclam

@magmahindenburg
Higher witness discount = bigger blocks
So you are in favour of 2MB blocks (with 0% witness discount) instead of 6-8MB blocks (with 50-75% discount)? Oh the irony... :)

@hsjoberg

hsjoberg commented Jun 5, 2017

Very good, thanks Jeff!

inline int64_t MaxBlockSigOpsCost(int nHeight, bool fSegWitActive)
{
    if (!fSegWitActive)
        return (MAX_BLOCK_BASE_SIGOPS * 4 /* WITNESS_SCALE_FACTOR */);

Why take WITNESS_SCALE_FACTOR into account when segwit is not active?


The current code base already uses WEIGHT instead of SIZE to verify the block size. It is effectively a MAX_SIZE=1mb as long as there is no witness data.


This is about SIGOPS.


@jgarzik jgarzik Jun 5, 2017


The code elsewhere looks for "cost", which carries a built-in scale factor.

The code that calls MaxBlockSigOpsCost() applies a divisor, if it's pre-SegWit.

I agree the code is not very clear; my modifications are in line with the current SegWit code behavior: store scaled cost, then apply a divisor to obtain non-SegWit sigops.
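The cost/divisor convention described here can be sketched as follows (the constant values are assumptions for illustration, matching the widely known 20,000 legacy sigop limit and 4x scale factor; the function name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Sketch (assumed constants) of the convention described above: sigop
// limits are stored as scaled "cost", and pre-SegWit callers apply a
// divisor to recover the legacy sigop count.
static const int64_t WITNESS_SCALE_FACTOR = 4;
static const int64_t MAX_BLOCK_BASE_SIGOPS = 20000; // illustrative value

int64_t MaxBlockSigOpsCostSketch()
{
    // Cost carries the scale factor built in, SegWit-active or not.
    return MAX_BLOCK_BASE_SIGOPS * WITNESS_SCALE_FACTOR;
}

int64_t LegacySigOpsLimitSketch()
{
    // A pre-SegWit caller divides the cost back down to plain sigops.
    return MaxBlockSigOpsCostSketch() / WITNESS_SCALE_FACTOR;
}
```

The point is that both code paths work in one unit (cost), with the divisor applied only at legacy call sites.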

@kek-coin

kek-coin commented Jun 5, 2017

@jtimon where is this bug located? From the MaxBlockBaseSize function definition this does not seem to be the intention.

    if (!fSegWitActive)
        return MAX_LEGACY_BLOCK_SIZE;

@jtimon

jtimon commented Jun 5, 2017

Is BIP102_FORK_MIN_HEIGHT for the hf activation something temporary?
At a minimum I think it should move to Consensus::Params, but as said I would recommend deployment using bip8 (plus hf bit) instead.

@kek-coin

kek-coin commented Jun 5, 2017

Seems like the code is written quite sloppily, but to my reading the blocksize-part of the HF is based on the condition

fSegWitActive && nHeight >= BIP102_FORK_MIN_HEIGHT
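The two-part gate quoted above can be sketched as a predicate (the fork height constant below is illustrative only; the real value is set in the PR, not here):

```cpp
#include <cassert>

// Sketch of the two-part gate described above: the 2 MB rules apply only
// when SegWit is active AND the BIP102 fork height has been reached.
static const int BIP102_FORK_MIN_HEIGHT = 485218; // illustrative value only

bool BIP102Active(bool fSegWitActive, int nHeight)
{
    return fSegWitActive && nHeight >= BIP102_FORK_MIN_HEIGHT;
}
```

Neither condition alone suffices: SegWit activation without reaching the height, or the height without SegWit, leaves the old limits in force.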

@@ -1851,6 +1851,18 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin
flags |= SCRIPT_VERIFY_NULLDUMMY;
}

// SEGWIT2X signalling.
if ( VersionBitsState(pindex->pprev, chainparams.GetConsensus(), Consensus::DEPLOYMENT_SEGWIT2X, versionbitscache) == THRESHOLD_ACTIVE &&

Could this safely be changed to VersionBitsState(...) == THRESHOLD_LOCKED_IN || VersionBitsState(...) == THRESHOLD_ACTIVE rather than just VersionBitsState(...) == THRESHOLD_ACTIVE to speed things up a bit? As far as I can tell, keeping this as is would delay BIP141 enforcement by an additional full difficulty adjustment period, and I'm not sure what the benefit would be.


@tomasvdw tomasvdw Jun 5, 2017


You could be orphaning the blocks of 20% honest miners that did not update within 2 weeks. That is rather steep IMO.

It makes sense to have a period between when 80% is upgraded until activation to allow for the other 20% to join.


True, it would mean miners would have to make sure to upgrade before THRESHOLD_LOCKED_IN rather than be given an additional 14 day grace period. However, it would also increase the chance of being compatible with BIP148 nodes by about 14 days, and would potentially increase community support by about that fraction of people. I'm not sure if it's worth the risk, but it is worth considering.


Activation before August 1 is already infeasible and not part of the agreement.

Removing the LOCKED_IN sets a terrible precedent as every update is voluntary. You would completely remove the upgrade window for miners who disagree but don't want to lose money by not signalling until LOCKED_IN.


The BIP9 signaling nStartTime as per this PR is June 1st (1496275200), so unless this is changed to match the arbitrary date in the agreement, it would still be feasible to lock in SegWit2x prior to August 1st. Your other point is fair enough, however.


Thanks. Not the end of the world, just seems like a waste of two weeks of signaling time. What would be the downside of signaling bit 1 the moment bit 4 is locked in? Is it too complex a change in the code?


@jacob-eliosoff The purpose of the additional 2-week LOCKED_IN period specified by BIP9 is that it allows miners who disagree with the upgrade to upgrade once it is locked in.

If I disagree with this upgrade, I don't want to signal, even if it is as simple as setting a flag in my stratum servers, because I don't want my bit to be counted towards the 80%.

I might want to change my mind when it turns out 80% does, in which case I don't want to be orphaned. To achieve this, a LOCKED_IN period is required, and 2 weeks is reasonable.


@jacob-eliosoff jacob-eliosoff Jun 8, 2017


@Saicere Yes, good point, miners could manually flip on bit 1 as soon as they see bit 4 locked in. Just figured making that automatic might get BIP141 locked in one difficulty adjustment sooner. As I understand the current code we could be looking at 6 weeks from bit 4 reaching 80% to segwit activation, which seems unnecessary.

Edit: maybe even 8 weeks! (T0 bit 4 reaches 80%; T0+2w bit 4 locks in; T0+4w bit 4 activates & bit 1 signaling starts; T0+6w bit 1 locks in; T0+8w segwit activates)


@tomasvdw My point is that locked-in period makes perfect sense when "activation" means a feature (like segwit). It makes less sense if BIP91 "activation" just means starting signaling for BIP141 activation. This is why I was advocating separating the start-signaling date from the start-orphaning date.


@Saicere Saicere Jun 9, 2017


I might want to change my mind when it turns out 80% does, in which case I don't want to be orphaned. To achieve this, a LOCKED_IN period is required, and 2 weeks is reasonable.

One thought I had is that since BIP141 will lock in after signaling 1916 out of the 2016 blocks in a difficulty readjustment period, it would be possible to start enforcing during the LOCKED_IN state and give miners a short upgrade period by delaying the enforcement by ~100 blocks, which would still guarantee BIP141 lock-in by the next readjustment. So something like (state == ACTIVE || (state == LOCKED_IN && blockheight % 2016 >= 100))

That would of course only give a grace period of about 16 hours depending on block luck, rather than the approximately two weeks. I expect miners will be watching this closely, so this may be sufficient, but again, is of course still riskier orphaning-wise than waiting a full period.
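The early-enforcement predicate proposed in this comment can be sketched directly (the state names mirror the BIP9 state machine; this is an illustration of the proposal, not merged code):

```cpp
#include <cassert>

// Sketch of the early-enforcement predicate proposed above: enforce during
// ACTIVE, or during LOCKED_IN once a ~100-block grace window at the start
// of the 2016-block retarget period has passed.
enum ThresholdState { THRESHOLD_STARTED, THRESHOLD_LOCKED_IN, THRESHOLD_ACTIVE };

bool EnforceBit1(ThresholdState state, int nHeight)
{
    return state == THRESHOLD_ACTIVE ||
           (state == THRESHOLD_LOCKED_IN && nHeight % 2016 >= 100);
}
```

Since lock-in requires 1916 of 2016 blocks, delaying enforcement by 100 blocks into the LOCKED_IN period still guarantees BIP141 lock-in by the next retarget, at the cost of a ~16-hour rather than ~2-week grace window.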

@sneurlax

sneurlax commented Jun 5, 2017

ACK Edit: @kek-coin, BIP91 inclusion in Segwit2x is an improvement over its current state, is all.

@JaredR26

JaredR26 commented Jun 5, 2017

So, with just BIP141 and 1MB base size, we get 1000000 bytes for Base Size and 3000000 bytes for witness data, Max Weight = 4000000 bytes.

With an increase of the Base Size to 2MB, while maintaining the same SegWit ratio/discount, we'd get 2000000 bytes for Base Size and 6000000 bytes for witness data, Max Weight = 8000000 bytes.

In testing, we saw that standard BIP141 SegWit with 1MB Base Size results in ~2-2.1MB blocks during optimal usage. Using the numbers I described above for the 2MB base size, the expected results would become ~4-4.2MB blocks during optimal usage -- which is what I believe most in the community expect from the "SW+2MB" compromise we've been asking for all this time.

If we are going to go beyond the "minimal changes to satisfy the DCG agreement" then I think bikeshedding the blockweight metric is a far less interesting use

It can easily be argued here, by either side, that changing the maxblockweight metric is "bikeshedding", or that NOT changing it is.

There is no clarity in the agreement about whether it is desired to double that or not. There is also no clarity from the community about whether such a thing is necessary, desired, or opposed.

I believe Luke-jr originally suggested clamping maxblockweight to 2.0mb, which definitely appears to fall outside the intention of the compromise.

How about a compromise, then? If actual testing of optimal use of segwit shows a 2.1x multiplier, an 8x maxblocksize metric is most likely going to serve no purpose. The maxblockweight metric could be left as 4.0, 4.2, or 4.5mb as a compromise with those who fear runaway blocksizes. As the expected impact of that is either an arbitrary limit of 0.0-0.2mb, or else 0.0mb, it does truly seem like a small imposition to add. If 8.0mb is unlikely to ever be hit, it would be splitting hairs to argue over maintaining that limit. Moreover, a limit of 4.0, 4.2, or 4.5 would also help tamp down the rhetoric about the segwit discount, as the hypothetical discount would be halved.

Thoughts? Is there some benefit to the 4.5-8.0 range of maxblockweight that isn't clear in the discussions so far?

@opetruzel

This being a hardfork presents us with an opportunity to tweak SegWit's ratio, discount, and Max weight variables, such that their impact is not as severe or sudden at the time of hardfork activation.

In April, I submitted a draft proposal to the mailing list that introduced the concept of "Discount Governing." Put simply, it's just a hard-coded schedule of increases for both the Discount and Max Weight variables. Seen here:
https://www.reddit.com/r/Bitcoin/comments/63m2sn/bip_draft_base_size_increase_and_segregated/

The scheduled increases found in that proposal assumed that BIP141 was not activated first, so it began with NO SegWit discount (Base Size and Max Weight both set to 2000000 bytes) and increased from there every 24,192 blocks (roughly 6 months). Each incremental increase provided: +12.5% to Discount and +1000000 bytes to Total Weight (Referred to hereafter as Max Weight).

Since this new SegWit2x involves activating the BIP141 softfork first, we could skip to the "FD+48,384 blocks" as a starting point -- therefore beginning with Discount = 25% and Max Weight = 4000000 bytes.

The rest of the increases could then continue to follow the schedule described in the proposal's table, ending with the Discount = 75%, Max Weight = 8000000 bytes after approximately Flag Day + 2 years.

This "governing" of the increases in both Discount and Max Weight may satisfy all parties who have concerns about either the Discount or Max Weight increasing too dramatically at the start.
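The schedule described above can be sketched as a step function (hypothetical helper; the parameters are taken from the comment: start at 4,000,000 weight after the skipped steps, +1,000,000 every 24,192 blocks, capped at 8,000,000):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Sketch of the "Discount Governing" schedule described above (hypothetical
// helper, parameters from the comment, not merged code).
int64_t ScheduledMaxWeight(int64_t blocksSinceFork)
{
    const int64_t startWeight = 4000000;
    const int64_t stepWeight  = 1000000;
    const int64_t interval    = 24192; // roughly six months of blocks
    const int64_t capWeight   = 8000000;
    return std::min(capWeight,
                    startWeight + (blocksSinceFork / interval) * stepWeight);
}
```

Four steps of 24,192 blocks (about two years) take the cap from 4,000,000 to 8,000,000, matching the endpoint described in the proposal.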

@opetruzel

@JaredR26 - my only concern with installing a hard cap for Max Weight at 4500000 bytes, or similar, is that we don't yet know what impact other future improvements may have on the optimal results. Upcoming developments like Schnorr signatures and Aggregated Signatures may result in blocks that go beyond that figure. Has anyone gamed out what the results of such efficiencies might be with BIP141? (In terms of block size).

Future developments in layer 2 scaling may also result in more complex SegWit transactions becoming the norm (more multisig tx, etc), thus pushing beyond the 2.1x increases we see with just BIP141 under normal usage.

The beauty of either setting it at 8000000 from the start, or using the gradual increases I described in my Discount Governing proposal, is that the Max Weight can be locked in at anything lower than 8000000 bytes using a simple soft fork (if real world results show us that we don't need the additional room).

@JaredR26

JaredR26 commented Jun 5, 2017

@JaredR26 - my only concern with installing a hard cap for Max Weight at 4500000 bytes, or similar, is that we don't yet know what impact other future improvements may have on the optimal results

A valid point. I think your gradual increase proposal is a good one, the only question is would those who prefer a lower maxblockweight find that to be an acceptable schedule/process?

@jgarzik

jgarzik commented Jun 15, 2017

Updated with seasoning requirement per issue #14

@@ -15,7 +15,7 @@ static const unsigned int BIP102_FORK_BUFFER = (144 * 90);
/** The maximum allowed size for a serialized block, in bytes (only for buffer size limits) */
static const unsigned int MAX_BLOCK_SERIALIZED_SIZE = (8 * 1000 * 1000);
/** The maximum allowed weight for a block, see BIP 141 (network rule) */
-static const unsigned int MAX_BLOCK_WEIGHT = 4000000;
+static const unsigned int MAX_BLOCK_WEIGHT = (8 * 1000 * 1000);


@jgarzik
shouldn't BIP102active = true before this changes to 8M? I believe it should remain at 4M for the SegWit softfork, and then only change to 8M with the hardfork.

@jgarzik (author)


The base block size limit seems to keep it clamped in practice; at the same time, to make things clearer, it is reasonable to canonicalize (if slightly less efficient).


To tell you the truth, I'm not sure what effect, if any, the MAX_BLOCK_WEIGHT and MAX_BLOCK_SERIALIZED_SIZE would have if they're each raised to 8M with the softfork. I considered the fact that they may be clamped in practice, but I've never tested BIP141 with higher weight limits. It's just something to consider.


@opetruzel opetruzel Jun 16, 2017


@jgarzik
I've thought about it some more. While the largest SegWit block I've seen in testing was only ~3.7MB, it may be possible to create a complex enough series of transactions that result in a SegWit block larger than 4MB. If that were to happen with the higher 8M limit, the result would be considered an invalid block by all other SegWit nodes that are not SegWit2x. For that reason, I'd personally err on the side of caution and keep the limit at 4M with the softfork, only increasing to 8M with the hardfork.


NACK, this should be updated only if the HF activates.

@jgarzik jgarzik changed the title from "[WIP] 2M base block size increase" to "2M base block size increase" Jun 16, 2017
@jgarzik

jgarzik commented Jun 16, 2017

With the finalization of the block weight detail, removing the "[WIP]" label.

@opetruzel

ACK for MaxBlockWeight change. Nicely done.

@luke-jr

luke-jr commented Jun 16, 2017

This makes blocks up to 8 MB. NACK, the agreement was for 2 MB only.

@mjamin

mjamin commented Jun 16, 2017

@luke-jr the agreement is clearly for a 2MB hard fork plus SegWit.

If you want to attack the agreement for the imprecise language that it has and the confusion around it then you should do that explicitly, IMHO.

@luke-jr

luke-jr commented Jun 16, 2017

2 MB hardfork, I get that. Segwit, sounds great. But that doesn't in any way imply an 8 MB hardfork.

@opetruzel

@luke-jr
You know as well as everyone here that the blocks will rarely be larger than 4MB with normal transaction behavior, and I've never seen you refer to BIP141 SegWit as a 4MB fork. You've consistently stated that SegWit provides ~2.1MB blocks, and SegWit2x simply doubles that result -- as expected.

@jli225

jli225 commented Jun 16, 2017

@luke-jr Luke, nonsense. Please curb yourself and don't troll here by spamming specious misinformation with naive word tricks. At least you can try to be honest with yourself, although it's too tough for you. That's why you were called a poisonous guy.

PS: I will delete this comment immediately after Luke's bad faith post is deleted. Sorry for this distracting post. We have all known how such trolls corrupted the development and friendly atmosphere since years ago.

@jacob-eliosoff

As discussed umpteen times both here and previously, the original BIP141 segwit soft fork increased the maximum blocksize to 4MB, so of course Segwit2x should increase the theoretical max blocksize to 8MB. It is really sheer bad-faith obstructiveness to discuss this here any further.

@stefment

stefment commented Jun 16, 2017

How is it bad faith to discuss a change to Bitcoin, particularly if you think the change will be harmful? The NY agreement is not law, and the less they listen, the worse their chances of a successful hardfork will be, imo.

@mpatc

mpatc commented Jun 16, 2017

I feel like this discussion is way too long, and certain people are doing this intentionally. Certain people have shown an amazing ability to both 'keep the conversation technical' and troll at the same time.

Whether or not this lives up to some agreement is outside the scope of this PR and belongs on r/btc, because the other one censors.

I think it's better if we come up with rules on trolling ASAP, to keep the toxicity at bay.

@benkloester

benkloester commented Jun 16, 2017

@stefment At the least, it's out of scope for this project, which aims to create an implementation compatible with the NY agreement, such as we understand it.

Segwit as implemented in Core makes the block weight 4MB. To come and say:

This makes blocks up to 8 MB. NACK, the agreement was for 2 MB only.

is completely bad faith, because it requires applying different standards to your own work vs others. You don't catch Luke-jr saying Segwit "makes blocks up to 4MB", so it's disingenuous to say this makes 8MB blocks.

Either Segwit has 1MB blocks, and this is an increase to 2MB, or Segwit makes 4MB blocks, and this makes 8MB blocks. Either way, this is doubling capacity in a manner orthogonal to Segwit, which is how I interpret the 2MB section of the NY agreement.

@kek-coin

Whether or not this lives up to some agreement is outside the scope of this PR and belongs on r/btc, because the other one censors.

s/censors/moderates/

@jgarzik jgarzik merged commit c11c37b into segwit2x Jun 16, 2017
@jgarzik jgarzik deleted the 2017_2m_blocksize_v2 branch June 16, 2017 14:20
@@ -27,7 +27,7 @@ static const unsigned int MAX_STANDARD_TX_WEIGHT = 400000;
/** Maximum number of signature check operations in an IsStandard() P2SH script */
static const unsigned int MAX_P2SH_SIGOPS = 15;
/** The maximum number of sigops we're willing to relay/mine in a single tx */
-static const unsigned int MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
+static const unsigned int MAX_STANDARD_TX_SIGOPS_COST = 16000;

Where does that 16000 number come from? Should it have a comment?

@jgarzik (author)


It's the same value as before.

@deadalnix

There are no tests. I have no idea why this gets 221 comments. A patch coming with no tests would see itself rejected on that basis only in most projects. That is especially true if the code in question is managing consensus for a $30B economy.

@jgarzik

jgarzik commented Jul 8, 2017

@deadalnix

  1. There are tests, of both blocksize and transaction size.
  2. More coming, for sure
  3. Activation tests are performed on testnet5 and in simulation, and are not reflected in unit tests.

hovah pushed a commit to hovah/bitcoin that referenced this pull request Aug 11, 2017
c521b3a Merge btc1#11: fixup define checks. Cleans up some oopses from btc1#5.
8b1cd37 fixup define checks. Cleans up some oopses from btc1#5.
6b1508d Merge btc1#6: Fixes typo
fceb805 Merge btc1#10: Clean up compile-time warnings (gcc 7.1)
0ec2a34 Clean up compile-time warnings (gcc 7.1)
d4c268a Merge btc1#5: Move helper functions out of sse4.2 object
8d4eb08 Add HasAcceleratedCRC32C to port_win.h
77cfbfd25 crc32: move helper functions out of port_posix_sse.cc
4c1e9e0 silence compiler warnings about uninitialized variables
4953164 Merge btc1#2: Prefer std::atomic over MemoryBarrier
2953978 Fixes typo
f134284 Merge btc1#1: Merge upstream LevelDB 1.20
ba8a445 Prefer std::atomic over MemoryBarrier

git-subtree-dir: src/leveldb
git-subtree-split: c521b3a
@ghost ghost mentioned this pull request Dec 6, 2017