Testnet stuck on block 27070 #65

Closed
nomnombtc opened this Issue Jul 10, 2017 · 103 comments

nomnombtc commented Jul 10, 2017

Hello,

it seems someone mined a bunch of blocks yesterday, which may have triggered
the BIP102 rule that rejects blocks that are too small. But whoever did it did not
follow up with an actual bigger block, so two of my nodes rejected it with "bad-blk-length-toosmall",
banned a ton of nodes, and are now stuck on block 27070.

I also have a Bitcoin Unlimited node up on testnet5, which has no such rejection built in,
and that one is now on block 32767...

2017-07-09 21:26:53 UpdateTip: new best=0000000035a7b078c8b54e33b496dcbd66f8d52049da3684d80291d1cc13f29a height=27070 version=0x20000000 log2_work=56.568487 tx=1081252 date='2017-07-09 21:26:51' progress=1.000000 cache=0.3MiB(1034tx)
2017-07-09 21:26:53 ERROR: AcceptBlock: bad-blk-length-toosmall, size limits failed (code 16)
2017-07-09 21:26:53 ERROR: ProcessNewBlock: AcceptBlock FAILED
2017-07-09 21:26:53 ERROR: AcceptBlockHeader: block 0000000016b2af58fc30fef380a4cec7262858e68aaa0bad37a9c419257d0636 is marked invalid
2017-07-09 21:26:53 ERROR: invalid header received
2017-07-09 21:26:53 ProcessMessages(headers, 82 bytes) FAILED peer=10
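
A minimal way to confirm from the CLI that a node is sitting on this tip and treats the other branch as invalid (a sketch, assuming a standard bitcoin-cli setup):

# height and hash of the tip this node considers best
bitcoin-cli getblockcount
bitcoin-cli getbestblockhash

# all chain tips this node knows about; the rejected branch should
# show up with status "invalid" (or "headers-only" if only headers were seen)
bitcoin-cli getchaintips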

christophebiocca commented Jul 10, 2017

Can you confirm that your node banned its peers for relaying a too-small block? Run getpeerinfo and paste the output here if you can't interpret it?
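
For interpreting the getpeerinfo output, a rough sketch that pulls out the fields relevant here, assuming jq is available (addr, subver, banscore and synced_blocks are standard fields of the RPC response):

bitcoin-cli getpeerinfo | jq -r '.[] | [.addr, .subver, .banscore, .synced_blocks] | @tsv'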

nomnombtc commented Jul 10, 2017

I think it mostly banned nodes that were running slightly older versions like 1.14.1, as well as my BU node (5.135.186.15), because they kept extending this chain without rejecting the block (the BU node is now on block 32826).

getpeerinfo: https://pastebin.com/Au8CDaDs

listbanned:

[
  {
    "address": "5.135.186.15/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "52.168.136.58/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "52.213.126.249/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "82.201.92.138/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "101.37.33.210/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "101.99.31.77/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "104.239.175.108/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "115.75.4.185/32",
    "banned_until": 1499722044,
    "ban_created": 1499635644,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "153.169.136.25/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "195.154.69.36/32",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }, 
  {
    "address": "2001:41d0:8:c30f::1/128",
    "banned_until": 1499722014,
    "ban_created": 1499635614,
    "ban_reason": "node misbehaving"
  }
]
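
If some of these bans turn out to be collateral damage, they can be lifted from the CLI once things are sorted out; a sketch using the stock setban/clearbanned RPCs (the address below is just the first entry from the list above):

# drop a single entry from the ban list
bitcoin-cli setban "5.135.186.15/32" remove

# or clear the whole ban list at once
bitcoin-cli clearbanned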

christophebiocca commented Jul 10, 2017

It turns out the DOS-ban is intentional, unlike in the first draft of the hard-fork-on-block-X PR.

Given that, what you're seeing is the intentional outcome of wipeout protection, except that no blocks are being built by segwit2x nodes for the testnet. The chain is forked permanently (at least for 1MB nodes, BU might switch back and forth based on which chain is the longest).

nomnombtc commented Jul 10, 2017

Ok, so the code seems to work like it should. This is still a strange event;
I would have expected the miners to be prepared for this condition,
ready to mine the >1MB block in time.

I guess a miner needs to create a big block now for the chain to continue...

jheathco commented Jul 10, 2017

https://testnet5.blockchain.info/ appears to be continuing to follow the fork with 1MB blocks, while http://btcfaucet.ix28uktqsp.us-west-2.elasticbeanstalk.com/ (which I've compiled to run the latest beta release) appears to be on the chain stuck at block 27070.

Assuming, as @nomnombtc stated, we're just waiting for a miner to create a single block > 1MB to continue this blockchain. I'm not sure what solution was selected to ensure we've got this covered when live - assuming we'll have enough of a backlog of transactions in the mempool doesn't seem like a reliable option.

betawaffle commented Jul 10, 2017

How long has it been stuck? Is anybody actually mining 2x on testnet5?

jheathco commented Jul 10, 2017

There are transactions in the mempool and I'm assuming there are still miners on testnet5 - likely just waiting on enough transactions to occupy a > 1mb block.

betawaffle commented Jul 10, 2017

Does anyone have mempool stats?
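
For reference, mempool stats can be pulled from any node with the stock RPCs; a sketch (getmempoolinfo gives the transaction count and total bytes, getrawmempool true gives per-transaction detail):

bitcoin-cli getmempoolinfo
bitcoin-cli getrawmempool true | head -n 40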

sneurlax commented Jul 10, 2017

Should have used the hardfork bit instead of a hacky kludge 🙃

betawaffle commented Jul 10, 2017

Have there seriously been no miners or testers for nearly 24 hours!?

JaredR26 commented Jul 10, 2017

Should have used the hardfork bit instead of a hacky kludge

This is only a problem on testnet where there are not enough transactions to reach the limit easily without scripting spam.

betawaffle commented Jul 10, 2017

And nobody planned on scripting spam (aka. testing)?

GratefulTony commented Jul 10, 2017

So basically, we're supposed to disregard this bad testing outcome [FAIL] because the test environment wasn't properly thought out?

JaredR26 commented Jul 10, 2017

Pay attention to the facts people. The non-segwit2x test chain was 5,697 blocks ahead 7 hours ago. Someone jumped on testnet with a modern ASIC miner under the non-segwit2x codebase and mined nearly 6000 blocks in less than 1 day. Someone is screwing with the chain intentionally before the devs intended to start the test.

So yes, you're supposed to disregard attacks by trolls that can't possibly happen on mainnet. Or you can just continue to attack the project for lulz; no one with any critical thinking skills is going to listen to you after looking at the facts.

GratefulTony commented Jul 10, 2017

So how are we going to test this under adversarial conditions? Or do we conclude that "testing is impossible"?

JaredR26 commented Jul 10, 2017

So how are we going to test this under adversarial conditions? or do we conclude that "testing is impossible"?

Please describe an adversarial condition where the main blockchain could get 6,000 blocks (three difficulty changes) in a 24 hour period with an empty memory pool. After you do so, we can discuss how to test that situation.

buzztiaan commented Jul 10, 2017

lol, issues a $5 altcoin can overcome are too big for btc1


jheathco commented Jul 10, 2017

@GratefulTony once the first block of > 1mb is created the blockchain will continue as normal. What @JaredR26 stated makes perfect sense.

betawaffle commented Jul 10, 2017

Why the delay in testing?

GratefulTony commented Jul 10, 2017

@jheathco

Adversarial conditions on mainnet will be entirely different when real money is on the line; these kids are pulling your feathers for lulz. Don't expect reality to be any kinder than a test tube, even if this particular failure mode is unlikely.

My question is: what is the "Plan B" for testing this code?

JaredR26 commented Jul 10, 2017

@betawaffle

Why the delay in testing?

I'm not sure, as I'm not privy to the testing plan (and agree that that part should be more public and clear).

If you and @GratefulTony think this scenario is a feasible adversarial condition for mainnet, or disagree with my speculation on the cause of the issue, can you clarify why?

If instead either of you agree with my response that testing for adversarial conditions that cannot possibly occur on mainnet is not necessary, can you state so? There's so much anger and deceit in this debate - from every side - that a lack of clarity will only contribute to more anger and deceit in the future from at least one side of the debate. Fair?

GratefulTony commented Jul 10, 2017

@JaredR26

I'm trying to be as productive as possible here: what's plan B to test the code if testnet doesn't play nice? Get more real miners to mine testnet?

betawaffle commented Jul 10, 2017

@JaredR26 I agree that's not an adversarial scenario for mainnet. The concern I have is with the testing plan, which AFAICT either has no transactions or hasn't started yet (which doesn't make much sense).

NiKiZe commented Jul 10, 2017

@GratefulTony No plan B is needed; just one >1MB block once there are enough transactions to fill it. The worst-case scenario on mainnet is that it works as expected!

NiKiZe commented Jul 10, 2017

One can only conclude that the test so far is a success: S2X refuses blocks that are not >1MB when they need to be. Not having enough transactions is a non-issue on mainnet. The outcome will only be clear once the needed volume of transactions has been reached.

JaredR26 commented Jul 10, 2017

My question is: what is the "Plan B" for testing this code?

I suspect the test will need to be restarted, as the devs won't be able to see the effects of the split in real time anymore. To prevent another attack they may need to plan a different testnet, starting over with better parameters to get it done quickly / control when it activates. Unfortunately that testnet may need to be run in secret (thanks, anonymous attacker, you've done a great service here!).

I'm trying to be as productive as possible here: what's plan B to test the code if testnet doesn't play nice? Get more real miners to mine testnet?

You still didn't either agree with what I said, or state what part you disagreed with... Can you do that?

I don't think that adding miners to testnet is the solution; getting into a testnet arms race with anonymous attackers wouldn't be productive. I'm open to suggestions, or maybe they will weigh in (I'm not privy to the actual plans), but the only ideas that come to mind initially would be tighter control over the activation process in the testnet code and restarting, or a secret new testnet. Neither is a great solution.

@JaredR26 I agree that's not an adversarial scenario for mainnet. The concern I have is this testing plan which AFAICT, either has no transactions, or hasn't started yet (which doesn't make much sense).

Thank you for stating so. I agree that the lack of clarity around the testing plan isn't ideal. Unfortunately it may become more secret now to prevent a repeat of this from interfering with observations... Making it more open and still thorough within the time limit might be a harder path, sadly.

betawaffle commented Jul 10, 2017

@JaredR26 You mean a private testnet? That will boost confidence, for sure.

JaredR26 commented Jul 10, 2017

@GratefulTony No plan B is needed, Just one >1MB block once there is enough transactions to fill it. Worst case scenario on mainnet is that it will work as expected!

I'm not sure that this is sufficient testing. I'd much rather be watching / debugging multiple nodes/screens at the moment of hardfork to see what happens, when, and to whom.

One could only conclude that the test so far is a success, S2x refuses blocks that is not >1MB when they need to be.

That's one test and one potential failure mechanism. I doubt it was intended, though, and it is quite far from a thorough test of the variety of outcomes.

betawaffle commented Jul 10, 2017

As a bitcoin user, "trust us, we tested it" isn't going to fly.

JaredR26 commented Jul 10, 2017

@JaredR26 You mean a private testnet? That will boost confidence, for sure.

Agreed, not ideal. If we have better suggestions we should list them to help the plan. Maybe they could whitelist specific miners under their control on testnet? That wouldn't help, though, if the whitelist were public, as the attacker could mine to the whitelisted targets.

As a bitcoin user, "trust us, we tested it" isn't going to fly.

Based on your statements, I don't think "segwit2x" would fly with you either, so that statement doesn't really mean anything. I agree that openness would be better if at all possible, but openness would probably be worse than either missing deadlines significantly or poor testing, if those were the only options.

jgarzik commented Jul 10, 2017

To summarize the scenario: the HF trigger point was activated and behaved as expected. The chain rules require a >1MB block at this point.

It is normal for test networks to have minimal mining power. This implies that anyone can connect a miner to the Bitcoin Core testnet or the segwit2x testnet and disrupt the chain. This is a known attribute of test networks, which does not occur on mainnet.

In this case, someone accelerated a test plan scheduled far in the future to trigger immediately. A very low cost annoyance attack, in sum.

It was expected when the segwit2x effort began that folks would try to disrupt segwit2x testing, and make it harder to test the software. This falls within the realm of expected behaviors.

mkwia commented Jul 10, 2017

@jgarzik do you intend to use private testnets?

betawaffle commented Jul 10, 2017

I agree, this matches exactly what I would expect this code to do. The only problem I see is that nobody is creating the necessary transactions. Why has nobody attempted to use the testnet for so long?

JaredR26 commented Jul 10, 2017

In this case, someone accelerated a test plan scheduled far in the future to trigger immediately. A very low cost annoyance attack, in sum.

Is the test going to be repeated as intended, or is that portion of it considered complete?

christophebiocca commented Jul 10, 2017

#51 blocks actual testing, AFAICT.

JaredR26 commented Jul 10, 2017

Why has nobody attempted to use the testnet for so long?

It's testnet; they'd have to create 2000+ transactions (or equivalent) in between every single block. Since the test apparently wasn't planned anytime soon, I doubt anyone saw a need to do that. The hardfork itself is farther out and probably second priority to the segwit activation testing.

justvanbloom commented Jul 10, 2017

So this unplanned testnet scenario is a fault of the setup? Is it impossible on mainnet to have blocks below 1MB?

betawaffle commented Jul 10, 2017

they'd have to create 2000+ transactions(or equivalent) in between every single block.

They only have to do that once right now.

JaredR26 commented Jul 10, 2017

So this unplanned testnet scenario is a fault of the setup? Is it impossible on mainnet to have blocks below 1MB?

Only one block is required to be >1mb. What's impossible about this situation is having a nearly-empty mempool for hours on end. We're lucky if the mempool is nearly empty for even one block today, much less hours.

they'd have to create 2000+ transactions(or equivalent) in between every single block.

They only have to do that once right now.

Considering Jeff first responded 12 minutes ago and there are several devs involved... Give them time. I doubt Jeff can even answer my/our questions about the plan right now before he talks to the other individuals involved. Coordinating people and making decisions takes time, give him some time.

jgarzik commented Jul 10, 2017

To @ all re test plan, as @christophebiocca hints, there are changes to the test implementation needed for further activation scenario testing (ref #51 et al). This would necessitate updating all test clients with incompatible changes. In short, test network disruption was expected, even in absence of help from jokers on the Internet.

@mkwia: Several of us already use private testnets to make first-pass testing before pushing PRs to github for review.

However, a public testnet is very important. We will use that unless jerks on the Internet make it impossible to use a public testnet.

DDoS'ing of alternate implementations is a common technique in the Bitcoin space, sadly.

@justvanbloom: See "Hard Fork on Block X", #29

snavsenv commented Jul 10, 2017

@tyler-smith Okay. Thank you. Last question: at what point would btc1 nodes isolate themselves from the network? Will this be before or after enough transactions have been received to craft the anti-replay block? Oh, and last but not least, wouldn't most people and businesses stop transacting leading up to a hardfork?

JaredR26 commented Jul 10, 2017

The real issue here is that BTC1 is presuming there will be enough tx in the mempool to create a >1MB block at fork time, or else pause the network until there are enough tx. If the fork won't work with a non-full-mempool, then the fork code is broken IMO.

Can you please check for me the percentage of the time that Bitcoin had less than 1MB in the mempool over the last month?

And after that, can you please check when the last time was that Bitcoin got less than 1MB of transactions added to the mempool over the course of an hour? Three years back, maybe? One hour is nothing; we get block gaps longer than that all the time due to variance.

Mempool concerns are a nonissue. It isn't 2011 anymore. There are valid advantages to the hardfork bit, and there are valid disadvantages to it. It was discussed, and the hardfork bit was not used in order to maintain compatibility with SPV wallets.

jheathco commented Jul 10, 2017

@phzi here is a two-year chart of past block sizes: https://blockchain.info/charts/avg-block-size?timespan=2years

You are correct that it is an unprovable assumption that nothing changes - all assumptions are unprovable. However, in the worst case scenario that block sizes continue to decrease (even past those seen 2 years ago), it will simply result in a somewhat longer period before the first block after the hard fork.

JaredR26 commented Jul 10, 2017

Oh and last but not least, wouldnt everyone stop transacting leading up to a hardfork?

In theory this might be a concern, but without replay protection it won't wind up mattering. Transactions will go onto both forks (though the legacy fork may not have any blocks for them, but that's another issue entirely). If nothing else, people like gambling operators who need to periodically consolidate small UTXOs would fill the gap to take advantage of low fees; they currently do this on weekends.

pekatete commented Jul 10, 2017

@snavsenv - The issue of mainnet having to wait for the mempool to fill up is not going to happen, as there is ALREADY a >1MB bounty transaction that can be used.

floreslorca commented Jul 11, 2017

@JaredR26 what is the justification for the >1MB requirement again? It seems risky since it's not even widely tested. I get your assumptions, but as @snavsenv pointed out: are you expecting gambling operators to keep the chain going after the HF? There's already a lot of misinformation out there, and relying on what people might or might not do for the health of the chain seems careless.

rcawston commented Jul 11, 2017

I'm reading "this happened in the past, so it will happen in the future" a lot here... it's downright wrong to make that assumption.

@jheathco still not testable or provable... therefore dangerous. You're presuming something and relying on it rather than testing and proving...

There are 1MB+ txs, yes, but there's nothing preventing their inputs from being spent before they can be used. The assumption that they will be available has already been shot down as dangerous.

pekatete commented Jul 11, 2017

@floreslorca The justification for the >1MB requirement is wipeout protection, and as pointed out above, there'll be no delay at all in submitting a block when 2x activates, as there is at least one >1MB bounty tx on mainnet.

opetruzel commented Jul 11, 2017

@pekatete
The current fix for the quadratic hashing problem limits max individual tx size to 1MB, so I don't think a bounty transaction larger than that would work. How could it?
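
One way to sanity-check the size of such a transaction from the CLI; a sketch only, where $TXID is a placeholder and the node is assumed to run with -txindex so getrawtransaction can look it up (the serialized size in bytes is half the length of the returned hex):

HEX=$(bitcoin-cli getrawtransaction $TXID)
echo $(( ${#HEX} / 2 ))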

JaredR26 commented Jul 11, 2017

@floreslorca

what is the justification for the >1MB again? seems risky since its not even widely tested. I get your assumptions

See Jeff's post 11 days ago in #29: #29 (comment)

A hardfork bit would require SPV nodes to update, whereas a >1MB block would not. A >1MB block is strictly the distinction between the legacy client and the s2x client, so it logically works as a requirement.

TBH, there are advantages and disadvantages to the hardfork bit approach. The attacks from Peter Todd and other Core devs regarding it are not based on sound technical reasoning; they're based on a desire to minimize compatibility between s2x and Core, and to enable SPV nodes to distinguish between the two chains. In other words, it's based on the assumption that the legacy chain will definitely survive.
Maintaining or breaking that compatibility has advantages and disadvantages depending on how the fork plays out.

If the legacy chain starves from lack of blocks and clients switch / PoW changes are attempted, the hardfork bit would have been the wrong choice because of the added work for SPV clients. If instead the legacy chain survives and we have two competing Bitcoins, the hardfork bit would have been the correct choice and may wind up as necessary as well as replay protection. Core could also add a hardfork bit to split SPV nodes from the s2x chain, which may be the least of their concerns if they attempt a PoW change on the block-starved legacy chain.

Anyone claiming that the hardfork bit is absolutely the best choice all around is not being honest, is poorly informed, or is a time traveller who decided that getting into this debate would be fun(??). Without knowing how the fork will go down, no one can say for sure which approach is "better."

NiKiZe commented Jul 11, 2017

@opetruzel it just needs to be 1,000,000 bytes, which is the limit; adding the block header and the coinbase makes the block > 1,000,000 bytes (and a transaction of that size already exists).

The mempool will have the needed transactions, and if not, it is easy for any miner to create them on their own (on mainnet).
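
A quick way to check whether a given block actually clears the 1,000,000-byte threshold; a sketch, assuming the stock getblock RPC and jq, with $BLOCKHASH as a placeholder:

SIZE=$(bitcoin-cli getblock $BLOCKHASH | jq .size)
if [ "$SIZE" -gt 1000000 ]; then echo "large enough"; else echo "too small"; fi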

snavsenv commented Jul 11, 2017

I'm sorry, but I cannot participate in this project any more. I have come to the conclusion that the people involved do not devote enough time to considering the possible consequences of their actions in order to minimise risk.

This replay protection method shows this. If testnet had not had this issue, would there have been any debate? And what other areas of the code have been overlooked, leaving problems waiting to surface? Have a nice day, and good luck with the project. Here's hoping that you won't screw up completely. Good luck to whoever associates their name with this project.

JaredR26 commented Jul 11, 2017

The assumption that they will be available has already been shot-down as dangerous.

You find me the last time that Bitcoin couldn't come up with 1MB of transactions within 2 hours if the chain halted (mempool + transactions added). If you can find a time in the last 3 years when that happened, I'll change my opinion and support what you're saying. Short of that, it's a ridiculous assertion to claim that transactions will just suddenly stop on Bitcoin.

pekatete commented Jul 11, 2017

@snavsenv

If testnet had not had this issue would there have been any debate?

That is what testnet is for: testing. In any case, as has been said several times, the code worked as intended in an adversarial environment.

jrallison commented Jul 11, 2017

You all are missing the point.

Moreover, btc1 by default will not make a block >1MB even if there are transactions in the mempool for it. You guys are just making your code all the more frightening by vigorously denying the issues.

That's the real issue at hand with btc1 atm. Will the currently released btc1 code deployed on testnet5 actually ever build a > 1MB block and make progress?

snavsenv commented Jul 11, 2017

Last post.

@pekatete do not expect me to believe the devs were hoping someone would mine 6k blocks and trigger the hardfork bit. In fact, someone was arguing they didn't. I think this supports my view that the devs of the project do not devote an adequate amount of time and overlook things as a result, making the project too ambitious for its own and Bitcoin's good. Have a nice day.

rcawston commented Jul 11, 2017

@JaredR26 you just once again argued that past events are proof of future events. If you insist on perpetuating this flawed logic, then it's unlikely you have any desire to see the flaw in blindly expecting there to be a mempool backlog at some random time in the future, or that enough people will risk their coins by transacting within days of the hard fork.

The only ridiculous assertion is from you, by ignoring a real issue.

There is no good reason for testnet5 sw2x miners to be stuck at the hard-fork point... it is something that would have been trivial to prevent in the implementation, and at least one method has already been given that doesn't involve making txs just for the purpose.

jrallison commented Jul 11, 2017

That's the real issue at hand with btc1 atm. Will the currently released btc1 code deployed on testnet5 actually ever build a > 1MB block and make progress?

Following up here. The 2MB fork seems stuck at block 27070 [1], and the 1MB fork has progressed to block 34931 [2].

That means there have been 7861 blocks during which the 2MB fork hasn't been able to make any progress.

The minimum-sized transaction during this period seems to be 0.29 kB [2], meaning the 1MB fork has processed at least 2.2 MB of transactions since the 2MB fork became stuck.

It seems the 2MB fork is stuck until code changes are made? (Or is my math wrong?)

  1. http://btcfaucet.ix28uktqsp.us-west-2.elasticbeanstalk.com/
  2. https://testnet5.blockchain.info/home

christophebiocca commented Jul 11, 2017

The minimum sized transaction during this period seems to be 0.29 kb. Meaning that the 1mb fork has processed at least 2.2 MB of transactions since the 2mb fork became stuck.

If they're coinbase transactions (or descend from them) they can't be used on the other side of the split so you have to take them out from your total calculations.

I'm testing mining/mempool stuff right now and will report back.
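
A rough way to check from the CLI whether a transaction directly spends a coinbase output (and so cannot be replayed on the other side of the split); a sketch only, with $TXID as a placeholder, assuming -txindex is enabled, and note it only inspects the immediate parent rather than the full ancestry:

PARENT=$(bitcoin-cli getrawtransaction $TXID true | jq -r '.vin[0].txid')
bitcoin-cli getrawtransaction $PARENT true | jq '.vin[0] | has("coinbase")'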

jrallison commented Jul 11, 2017

If they're coinbase transactions (or descend from them) they can't be used on the other side of the split so you have to take them out from your total calculations.

Thanks, makes sense! Bitcoin newb here.

dooglus commented Jul 11, 2017

Somebody with testnet5 coins should simply create ~10k transactions:

for i in {0..9999}; do bitcoin-cli sendtoaddress $addr 0.001; done
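
A slightly fuller sketch of the same idea, for anyone who wants to try it. It assumes a btc1 node already running on testnet5 with a funded wallet and RPC access via bitcoin-cli; the address variable, amount, and iteration count are illustrative, not a prescription:

# Send small amounts back to our own wallet to pile up non-coinbase transactions;
# at roughly 0.2-0.3 kB each, on the order of 4000-5000 are needed to exceed 1 MB.
ADDR=$(bitcoin-cli getnewaddress)

for i in $(seq 1 10000); do
    bitcoin-cli sendtoaddress "$ADDR" 0.001 || break   # stop early if the wallet runs dry
done

# Rough progress check: "bytes" shows how much transaction data is waiting in the mempool
bitcoin-cli getmempoolinfo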

@friendsofbitcoin

friendsofbitcoin commented Jul 11, 2017

@jrallison Out of curiosity, did you mine this with the default codebase seen here?

https://github.com/btc1/bitcoin/blob/segwit2x/src/policy/policy.h#L17

Thank You.

@jrallison

jrallison commented Jul 11, 2017

@jrallison Out of curiosity, did you mine this with the default codebase seen here?

I'm just a concerned bitcoin newb/hodler... I didn't mine it, just noticed the 2mb fork making progress after I commented here.

@jgarzik

jgarzik commented Jul 11, 2017

@friendsofbitcoin That is just a policy setting, which will likely remain untouched for the segwit2x release - thus defaulting to smaller blocks unless miners update their configuration files.

To mine a larger block, miners should opt into that with a setting in bitcoin.conf:

blockmaxweight=8000000
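
For miners who want to apply this, a minimal sketch, assuming the node uses the default ~/.bitcoin data directory (adjust the path and restart procedure to your own setup):

# Policy opt-in for building larger blocks; it does not change which blocks the node accepts.
echo "blockmaxweight=8000000" >> ~/.bitcoin/bitcoin.conf

# Restart so the new setting is picked up
bitcoin-cli stop
bitcoind -daemon
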
@jgarzik

jgarzik commented Jul 11, 2017

Chain is now un-stuck. Closing issue.

{
  "hash": "000000005a91d978c9527d202fc98acaebaee4f177f0b5d732f741d4c99b7a2d",
  "confirmations": 2401,
  "strippedsize": 1051890,
  "size": 1051890,
  "weight": 4207560,
  "height": 27071,
  "version": 536870930,
  "versionHex": "20000012",
  "merkleroot": "a9eb2319ca2c5ea19cae277b67da72b71913ee78134caa5a2cd3d57e1879ef6f",
  "tx": [
   ...
  ],
  "time": 1499734447,
  "mediantime": 1499635609,
  "nonce": 3338783941,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "000000000000000000000000000000000000000000000000017ba3c89e843173",
  "previousblockhash": "0000000035a7b078c8b54e33b496dcbd66f8d52049da3684d80291d1cc13f29a",
  "nextblockhash": "000000001482a422d9eb9cab055b4ce4672577540aa85e0800902e5a0ad2fd89"
}
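
For anyone who wants to confirm this from their own node (assuming it is synced to the segwit2x side of testnet5), the block details above can be reproduced with standard RPC calls:

# Tip height and best block hash as seen by the local node
bitcoin-cli getblockchaininfo

# Details of the block at height 27071, including strippedsize, size and weight
bitcoin-cli getblock $(bitcoin-cli getblockhash 27071)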

@jgarzik jgarzik closed this Jul 11, 2017

@jrallison

jrallison commented Jul 11, 2017

@friendsofbitcoin That is just a policy setting, which will likely remain untouched for the segwit2x release - thus defaulting to smaller blocks unless miners update their configuration files.

To mine a larger block, miners should opt into that with a setting in bitcoin.conf

Thanks for the clarification, @jgarzik!

That brings up one additional question, though: can a miner set

blockmaxweight=8000000

prior to the hard-fork block and still generate valid blocks at <= 4000000 block weight? Or will the mining code always attempt to build invalid 8000000-weight blocks prior to the hard-fork block?

Newb bitcoin user, but a professional developer, and briefly looking through the code it seems to use the provided blockmaxweight exclusively? But I'm a newb, so... probably wrong.

@jgarzik

jgarzik commented Jul 11, 2017

@jrallison Any blockmaxweight configuration setting which is too large for the consensus rules is clamped to the largest possible value permitted on the chain at that moment. It's ok to set a too-large value there.
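
One way to observe the effective limit from outside, assuming the node reports the consensus weight cap in getblocktemplate's weightlimit field as upstream Bitcoin Core does (a sanity check on a synced node with peers, not a reading of the internal clamping code):

# With blockmaxweight=8000000 in bitcoin.conf, inspect the template the node would mine.
# Before the hard-fork block the reported limit should still be the old 4000000-weight cap;
# after it, the larger cap applies.
bitcoin-cli getblocktemplate '{"rules": ["segwit"]}' | grep '"weightlimit"'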

@friendsofbitcoin

friendsofbitcoin commented Jul 11, 2017

@jgarzik Thanks for the clarification. Does this mean there might be a coordination problem if certain miners keep the default setting instead of updating the weight, leading to the same problem we see today?

@opetruzel

opetruzel commented Jul 11, 2017

@friendsofbitcoin
Good question! It sounds like "too large values" are ok, as they're constrained by consensus rules in consensus.h. However, I'm not sure the same is true with values that are too small, is it?

Given: DEFAULT_BLOCK_MAX_SIZE = 750000, DEFAULT_BLOCK_MAX_WEIGHT = 3000000, and a SCALE FACTOR of 4, it would be impossible for a miner to mine a block larger than 1MB at the predetermined blockheight for the hardfork... wouldn't it? What .conf settings are necessary to ensure the miner works before, during, and after the hardfork without modifications?
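
To make the arithmetic explicit (a rough sketch; it assumes the hard-fork block must carry more than 1,000,000 bytes of non-witness block data, and uses the standard rule that each non-witness byte costs 4 weight units, so base size can never exceed weight/4):

# Default policy cap: at most 3000000 / 4 = 750000 bytes of non-witness data per block,
# well short of the >1000000 bytes the fork block needs.
echo $((3000000 / 4))   # 750000

# Opt-in cap: 8000000 / 4 = 2000000 bytes of non-witness data is possible,
# enough for a >1 MB block once the consensus rules allow it.
echo $((8000000 / 4))   # 2000000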

@jgarzik

jgarzik commented Jul 11, 2017

@opetruzel Set blockmaxweight as above in bitcoin.conf.

@opetruzel

opetruzel commented Jul 11, 2017

@jgarzik
That's it? Ok, that's easy enough. Would that also work (or simply be completely ignored) prior to SegWit activation? If so, why not make that the default? It might be a good idea to have defaults that don't prevent miners from being able to mine the HF block.

@eumartinez20

eumartinez20 commented Jul 11, 2017

Hi guys,

I don't see that block in the testnet5 blockchain explorer. This 0.3 kB block shows at height 27071:

https://testnet5.blockchain.info/block/0000000016b2af58fc30fef380a4cec7262858e68aaa0bad37a9c419257d0636

Am I correct in assuming that testnet5 did not fork?

@jrallison

jrallison commented Jul 11, 2017

I don't see that block in the testnet5 blockchain explorer. This 0.3 kB block shows at height 27071:

testnet5.blockchain.info followed the 1MB (legacy) fork and kept progressing throughout this issue, while the 2MB fork was the one that was stuck (that domain is referenced above quite frequently as such).

Am I correct in assuming that testnet5 did not fork?

👍

@lacksfish

lacksfish commented Jul 11, 2017

testnet5.blockchain.info is syncing the right chain now.

@Cr4shOv3rrid3

Cr4shOv3rrid3 commented Jul 13, 2017

Nice, my doings are showing some results lately.

I was directing money from outside into the testnet. You guys don't know that mainnets are in reality the testnet environment; it's coded that way.

This is on purpose, for safety reasons, in relation to dumb developer guys.
It keeps people's doings outside and only lets guys have access to the real chain when the time is ready for it.
It took me a while to figure that out, but now things are running.

Next will be getting better synchronisation to the Ropsten testnet from Ethereum.

And getting better interaction for mycelium2toshi and vice versa.

I could also use some guys helping me with the customization for the apps.

thx in advance ;)
