Policy: Lower default limits for tx chains #6771

Merged
merged 1 commit into from Nov 12, 2015

Conversation

@morcos
Member

morcos commented Oct 6, 2015

Reduce the default limits on the maximum number of transactions, and the cumulative size of those transactions, in both ancestor and descendant packages, to 25 txs and 100 KB total size.

Please see email to bitcoin-dev for background.

I looked at some recent data and 2.6% of transactions in my node's mempool in the first 5 days of October would have been restricted by these new lower limits.
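The proposed defaults amount to a four-way package check at mempool acceptance time. The sketch below is illustrative only: `WithinPackageLimits` is a hypothetical helper, not Bitcoin Core's actual acceptance code, though the constant names mirror the `DEFAULT_ANCESTOR_SIZE_LIMIT` style used in src/main.h.

```cpp
#include <cassert>
#include <cstddef>

// Proposed defaults from this PR (counts in transactions, sizes in kilobytes).
static const unsigned int DEFAULT_ANCESTOR_LIMIT = 25;
static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 100;
static const unsigned int DEFAULT_DESCENDANT_LIMIT = 25;
static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 100;

// Hypothetical sketch: would a new transaction, whose in-mempool package has
// the given ancestor/descendant statistics (each count and size including the
// new tx itself), pass the proposed default limits?
bool WithinPackageLimits(size_t ancestorCount, size_t ancestorKB,
                         size_t descendantCount, size_t descendantKB) {
    return ancestorCount   <= DEFAULT_ANCESTOR_LIMIT &&
           ancestorKB      <= DEFAULT_ANCESTOR_SIZE_LIMIT &&
           descendantCount <= DEFAULT_DESCENDANT_LIMIT &&
           descendantKB    <= DEFAULT_DESCENDANT_SIZE_LIMIT;
}
```

A transaction with 26 unconfirmed ancestors, or whose ancestor package exceeds 100 KB, would be rejected even if every individual transaction is otherwise standard.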

@gmaxwell

Member

gmaxwell commented Oct 6, 2015

Concept ACK.

@Mirobit

Contributor

Mirobit commented Oct 6, 2015

Are there any other use cases than excessive gambling and spam that require more than 3 unconfirmed ancestors?
This seems like a good opportunity to fight spam without downsides for regular users.

@NanoAkron

NanoAkron commented Oct 7, 2015

So if you're only saving 2.6%, why bother? Why not increase the max block size to 1.026Mb instead?

@Mirobit Who are we to decide how bitcoin is used? What do you want to 'fight' next - porn? Donations to unsavoury nations? People who we don't agree with?


@morcos

Member

morcos commented Oct 7, 2015

@NanoAkron, to be clear, the goal here is not to restrict transactions. I wish the 2.6% number were 0.0%. Nor is the goal to prevent any specific type or kind of transaction, spam or otherwise. The goal is only to prevent particularly constructed chains of transactions which would serve to break critical functionality of the network. Such chains are likely only constructed for the particular purpose of attacking the entire Bitcoin system; however, the limits proposed to prevent these attacks may inadvertently restrict non-attack transactions. I'm just trying to be as transparent as possible by acknowledging that, had these limits already been in place, 2.6% of recent transactions would either have had to be delayed until some of their ancestors were included in blocks, or constructed from different utxos.

@petertodd

View changes

src/main.h
/** Default for -limitancestorsize, maximum kilobytes of tx + all in-mempool ancestors */
-static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
+static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 100;

petertodd Oct 7, 2015

Contributor

Note that this means a max size tx - 100KB - can't be respent. This could cause problems, e.g. with CPFP.
@morcos

morcos Oct 7, 2015

Member

@petertodd I was assuming that if people cared about that they could limit their transactions to 99.5KB and still chain a child or two off of it to pay extra fees. I suppose alternatively we could raise the default to 101?

The size limits would only have restricted approx 0.01% of transactions that passed the count limits over the last several months.
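The arithmetic behind this exchange can be sketched directly. `ChildAccepted` is a hypothetical helper, not Bitcoin Core's API: the ancestor size limit counts the new transaction plus all of its in-mempool ancestors, so a maximum-size 100 KB parent leaves no room for any child under a 100 KB limit, while a 99.5 KB parent (or a 101 KB default) leaves room for a small CPFP child.

```cpp
#include <cassert>

// Hypothetical sketch of the ancestor-size check for a single parent/child
// pair: the child is accepted only if its own size plus its unconfirmed
// parent's size fits within the ancestor package size limit.
bool ChildAccepted(unsigned parentKB, unsigned childKB, unsigned limitKB) {
    return parentKB + childKB <= limitKB;
}
```

Under a 100 KB limit a 100 KB parent cannot be respent at all, whereas trimming the parent to 99 KB, or raising the default limit to 101 KB, restores room for a 1 KB fee-bumping child.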
@TheBlueMatt

TheBlueMatt Oct 8, 2015

Contributor

I would agree with @petertodd here, we should set the size limit to something like 101 or 105.

@jgarzik

Contributor

jgarzik commented Oct 7, 2015

Example use case - bitcoin business(es) that depend on tx chains: https://twitter.com/taariqlewis/status/651160362629230592

(this is notably not an endorsement of that activity, as you'll gather from reading responses)

@matthieu

matthieu commented Oct 8, 2015

The largest number of unconfirmed transactions on a single address we've seen at BlockCypher was 164. That particular customer had a burst of activity from its end-users during a period of full blocks and long block gaps. Those 164 transactions were unevenly distributed across 3 UTXOs, leading to a chain of roughly 100 transactions. While I can't name them, a fair number of people here have used this service, so it's definitely a legitimate use case.

Based on this I'd keep the ancestor package at current limits and reduce the descendant package only by a factor of ~10x instead of ~40x.

At a higher level, while I believe this is a useful metric to have in Core, I'd avoid clamping down too hard on it. I'm worried it's an over-optimization that doesn't consider the mempool as a whole.

@TheBlueMatt

Contributor

TheBlueMatt commented Oct 8, 2015

You stated that you saw a long transaction chain in production, but gave no reason why your customer had to chain transactions together instead of using other utxos. Are you suggesting that this customer did not otherwise have funds available to pay out when they needed to and, thus, had to chain their own transactions?

Calling this a "useful metric" is a bit of an understatement. These limits, together, directly govern the complexity of mempool limiting algorithms and, to a slightly lesser extent, their effectiveness.


@matthieu

matthieu commented Oct 8, 2015

Ideally they'd have shredded their UTXOs a little better, but they didn't. Funds were available, but people don't anticipate worst-case situations well. They could have been more diligent; on the other hand, the more limits people can hit when transacting, the more complex developing around those limits becomes.

@TheBlueMatt

Contributor

TheBlueMatt commented Oct 8, 2015

Hmm, so if they had available UTXOs, were they spending UTXOs which they received from others or just making their own chains? Bitcoin Core's tx selection rule of "first pick txns which are already confirmed" is generally good hygiene. If they were spending UTXOs others had sent them, then it's something we should actively discourage anyway.


@taariq

taariq commented Oct 8, 2015

We at Serica and DigitalTangible actively use unspent tx chains to let our customers speed up their bitcoin user experience without having to wait on blockchain confirmations. These transactions are usually sequential and must happen between our customers and our marketplace of merchants and other customers. For example, a user agrees to place an order to purchase bitcoin and then make a bitcoin payment for a product or service with that bitcoin, should their desired price be met while they are unable to actively monitor their transaction.

We currently do not have a need to chain more than 5 unspents given our current use cases for onboarding new customers into the bitcoin ecosystem. Given this PR, we agree with its principle, since the proposal aims to limit MAX_ANCESTORS = 25 and MAX_DESCENDANTS = 25, which we think is reasonable. We have not yet seen a use case for chains of more than 25 unconfirmed transactions in our ecosystem.

However, we would like to publish our viewpoint that we wish to avoid a slippery slope of restrictions on unspents falling from 25 to 2 or even 0. The limits imposed should not prevent, at minimum, 5-step chains of transactions that are needed to give a customer temporary control over funds they would otherwise not have access to unless they waited for a confirmation before conducting another transaction. In these situations, where an instant purchase must be made under customer control, that btc must be sent to a customer's address and then be quickly relayed to a merchant or another party in a transaction to create a seamless experience. All of this must happen within 0 confs, because our customers will not wait for a whole confirmation and we do not wish to lose the opportunity to make Bitcoin more available and useful to a broader audience with higher performance demands.

Zero confirmations, when done properly, help increase adoption of Bitcoin and make it more competitive against other forms of payment. However, we do think it's good to prevent abuse of the system with reasonable constraints for the current ecosystem of applications and wallets.

@matthieu

matthieu commented Oct 8, 2015

@TheBlueMatt their own chain. And confirmed-first selection was used; incoming payments simply arrived faster than they confirmed, with not enough UTXOs available in that period of time.

@TheBlueMatt

Contributor

TheBlueMatt commented Oct 8, 2015

@taariq yea, I wouldn't be worried about that. No one wants to set a package size limit; it's only a requirement so that the mempool limiting and tracking algorithms have reasonable upper bounds on runtime. Still, I think 5 is more than anyone else had seen a use case for, so thanks for the feedback :)

@TheBlueMatt

Contributor

TheBlueMatt commented Oct 8, 2015

@matthieu Ahh, ok, thanks for the clarification. That is quite...annoying. I'm not sure I'd trust the code with much higher limits yet, but maybe it warrants more study instead of just a belt-and-suspenders limit, then.

@pstratem

Contributor

pstratem commented Oct 8, 2015

@matthieu it sounds like they could have been using sendmany

@matthieu

matthieu commented Oct 8, 2015

@pstratem there are several things they could have done better. How much do you want people to have to know in order to transact without bad surprises, though? Making it more complex is good for services like ours that swallow the complexity, but not for the ecosystem in general. Odd that I should be making that point.

@pstratem

Contributor

pstratem commented Oct 8, 2015

@matthieu eh? the sendmany rpc call is right there...

@TheBlueMatt

Contributor

TheBlueMatt commented Oct 8, 2015

I think the point was they're not using Bitcoin Core.


@matthieu

matthieu commented Oct 8, 2015

The point is also that they had no prior knowledge they would either need to batch their transactions or ever hit a transaction peak that high. Send-as-it-comes is what everyone starts with; batching can get tricky fast.

@morcos

Member

morcos commented Oct 8, 2015

It would be helpful to know whether there are other use cases like this that would be hampered.
I think if it's only once in a blue moon that something which used to work would now need more careful transaction preparation, then maybe this change is worth it on balance.

To Matt's point on the reasoning behind this change: it's also an understatement to say it's just needed to govern the complexity and effectiveness of mempool limiting. There are attacks which can cause the mempools of full nodes to become almost permanently unusable, which would prevent transaction relay and basically bring the network to a halt. How low we have to make these limits to prevent that may not be an exact science. The good thing is that if we do decide to start with a higher limit than 25, and these attacks are seen in the wild, it's only a command line argument to change it. But as we've seen with the minimum relay fee and the stress tests, it's still an issue if people have to bring down their nodes and restart them with different parameters to respond to an attack.
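Because these limits are plain startup options rather than consensus rules, operators can adjust them without a code change. A minimal sketch, assuming the `-limitancestorcount` / `-limitancestorsize` / `-limitdescendantcount` / `-limitdescendantsize` option names that ship with the mempool-limiting work (exact names and defaults should be checked against `bitcoind -help` for the release in question):

```shell
# Raise the chain-package limits above the proposed defaults
# (counts in transactions, sizes in kilobytes):
bitcoind -limitancestorcount=50 -limitancestorsize=200 \
         -limitdescendantcount=100 -limitdescendantsize=400

# Equivalent bitcoin.conf entries:
# limitancestorcount=50
# limitancestorsize=200
# limitdescendantcount=100
# limitdescendantsize=400
```

Restarting a node with different parameters mid-attack is exactly the operational cost described above, which is the argument for choosing a safe default up front.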

@MarcoFalke

Member

MarcoFalke commented Oct 8, 2015

NACK based on my reply below.

@pstratem they could have been using sendmany

You can't bundle customers' requests for transactions which have to be sent out immediately.

@morcos But as we've seen with the minimum relay fee and the stress tests, its still an issue if people have to bring down their nodes and restart them with different parameters to respond to the attack.

If you don't have the resources to run a node with the default settings, you can still use the -limit*count params to adjust them to your available resources. Also, this PR doesn't touch the release notes to educate users about the change and possible issues (and how to solve them). This is yet another instance of the too-broad and misleading "-4 error message" [1]. Abusing the mempool to reject transactions created by the wallet is the wrong way to go. If there is an issue with how the wallet creates transactions, that should be tackled instead. However, the wallet already prefers confirmed inputs. If there are no confirmed inputs but only unconfirmed change, there is nothing wrong with the transaction and the mempool should not intervene.

[1]: Error: The transaction was rejected! This might happen if some of the coins in your wallet were already spent, such as if you used a copy of wallet.dat and coins were spent in the copy but not marked as spent here. (code -4)

@morcos

Member

morcos commented Oct 8, 2015

@MarcoFalke Just to reemphasize this point. This is not only about resources. Very powerful nodes can still be attacked if they have a mempool with some size limit.

I do agree that better wallet UI is needed, but that problem is not introduced by this pull.

If we don't make the default small enough to be safe, then most people will have to configure their own policy. I suspect that will lead to a great many nodes limiting chains to much smaller counts.

@matthieu

matthieu commented Oct 8, 2015

@morcos Given this isn't a consensus change, any issue with lowering it gradually (over a few releases) rather than cranking it down all of a sudden? Try with 50 ancestors and 100 descendants for a bit and then go lower if no real difficulty has been found?

@dthorpe

dthorpe commented Oct 8, 2015

+1 to Taariq's use case.

I also have a transaction system implemented which involves multi-step transactions which spend from UTXOs that are known to us but may yet be unconfirmed by the blockchain. Completely independent of taariq's use case, but similar in that chains of txs are created and submitted to the mempool together without waiting for sequential block confirmation of each tx in the chain.

Due to signature constraints, our multiple steps cannot be coalesced into a single transaction for publication. (Coalescing could be possible with improvements to sigHash capabilities) Therefore, we end up with multiple transactions in a chain which are all submitted to the peer network together.

In our current implementation, these chains can be 3 to 4 levels deep per "user transaction". However, if the blockchain is taking 20 minutes to produce the next block, a user initiating multiple "user transactions" (buy x, then buy y, then buy z) in that 20 minutes could build on that unspent chain. The additional operations are not strictly linear - subsequent operations would spend the change outputs of the previous operation's root transaction, forming a tree of chained txs. A set of 3 operations would form a tree with a max depth of 7 and a total node count of 12.

The "sendmany" RPC function is of no use to me because it only works within the limits of the reference implementation wallet - single signature outputs. Our system is built on multisignature wallets. We run off-the-shelf bitcoind full nodes, but we never use the wallet or address functions.
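The tree arithmetic above can be sketched under one reading of the description (all names hypothetical): each "user transaction" is a linear chain of 4 mempool txs, and each subsequent operation's root spends a change output of the previous operation's root. That reading yields 12 txs with a maximum depth of 6; the quoted depth of 7 depends on exactly how the chains link and how levels are counted.

```python
# Hypothetical model of the chained-tx tree described above (identifiers invented).
# Assumption: 3 operations, each a linear chain of 4 txs, with each operation's
# root spending a change output of the previous operation's root.

def build_tree(num_ops, chain_len):
    """Return {tx_id: parent_id} for the tree of chained unconfirmed txs."""
    parents = {}
    prev_root = None
    for op in range(num_ops):
        root = (op, 0)
        parents[root] = prev_root          # spends previous op's root change
        for step in range(1, chain_len):
            parents[(op, step)] = (op, step - 1)
        prev_root = root
    return parents

def depth(tx, parents):
    """Number of txs on the path from tx up to an already-confirmed output."""
    d = 1
    while parents[tx] is not None:
        tx = parents[tx]
        d += 1
    return d

tree = build_tree(num_ops=3, chain_len=4)
print(len(tree))                            # 12 txs in the mempool at once
print(max(depth(t, tree) for t in tree))    # 6 levels deep under this model
```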

Contributor

petertodd commented Oct 8, 2015

@dthorpe Can you explain more what that application actually is? Are you sure you're not vulnerable to malleability?

Contributor

petertodd commented Oct 8, 2015

Note that for the case where you can't use 'sendmany' because you need to send funds immediately, you could instead use replace-by-fee to rewrite the txout set, splitting the change txout into the new payee and change. This style of payment uses significantly less blockchain space than chained transactions.
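The replace-by-fee pattern described here can be sketched as follows (a minimal illustration with invented data structures and amounts, not real transaction construction): rather than spending your own unconfirmed change in a child tx, you broadcast a replacement whose change output has been split into the new payment plus the remaining change.

```python
# Sketch of petertodd's suggestion: one replaceable tx is rewritten to add a
# payee, instead of chaining a second tx off the unconfirmed change output.
from dataclasses import dataclass

@dataclass(frozen=True)
class TxOut:
    address: str
    value: int  # satoshis

def rbf_add_payment(outputs, change_addr, payee, amount, extra_fee):
    """Return the replacement tx's outputs: the change output shrinks to fund
    the new payee plus the fee bump required for the replacement to relay."""
    new_outputs = []
    for o in outputs:
        if o.address == change_addr:
            remaining = o.value - amount - extra_fee
            assert remaining >= 0, "change cannot cover payment plus fee bump"
            new_outputs.append(TxOut(payee, amount))
            new_outputs.append(TxOut(change_addr, remaining))
        else:
            new_outputs.append(o)
    return new_outputs

orig = [TxOut("payee1", 50_000), TxOut("change", 940_000)]
repl = rbf_add_payment(orig, "change", "payee2", 30_000, extra_fee=2_000)
# A single tx now pays both payees; no chain of unconfirmed spends is created.
```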

dthorpe commented Oct 8, 2015

@petertodd I'd be happy to discuss in more detail offline. first.last@gmail.com

Member

morcos commented Oct 8, 2015

@matthieu I agree it would be good to give people time to adjust their applications to a lower limit, but that will happen naturally as more and more of the network upgrades to 0.12. But yes, if there is too much resistance to 25/25, we might have to start with higher limits. I think node performance should be reasonable with 50/100 but I worry about attacks. Maybe there is some middle ground where some nodes run with higher limits and some lower. Highly chained txs would only be relayed through the nodes with higher limits, but in the event of an attack the lower-limit nodes would keep relay open for the vast majority of txs.

Contributor

petertodd commented Oct 8, 2015

@dthorpe I can't take on any more consulting for the next few weeks, but you might find this link post about replace-by-fee use-cases useful: http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07813.html

Contributor

petertodd commented Oct 8, 2015

@dthorpe I've got an opt-in full-RBF patch ready BTW, and will propose it as a pull-req once this mempool stuff settles down. (hopefully in time for v0.12!) Code is reasonably well tested, as f2pool uses (most of) it.

dthorpe commented Oct 8, 2015

@petertodd I've been following the CPFP and RBF discussions with interest. Not so much to entirely replace the tx chains we have, but to reduce the number of outputs necessary to pay network fees across the chain, and improve the all-or-none granularity of the tx set for inclusion in a block.

I gave a slightly more specific example of my use case on the bitcoin-dev list a few weeks ago: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010569.html Basically, a brokered escrow transaction involving 3 parties where the seller and the buyer never communicate directly.

Yes, I am aware of and concerned about transaction malleability. I'm counting on BIP62 to significantly reduce that risk. I believe the txs we generate are already BIP62 compliant. We also compute a signature-free hash of every transaction as an alternate txid so that we can at least recognize our txs in the blockchain even if their scripts have been tweaked in transit. Any issued txs downstream from that altered tx would have to be redone, of course, which is a pain in the neck and extremely inconvenient to all involved.

Contributor

pstratem commented Oct 8, 2015

This limit should be much lower than 25.

Indeed the only reason to relay chains of unconfirmed transactions is CPFP.
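The child-pays-for-parent (CPFP) idea referenced here can be sketched with a small feerate calculation (all numbers illustrative): a miner evaluating a low-fee parent together with its high-fee child considers the package feerate, not the parent's feerate alone, so the child's fee can "pull" the parent into a block.

```python
# Illustrative CPFP package-feerate arithmetic; sizes in bytes, fees in satoshis.
def feerate(fee, size):
    return fee / size  # satoshis per byte

parent_fee, parent_size = 500, 250     # too cheap to be mined on its own
child_fee, child_size = 5_000, 200     # child pays enough for both

print(round(feerate(parent_fee, parent_size), 2))                     # 2.0
print(round(feerate(parent_fee + child_fee,
                    parent_size + child_size), 2))                    # 12.22
```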

Contributor

dexX7 commented Oct 8, 2015

Indeed the only reason to relay chains of unconfirmed transactions is CPFP.

I assume this statement is based on the assumption that there are unspent confirmed outputs available, in case there is a need to send many transactions in a short timeframe? If so, it seems like a trade-off, and it may result in situations where a lot of unspent outputs are prepared just to have some available.

I can imagine that the reduction of the package/chain size might be useful, but I'm wondering if there is any indication that it really solves, or would have mitigated, the previous/ongoing/potential mempool bloat [in the future]?

dthorpe commented Oct 8, 2015

Indeed the only reason to relay chains of unconfirmed transactions is CPFP.

Certainly, CPFP would provide great benefits for asynchronous brokered exchange scenarios (A => B => C as a pseudo "atomic" set of transactions), but until CPFP is actually deployed and usable (and/or sigHash allows merging of independent fully signed txs), we're stuck with more primitive techniques such as zero-conf spending chains.

And even if/when CPFP is available for general use, why would that indicate that the tx chain limit should be much lower than 25? If anything, availability of CPFP would likely encourage use of such pseudo atomic transaction sets.

Frankly, I feel support for some form of merging signed transactions into an atomic blob would be a better solution for my use case than CPFP, providing true atomicity by coalescing all actors into one bitcoin transaction. Until something along those lines can be sorted out, we work with what is available (and complain loudly at every opportunity! :> )

Member

morcos commented Oct 14, 2015

I've updated the size limits to 101kB.

I understand there is still some objection to 25 as a count limit, but I'd like to err on the side of what will make the network usable for the vast majority of participants. Even 25 is a compromise in this sense, because until we have better mechanisms for dealing with large chains of transactions, abuse is still possible within the 25 limit.
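The ancestor-side check this PR tightens can be sketched as follows. This is a simplified illustration using the limits from the discussion (25 txs, 101kB); the actual Bitcoin Core logic lives in the mempool code and also enforces the mirror-image descendant limits, so function and reject-reason names here are only indicative.

```python
# Simplified sketch of the ancestor-package policy check (not Bitcoin Core code).
LIMIT_ANCESTOR_COUNT = 25       # txs, including the new tx itself
LIMIT_ANCESTOR_SIZE = 101_000   # bytes, including the new tx itself

def accept_to_mempool(tx_size, ancestors):
    """ancestors: list of (txid, size) for every unconfirmed ancestor.
    Returns (accepted, reject_reason)."""
    count = len(ancestors) + 1                     # the new tx counts itself
    total_size = tx_size + sum(size for _, size in ancestors)
    if count > LIMIT_ANCESTOR_COUNT:
        return False, "too-long-mempool-chain"
    if total_size > LIMIT_ANCESTOR_SIZE:
        return False, "exceeds-ancestor-size-limit"
    return True, ""

ok, _ = accept_to_mempool(300, [("tx%d" % i, 300) for i in range(24)])
print(ok)   # True: 25 txs total, exactly at the count limit
ok, why = accept_to_mempool(300, [("tx%d" % i, 300) for i in range(25)])
print(ok, why)  # False too-long-mempool-chain
```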

Member

sdaftuar commented Oct 14, 2015

ACK. It doesn't yet seem like there are any common use cases anyone has brought up that would be hindered by these limits. Agree with @MarcoFalke about updating the release notes but I don't think that should hold up this pull (there's much more to explain about mempool changes than just this particular set of limits).

Contributor

TheBlueMatt commented Oct 14, 2015

ACK. I don't like doing this because it does make some things more annoying for developers, but, honestly, as long as they retransmit transactions as their chains are confirmed, 25 txn/block isn't so bad.

On October 14, 2015 11:33:31 AM PDT, Suhas Daftuar notifications@github.com wrote:

ACK. It doesn't yet seem like there are any common use cases anyone
has brought up that would be hindered by these limits. Agree with
@MarcoFalke about updating the release notes but I don't think that
should hold up this pull (there's much more to explain about mempool
changes than just this particular set of limits).


Reply to this email directly or view it on GitHub:
#6771 (comment)

Contributor

petertodd commented Oct 23, 2015

utACK

#6871 gives high-volume applications a good way to avoid long chains.

Contributor

dcousens commented Oct 26, 2015

concept ACK

Member

MarcoFalke commented Oct 26, 2015

Needs rebase due to #6722.

"for the particular purpose of attacking the entire Bitcoin system"

Is this attack still possible after #6722 got merged?

Contributor

TheBlueMatt commented Oct 27, 2015

@MarcoFalke this pull was primarily done to make #6722 have stricter CPU-usage bounds to prevent DoS, so, yes, this is still something we want after #6722.

Member

sipa commented Oct 28, 2015

Needs rebase.

Lower default policy limits
Reduce the default limits on maximum number of transactions and the cumulative size of those transactions in both ancestor and descendant packages to 25 txs and 101kb total size.
Member

morcos commented Oct 28, 2015

sorry, rebased

@muneeb-ali muneeb-ali referenced this pull request in blockstack/blockstack-core Oct 28, 2015

Closed

Unable to lookup justinguy.id that was registered via Onename.com #164

Member

sdaftuar commented Nov 5, 2015

I've generated a lot of historical data on blocks to look at the length of chains we see within blocks. For each block I looked at, I calculated for every transaction the number of in-block ancestors and in-block descendants, along with size of those ancestors and size of those descendants. Detailed data is available here:

https://www.dropbox.com/s/kkh0g4kl9qwt1cx/alldata2.gz?dl=0

Top line of the file is a header identifying each column. (The blocks are not in any strict ordering, it's based on the order they're written to my blk*.dat files in the data dir on the machine I ran on, and this includes all blocks including ones that may have been reorged out.)
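The per-transaction in-block ancestor counting described here can be sketched as follows (a minimal illustration with an invented block representation, not the actual analysis code): for each tx, walk its input references and count only ancestors that appear in the same block.

```python
# Sketch of counting in-block ancestors per transaction.
def in_block_ancestors(block):
    """block: dict mapping txid -> list of input txids.
    Returns txid -> number of ancestors found within the same block."""
    def collect(txid, seen):
        for parent in block.get(txid, []):
            if parent in block and parent not in seen:
                seen.add(parent)
                collect(parent, seen)   # recurse up the in-block chain
        return seen
    return {txid: len(collect(txid, set())) for txid in block}

# "x" was confirmed in an earlier block, so it is not counted for "c".
block = {"a": [], "b": ["a"], "c": ["b", "x"]}
print(in_block_ancestors(block))   # {'a': 0, 'b': 1, 'c': 2}
```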

@gmaxwell gmaxwell added this to the 0.12.0 milestone Nov 5, 2015

ABISprotocol Nov 6, 2015

NACK. Please see running commentary in this thread for details as to why, for example thoughts contributed from @matthieu and @taariq. I'd like to see, in particular, BlockCypher's concerns addressed (having this PR changed accordingly) or having this PR closed (not being merged).

Member

MarcoFalke commented Nov 6, 2015

stricter CPU-usage bounds

Is there an estimation that gives the CPU-usage as a function of the limits? Is it non-linear?

jgarzik added a commit to jgarzik/bitcoin that referenced this pull request Nov 12, 2015

Member

laanwj commented Nov 12, 2015

utACK

@jgarzik jgarzik merged commit 971a4e6 into bitcoin:master Nov 12, 2015

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed
Member

gmaxwell commented Nov 12, 2015

@dthorpe Didn't see your above comment until now "I'm counting on BIP62 to significantly reduce that risk." BIP62 cannot remove malleability from multisignature transactions; and I believe it is unlikely that BIP62 will ever be deployed on the Bitcoin network due to its invasiveness and necessary incompleteness. (Well, pedantically, BIP62 as is will not be deployed as it's outdated; a revised version could be, but I'm doubtful of it due to the limitations in the approach). Though I'm hopeful about more complete things being done in the future, they wouldn't be BIP62. I bring it up not to invite a debate over it, but just to level set -- you shouldn't be expecting anything to just happen there.

dthorpe commented Nov 16, 2015

@gmaxwell understood, thanks.
