Limit mempool by throwing away the cheapest txn and setting min relay fee to it #6722

Merged
laanwj merged 13 commits into bitcoin:master from TheBlueMatt:mempoollimit on Oct 21, 2015

Conversation

@TheBlueMatt
Contributor

TheBlueMatt commented Sep 25, 2015

Tests forthcoming, but I felt bad I still hadn't pushed this.
See the commit message for more details.

@jonasschnelli


src/txmempool.h
+ mutable int64_t lastRollingFeeUpdate;
+ mutable bool blockSinceLastRollingFeeBump;
+ mutable double rollingMinimumFeeRate; //! minimum fee to get into the pool, decreases exponentially
+ static const double ROLLING_FEE_HALFLIFE = 60 * 60 * 24;


jonasschnelli Sep 25, 2015

Member

travis complains about a missing mutable bool blockSinceLastRollingFeeUpdate; here.


@JeremyRubin


Contributor

JeremyRubin commented Sep 25, 2015

One thing that I think is maybe not great about the behavior of this, is let's say we have:

TXs:
A, Fee 10, Size 1
B, Fee 10, Size 1
C, Fee 21, Size 2

If A and B are the min in the set, submitting C should kick them out. Now, let's say B wanted to increase their fee; they would need to go above 21 to get in. As implemented, it doesn't seem that two txs could each raise their fee by 1 to, combined, provide more fee (because txs seem to get added one at a time?)

Perhaps a better compromise between these two behaviors would be to have a two-part mempool, the inclusion set and the to-be-ousted set, and trigger a "GC" with some frequency. The to-be-ousted set can be RBF'd or something.

Lastly, as justification for who might take advantage of such a behavior: a major exchange with a bunch of settlements out at once would want to make sure they all go through expediently and could coordinate increasing them all a hair.

@JeremyRubin

Contributor

JeremyRubin commented Sep 25, 2015

I think that my earlier comment is not fully needed, because the mempool is currently a large multiple of the block size. Perhaps a more future-proof implementation would allow setting:

  • an optional hard memory cap
  • a (potentially) dynamic size which is a large multiple of the current block size
@morcos


src/txmempool.cpp
+
+ if (expsize <= sizelimit) {
+ BOOST_FOREACH(const txiter& it, stage)
+ removeUnchecked(it);


morcos Sep 25, 2015

Member

You can't call this by itself anymore. Use removeStaged


@morcos


src/txmempool.cpp
+ trackRemovedOrAddFailed(bestFeeRateRemoved);
+ return true;
+ } else {
+ trackRemovedOrAddFailed(CFeeRate(toadd.GetFee(), toadd.GetTxSize()));


morcos Sep 25, 2015

Member

It doesn't make sense to bump the rolling fee for a tx that didn't get in. A very high fee tx might not make it in if there are large packages or transactions (even of low fee rate) at the bottom of the mempool. That's a problem in and of itself for the tx that doesn't get in, but it's even worse if you make that the new minimum relay rate.


@TheBlueMatt

TheBlueMatt Sep 25, 2015

Contributor

Hmm? No a very high fee tx will always evict transactions with lower feerate even if it ends up evicting a very large package to do so.


@morcos


src/txmempool.cpp
+
+ int64_t time = GetTime();
+ if (time > lastRollingFeeUpdate + 10) {
+ rollingMinimumFeeRate = rollingMinimumFeeRate / pow(2.0, (time - lastRollingFeeUpdate) / ROLLING_FEE_HALFLIFE);


morcos Sep 25, 2015

Member

I'd be concerned about the tradeoff here between the one-time cost to stuff the mempool full of very high fee txs, and the length of time that stuffing causes the min relay rate to remain high. Especially with a 100MB mempool, that's only about 30MB of txs. So for example at a 100k sat/kB fee rate, for 30 BTC you can knock the min relay fee up to 100k satoshis per kB and the effect lasts for some time.


@TheBlueMatt

TheBlueMatt Sep 25, 2015

Contributor

Sure, the ROLLING_FEE_HALFLIFE could be dropped a lot. I had originally figured it based on decreasing the mempool right away, but since it now waits at least for one block before it lets the min feerate drop, I think it probably could be dropped a lot. Maybe we don't even want an exponential decrease either.


@TheBlueMatt


Contributor

TheBlueMatt commented Sep 25, 2015

@JeremyRubin No, you're right, this breaks relaying of child-pays-for-parent when the mempool grows large (assuming the package is not already present). The easy solution is to allow fee calculation of packages together when processing orphans, and then you send your package in reverse-dependency order.

@morcos


Member

morcos commented Sep 25, 2015

@TheBlueMatt re: my comment on high fee txs. I see now, you aren't doing the overall fee check in order to boot a package. I just assumed the StageTrimToSize logic was the same. So how do you think about free relay then? Could you write up a quick intro describing the algorithm, as it would help to know how you think about it. Is the idea that even though the tx causing the eviction hasn't covered the fees to pay for the evicted package's relay, by boosting the minRelayRate you're essentially forcing all future transactions to do so?

It's an interesting idea, one question is how big a sweet spot there is between having the half-life too long and worrying about the "cram relayFee high all of a sudden" attack vs having it too low and perhaps having some vague concern about free relay.

Why does your increased relay fee only apply to low priority transactions? I think it has to apply to all.

@TheBlueMatt


Contributor

TheBlueMatt commented Sep 26, 2015

@morcos see the description of the main commit:
"This limits mempool by walking the lowest-feerate txn in mempool
when it goes over -maxmempool in size, removing them.
It then sets the minimum relay fee to the maximum fee
transaction-and-dependant-set it removed, plus the default minimum
relay fee. After the next block is received, the minimum relay fee
is allowed to decrease exponentially (with a half-life of one day).

The minimum -maxmempool size is 10*-limitdescendantsize, as it is
easy for an attacker to play games with the cheapest
-limitdescendantsize transactions.

Note that this effectively disables high-priority transaction relay
iff the mempool becomes large."

As for your specific questions: Yes, the idea is that you can relay some cheap crap for a bit, driving up the min relay fee by the default min relay fee each time (which was always meant as a "this is what it costs to send a transaction around the network" constant, though it hasn't always done a good job of being accurate there).

The increased relay fee will effectively apply to low priority transactions, as they will be the package selected by the final TrimToSize call. Thus, priority-based relay will effectively remain enabled until people's mempools fill up.
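
As a rough illustration of the behaviour that commit message describes, here is a minimal, self-contained model (illustrative only: the names and structure are simplified, this is not the PR's actual CTxMemPool code, and the round-down-to-zero floor discussed further below is omitted):

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Minimal model of the rolling minimum fee: evicting a package bumps the
// floor to that package's feerate plus the default relay fee, and the floor
// decays exponentially once a block has been seen since the last bump.
struct RollingFeeModel {
    double minFeePerK = 0;                                 // satoshis per 1000 bytes
    int64_t lastUpdate = 0;                                // unix time of last change
    bool blockSinceBump = false;
    static constexpr double HALFLIFE = 60 * 60 * 24;       // one day, per the commit message above
    static constexpr double DEFAULT_MIN_RELAY_FEE = 1000;  // sat/kB

    void OnPackageEvicted(int64_t packageFee, int64_t packageBytes, int64_t now) {
        double evictedRate = 1000.0 * packageFee / packageBytes;
        minFeePerK = std::max(minFeePerK, evictedRate + DEFAULT_MIN_RELAY_FEE);
        lastUpdate = now;
        blockSinceBump = false;
    }

    void OnNewBlock() { blockSinceBump = true; }

    double GetMinFeePerK(int64_t now) {
        if (blockSinceBump && now > lastUpdate) {
            minFeePerK /= std::pow(2.0, (now - lastUpdate) / HALFLIFE);
            lastUpdate = now;
        }
        return minFeePerK;
    }
};

int main() {
    RollingFeeModel fee;
    fee.OnPackageEvicted(50000, 10000, 0);   // evict a 5000 sat/kB package -> floor becomes 6000
    fee.OnNewBlock();
    std::printf("%.0f sat/kB after one day\n", fee.GetMinFeePerK(24 * 60 * 60));   // prints 3000
}

In this model, transactions paying less than GetMinFeePerK() would be rejected up front, which is the role the pool.GetMinFee() check plays in the AcceptToMemoryPool hunk reviewed below.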

@morcos


src/txmempool.cpp
+ while (!todo.empty()) {
+ const txiter& itnow = todo.front();
+ if (now.count(itnow))
+ continue;


morcos Sep 26, 2015

Member

need to pop_front() before continuing, otherwise it's an infinite loop


@TheBlueMatt

TheBlueMatt Sep 26, 2015

Contributor

LOL, oops...


@morcos


Member

morcos commented Sep 26, 2015

But in particular the increased relay fee does NOT apply to high priority txs? That's what I don't understand. It seems you could use the same stable of high priority inputs over and over to gain free relay.

@TheBlueMatt


Contributor

TheBlueMatt commented Sep 26, 2015

Hmm, indeed, there is an attack there where you can cause lots of relay for free. You can't really get much into the mempool (only up to the max package size) and you do have to increase the feerate each time, but only by one satoshi per kB...

@morcos


src/txmempool.cpp
+ break;
+ }
+ txiter rootit = mapTx.project<0>(it.base());
+ rootit--;


morcos Sep 26, 2015

Member

this is a bug. rootit is an iterator by txid hash, so decrementing it puts you at a completely random transaction.
the base iterator needs to be decremented before projecting.

@sdaftuar and i didn't like this oddness, so the first commit in #6557 reverses the feerate sort. there was no reason to do it the other way in the first place. maybe you should just grab that?


@morcos


src/init.cpp
+ int64_t nMempoolSizeLimit = GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000;
+ int64_t nMempoolDescendantSizeLimit = GetArg("-limitdescendantsize", DEFAULT_DESCENDANT_SIZE_LIMIT) * 1000;
+ if (nMempoolSizeLimit < 0 || nMempoolSizeLimit < nMempoolDescendantSizeLimit * 10)
+ return InitError(strprintf(_("Error: -maxmempool must be at least %d MB"), GetArg("-limitdescendantsize", DEFAULT_DESCENDANT_SIZE_LIMIT) / 100));


morcos Sep 26, 2015

Member

Keep in mind this is a ratio of 2 different measurements. Serialized transaction size for descendant limit and mempool memory usage for maxmempool. There is about a 3x ratio between those measurements. So a 25MB mempool would actually only fit about 3 maximum sized packages... (I used 4x as a conservative ratio, and similarly wanted a 10x difference so ended up with 40x between the arguments.)
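
Spelling out that arithmetic with the numbers in this thread: the -limitdescendantsize default is 2,500 kB of serialized transactions, and at the assumed ~4x memory-usage-to-serialized-size ratio one maximum-sized package occupies roughly 10 MB of mempool memory; ten packages of headroom then gives 2,500 kB × 4 × 10 = 100 MB, i.e. the 40x gap between the two arguments.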


@TheBlueMatt

TheBlueMatt Sep 26, 2015

Contributor

Oops, yeah, my notes from earlier said to fix this by doing something like 100MB, for this reason... That's the last time I ignore my notes and just do what I think when I'm sick :/


@NanoAkron


NanoAkron Sep 26, 2015

What's wrong with XT's method of discarding a random transaction so that you can't predictably manipulate the mempool?


@TheBlueMatt


Contributor

TheBlueMatt commented Sep 26, 2015

@NanoAkron It makes it trivial to DoS the network, among many other issues.

@morcos


src/txmempool.cpp
+
+ if (expsize <= sizelimit) {
+ RemoveStaged(stage);
+ trackRemovedOrAddFailed(bestFeeRateRemoved);


morcos Sep 30, 2015

Member

These functions will be called every time through even if the mempool wasn't full to start with


@sdaftuar


src/main.cpp
- // Require that free transactions have sufficient priority to be mined in the next block.
- if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
+ CAmount mempoolRejectFee = pool.GetMinFee().GetFee(nSize);
+ if (mempoolRejectFee > 0 && nFees < ::minRelayTxFee.GetFee(nSize) + mempoolRejectFee) {


sdaftuar Sep 30, 2015

Member

With the way the half-life calculation works, I believe it would take a very long time before mempoolRejectFee will reach 0 again, after an eviction; this in turn would cause us to wait a really long time before being willing to relay low-fee transactions that have high priority. Perhaps the mempool could round the min fee it returns down to 0 at some point so that this doesn't take forever, or we can adjust the way we use it here to allow for the priority calculation to kick in even if the mempoolRejectFee isn't exactly 0?
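
To put a rough number on "a very long time" (just restating the concern with figures already in this thread): exponential decay from a rate R down to a floor F takes halflife × log2(R/F), so with the one-day half-life in the original commit, coming down from 100k sat/kB to a 1,000 sat/kB floor takes about 24 h × log2(100) ≈ 160 hours, and without a rounding rule the rate never strictly reaches 0.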


@sdaftuar


src/main.h
@@ -51,6 +51,8 @@ static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
static const unsigned int DEFAULT_DESCENDANT_LIMIT = 1000;
/** Default for -limitdescendantsize, maximum kilobytes of in-mempool descendants */
static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 2500;
+/** Default for -maxmempool, maximum megabytes of mempool memory usage */
+static const unsigned int DEFAULT_MAX_MEMPOOL_SIZE = 100;


sdaftuar Sep 30, 2015

Member

I think it'd be better to make this default value as large as we think users can reasonably live with. 100MB of memory is only about 30MB of actual transactions, or 30 full blocks. It seems to me like all the attacks someone could do on a limited mempool involve trying to play games with the effects of eviction, so having a bigger default mempool just causes all attacks to scale up in cost to carry out, because an attacker has to generate more transactions just to trigger eviction.

#6557 has a 500MB default; if we're concerned that may be too big, how about 250 or 300MB?


@morcos


Member

morcos commented Sep 30, 2015

I think the ROLLING_FEE_HALFLIFE should be 12 hours. Here's my analysis:
The purpose of the rollingMinimumFeeRate is to strike the right balance between two things.

  • Future transactions should be obligated to pay for the cost of transactions that were evicted (and their own relay fee) otherwise a large package of transactions could be evicted by a small tx with a slightly higher fee rate. This could happen repeatedly for a bandwidth attack.
  • It must decay so an attacker can not pack the mempool full of high fee txs one time and peg the effective min relay rate very high for a long time for the cost of stuffing the mempool once.

From the point of view of the bandwidth attack:
Assume the prevailing fee rate at the bottom of the mempool is X times the relay rate. Then a full size 2.5MB package can be evicted from there by paying X+1 on a small 200 byte tx. Effectively you have now paid the minimum relay fee on (200X + 200) bytes, but have relayed 2.5MB + 200 bytes, so you got free relay of 2.5MB - X * 200 bytes.

As soon as the rollingMinimumFeeRate has dropped from X back down to X-1, you can repeat the attack. At a half-life of 12 hours and assuming X = 20, then it'll take about 53 mins for that to happen. So you can free relay 47 kB per min. This seems sufficiently small compared to the bare minimum network relay capacity of 100 kB per min (1 block every 10 mins).

Since the decay is exponential, you'll actually take a lot longer than 53 mins to repeat the attack if the prevailing fee rate multiple X is considerably less than 20. However, as the prevailing fee rate climbs the attack could be considered a bigger concern. This should be addressed by having a default minimum relay rate that is higher. It seems reasonable that over the long term the default minimum relay rate will not be much less than 1/20th of the prevailing fee rate at the bottom of mempools.

From the point of view of stuffing the mempool:
If we imagine a 100MB mempool, then filling it with 30MB of transactions (sizewise = 100MB of usage) at a 100K sat/KB fee rate will cost 30 BTC.

In this case access to the network will be blocked for all txs less than 100k feerate for 5 hours while those transactions are mined anyway. The additional gain the rollingMinimumFeeRate gives an attacker is another 7 hours until the decay has brought down the feerate to 50K.

Since the attacker could have stopped anything under 50K feerate anyway for 10 hours by just issuing 60MB worth of transactions at that fee rate, this attack is not significantly worse.

So I think 12 hours strikes about the right balance.
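
Restating the arithmetic behind the 53-minute figure (no new analysis): the time for the rolling rate to fall from X to X-1 times the relay rate is t = halflife × log2(X/(X-1)) = 12 h × log2(20/19) ≈ 0.89 h ≈ 53 min, and each repetition gets roughly 2.5 MB - X * 200 bytes ≈ 2.5 MB of free relay, i.e. about 2.5 MB / 53 min ≈ 47 kB per minute.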

@sdaftuar


src/txmempool.cpp
+ BOOST_FOREACH(const CTxIn& in, toadd.GetTx().vin)
+ protect.insert(in.prevout.hash);
+
+ size_t expsize = DynamicMemoryUsage() + toadd.DynamicMemoryUsage(); // Track the expected resulting memory usage of the mempool.


sdaftuar Sep 30, 2015

Member

I haven't thought about how much this is likely to matter but I don't think this is the best way to guess the expected size of the resulting mempool -- it misses the extra overhead from mapLinks, mapNextTx, and the multi_index pointers itself.

I think this code here is almost correct:
https://github.com/sdaftuar/bitcoin/blob/7008233767bd5e03521d96cde414394975e940d7/src/txmempool.cpp#L797

[There is an error though; the value of "9" that is used in the multi_index memory estimator should actually be a "6" I think in both DynamicMemoryUsage and GuessDynamicMemoryUsage.]


@sdaftuar


src/txmempool.cpp
+ setEntries stage;
+ std::set<uint256> protect;
+ BOOST_FOREACH(const CTxIn& in, toadd.GetTx().vin)
+ protect.insert(in.prevout.hash);


sdaftuar Sep 30, 2015

Member

If you change TrimToSize() to take as an argument the ancestors of the entry being considered (which is calculated earlier in AcceptToMemoryPool()), then you can get rid of protect, and instead just check that each package root that you consider isn't an ancestor of the entry being added. (This is what I did in #6557 and I think it helps make the code a lot simpler, especially combined with using CalculateDescendants() to grab all the descendants instead of writing a new loop here.)


@jgarzik


Contributor

jgarzik commented Sep 30, 2015

concept ACK - prefer 24-48 hours - will do some testing

@sdaftuar


Member

sdaftuar commented Oct 1, 2015

Reorgs should probably be handled differently -- I don't think it makes sense for eviction to take place when calling AcceptToMemoryPool() from DisconnectBlock(); instead perhaps we can just let the mempool grow during a reorg and trim it down to size at the end?

@TheBlueMatt

Contributor

TheBlueMatt commented Oct 1, 2015

Addressed a few nits...Things left to do:

  • Figure out the decay constant/drop min fee to 0 when it gets near 0 (if we decide to push forward with this tomorrow, we should discuss this value)
  • Steal code from #6557 to use CalculateDescendants to make the TrimToSize code simpler
  • Write some basic sanity-check test cases
@TheBlueMatt


Contributor

TheBlueMatt commented Oct 2, 2015

Removed more than a third of the lines in TrimToSize and removed some other code in mempool sorting thanks to some suggestions from @morcos and @sdaftuar.

@TheBlueMatt


Contributor

TheBlueMatt commented Oct 2, 2015

Halflife set to:
if mempool is < max_mempool_size / 4:
halflife = 3 hours
elif mempool < max_mempool_size / 2:
halflife = 6 hours
else
halflife = 12 hours.

When the rolling minimum fee is < minRelayTxFee (1000 satoshis per kB), it is rounded down to 0 and free relay is re-enabled.
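
Read as code, that tiering amounts to something like the following (an illustrative sketch only, not the actual txmempool.cpp change):

#include <cstddef>

// Pick the rolling-fee decay half-life from current mempool usage, per the tiers above.
double HalfLifeSeconds(std::size_t poolUsage, std::size_t maxPoolSize) {
    const double ROLLING_FEE_HALFLIFE = 60 * 60 * 12;                  // 12 hours
    if (poolUsage < maxPoolSize / 4) return ROLLING_FEE_HALFLIFE / 4;  // 3 hours
    if (poolUsage < maxPoolSize / 2) return ROLLING_FEE_HALFLIFE / 2;  // 6 hours
    return ROLLING_FEE_HALFLIFE;                                       // 12 hours
}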

@morcos


Member

morcos commented Oct 2, 2015

+1 on the half life suggestions.

@TheBlueMatt


Contributor

TheBlueMatt commented Oct 2, 2015

OK, did even better and solved an edge case (thanks again to @sdaftuar for suggestions) by just adding the new tx to the mempool first, and then calling TrimToSize blind and checking if the tx is still in mempool afterwards.

Also reverted the mempool sorting change after discussion with @morcos on IRC - though it is a win in the "optimize for maximum mempool feerate" metric, it seems better to leave it as is because it may result in a larger ending mempool.

@TheBlueMatt


Contributor

TheBlueMatt commented Oct 2, 2015

Incorporated mempool expiry from @sipa, rebased and squashed. Should be reviewable/testable, but needs test cases.

@petertodd


Contributor

petertodd commented Oct 3, 2015

It'd be good to add some design doc comments explaining the intent of this code. CTxMemPool::GetMinFee() especially is quite mysterious and full of magic constants right now, which is easier to understand when you read @sdaftuar's comments, but that's much harder to discover if you're starting from the source code.

We should also add a way to get the current minimum relay fee from the RPC interface, e.g. through getmempoolinfo.

@petertodd


Contributor

petertodd commented Oct 3, 2015

Code looks reasonable so far, though I haven't looked into it in enough detail to give a utACK just yet.

@morcos


Member

morcos commented Oct 5, 2015

@TheBlueMatt I was just talking with @sdaftuar and now we think the max is required for the sort. I know you reverted back to max, but I just wanted to memorialize that it is actually necessary. Otherwise, it might be possible to purposefully construct packages which will cause a parent to sort down and get evicted, allowing an attacker to control evicting a particular tx.

src/main.cpp
@@ -954,6 +954,17 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
// Store transaction in memory
pool.addUnchecked(hash, entry, setAncestors, !IsInitialBlockDownload());
+
+ // trim mempool and check if tx is was trimmed


@sdaftuar

sdaftuar Oct 6, 2015

Member

"is was" -> "was"


src/txmempool.cpp
+ rollingMinimumFeeRate = rollingMinimumFeeRate / pow(2.0, (time - lastRollingFeeUpdate) / halflife);
+ lastRollingFeeUpdate = time;
+
+ if (rollingMinimumFeeRate < ::minRelayTxFee.GetFeePerK())


@sdaftuar

sdaftuar Oct 6, 2015

Member

Would it perhaps be better to pass the minRelayTxFee in, so that we're not needing to access globals inside the mempool?


@sdaftuar

sdaftuar Oct 6, 2015

Member

On further thought -- would it make more sense to just move this code out of the mempool and into main.cpp, to isolate the mempool from relay policy? We could make TrimToSize() return the fee rate of the last package it removes, and then leave AcceptToMemoryPool() responsible for deciding what to do with the prevailing relay fee after eviction (including this logic for decaying things back down).


@TheBlueMatt

TheBlueMatt Oct 6, 2015

Contributor

I see GetMinFee() as a "minimum feerate this mempool reasonably accepts", not a part of your relay policy. You can tweak your relay policy by having a bigger mempool. Someone who wants to refactor all of the relay policy to be separated can do so later, but that seems far out of scope for this pull.


@paveljanik

Contributor

paveljanik commented Oct 6, 2015

In file included from wallet/wallet.cpp:24:
./txmempool.h:291:25: warning: in-class initializer for static data member of type 'const double' is a GNU extension [-Wgnu-static-float-init]
    static const double ROLLING_FEE_HALFLIFE = 60 * 60 * 12;
                        ^                      ~~~~~~~~~~~~
1 warning generated.
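
For reference, the warning is about the in-class initializer on a static const double; one standard-conforming way to avoid it (an assumption about the fix, not necessarily what this PR ended up doing) is to declare the constant constexpr:

// Hypothetical fix for the -Wgnu-static-float-init warning shown above:
// C++11 allows an in-class initializer for a constexpr floating-point member.
struct Example {
    static constexpr double ROLLING_FEE_HALFLIFE = 60 * 60 * 12;
};
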
src/txmempool.cpp
+ setEntries stage;
+ CalculateDescendants(mapTx.project<0>(it), stage);
+ RemoveStaged(stage);
+ trackPackageRemoved(CFeeRate(it->GetFeesWithDescendants(), it->GetSizeWithDescendants()));


@morcos

morcos Oct 6, 2015

Member

This seems like it has two problems. First, the descendant package information will have been updated by the removal of all the descendants in RemoveStaged. More importantly, won't the iterator be invalid once it has been erased?


@TheBlueMatt


Contributor

TheBlueMatt commented Oct 6, 2015

Comments should all be addressed.

@morcos


Member

morcos commented Oct 17, 2015

I've done some benchmarking. I ran a historical simulation over 1 million transactions between July 6th and July 14th. And I repeated this for 3 different code bases.

  • master as of 7/31 (before multindex and package tracking)
  • master as of 10/16
  • this pull

Results are below, I think you can see that the average is within measurement error. The slight increase in the median is to be expected.
All times in ms.

Txns accepted to the mempool:
                     pre-packages (master 7/31)   master 10/16   #6722 (this pull)
Average              0.786                        0.790          0.782
Median               0.268                        0.279          0.298
90th percentile      1.04                         0.91           0.93
99th percentile      8.03                         7.91           7.71
99.9th percentile    45.8                         45.7           43.1
max                  286                          286            398

I broke down the timing a little bit more and it's clear that the vast majority of the outliers come from CheckInputs and HaveInputs. For instance if you look at the 1k transactions between the 99.8th and 99.9th percentile. (EDIT: oops posted slightly wrong stats the first time, conclusion the same)

AcceptToMemoryPool time in ms
Total 41.4
CheckInputs 32.5
HaveInputs 7.1
remaining 1.8

Although I did see 3 (out of 1M) transactions where the addUnchecked call took > 50ms in this pull.

These measurements were made with libsecp256k1 merged and default dbcache size.
6722 used a 100 MB mempool limit.

@sipa

Member

sipa commented Oct 17, 2015

@rubensayshi


Contributor

rubensayshi commented Oct 19, 2015

utACK

@sipa


Member

sipa commented Oct 19, 2015

ACK.

if (fLimitFree && nFees < txMinFee)
return state.DoS(0, false, REJECT_INSUFFICIENTFEE, "insufficient fee", false,
strprintf("%d < %d", nFees, txMinFee));
- // Require that free transactions have sufficient priority to be mined in the next block.
- if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
+ CAmount mempoolRejectFee = pool.GetMinFee(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000).GetFee(nSize);


@jtimon

jtimon Oct 20, 2015

Member

I would prefer that this new GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) global was initialized only in one place: init.cpp


@sipa

sipa via email Oct 20, 2015

Member


@TheBlueMatt

TheBlueMatt Oct 20, 2015

Contributor

Seems like an easy thing to do in a separate PR.


+
+ // trim mempool and check if tx was trimmed
+ if (!fOverrideMempoolLimit) {
+ int expired = pool.Expire(GetTime() - GetArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY) * 60 * 60);


@jtimon

jtimon Oct 20, 2015

Member

GetArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY) could also be initialized in init.cpp


+/** Default for -maxmempool, maximum megabytes of mempool memory usage */
+static const unsigned int DEFAULT_MAX_MEMPOOL_SIZE = 300;
+/** Default for -mempoolexpiry, expiration time for mempool transactions in hours */
+static const unsigned int DEFAULT_MEMPOOL_EXPIRY = 72;


@jtimon

jtimon Oct 20, 2015

Member

Maybe these belong in txmempool.h?


@sipa

sipa via email Oct 20, 2015

Member


@jtimon

jtimon Oct 20, 2015

Member

The logic is in txmempool already, these are just default values for a couple of new policy globals.


@sipa

sipa via email Oct 20, 2015

Member


@jtimon

jtimon Oct 20, 2015

Member

Well, my general approach is not moving more policy to main. I had encapsulated policy in txmempool (including all the uses of the global minRelayTxFee), decoupled policy/fee from txmempool (done in master thanks to @morcos), and also decoupled txmempool from policy/fee once. Of course it doesn't make sense to redo that work while waiting for one of these mempool-limit PRs to be merged, since they are clearly going to break that effort again.

What about moving them to policy/policy.h instead of putting them in main?



@TheBlueMatt

TheBlueMatt Oct 20, 2015

Contributor

Can we do this in a separate PR? It seems a bit late to be bikeshedding on where to put constants.


sipa referenced this pull request Oct 20, 2015

Closed

Drop minRelayTxFee to 1000 #6860

laanwj added the Mempool label Oct 20, 2015

@ABISprotocol


ABISprotocol Oct 20, 2015

Since #6557 was closed (in favor of opening this pull request, according to @sdaftuar from a comment in #6557) I am again coming back to evaluate whether this new pull request does the following:

a) addresses my concerns as expressed in comments here and here in #6201 ~ these comments related to the ability of people in the developing world to access bitcoin and the problems inherent with taking a bitcoin-development approach that would not include the bulk of people in the developing and underdeveloped world. (See full comments for details.)

b) Has code that effectively affirms the principle that "mempool limiting and dynamic fee determination are superior to a static parameter change"

c) Incorporates a floating relay fee, such as this sipa@6498673 or its equivalent.

Reasoning for this request:
See history on
#6201
#6455
#6470
#6557
(history of closed pull requests leading to this one (#6722) shown above)


@TheBlueMatt


Contributor

TheBlueMatt commented Oct 20, 2015

@ABISprotocol I'm not actually sure if you're asking a question or what, but I think the answer is "yes". Yes, it does the things in (b) and (c), I guess? As for what you're talking about in (a), I have no idea. Please make specific criticisms.

@ABISprotocol


ABISprotocol Oct 20, 2015

@TheBlueMatt I have been very specific since #6201 when I raised these issues; the pull requests since then which have led to this one have in part been responsive to the issues I raised, but have been closed.

When you say "I think the answer is "yes" (...) it does the things in (b) and (c), I guess?" I would appreciate a better answer, where you cite how it does so. If you believe that it does the things in (b) and (c) please cite a commit / change as part of this pull request that would do either (b) or (c) and describe in layman's terms how it would do so.

With respect to (a), please see the comments cited in (a) for details, as suggested in my prior comment. The issues raised in my comments remain a serious and valid concern.

@morcos @laanwj


@sipa


Member

sipa commented Oct 20, 2015

@ABISprotocol This page is for discussing technical issues. Please take the philosophical considerations elsewhere.

@ABISprotocol


ABISprotocol Oct 20, 2015

@sipa I believe I have raised substantial technical issues in my past and present comments. I think it is unfair of you to attempt to diminish my participation. Instead, please let me know if the issues I have raised have been addressed, and if so, please cite a basic message as to how they have been addressed. Thank you for your consideration of my comments.


@sipa


Member

sipa commented Oct 20, 2015

(a) If you're talking about accessibility of on-chain transactions: no. We can't guarantee that every possible useful transaction will have a negligible fee for every person on earth. If that were the case, DoS attacks that interfere with everyone's ability to use it would also be negligible in cost for every person on earth. This is a philosophical question, and not something that changes in this pull request. Miners are already incentivized to choose the transactions that grant them the highest profits, and this PR merely extends that behaviour to the mempool.

(b) Yes, see (c)

(c) Yes, read the title of this pull request please.

You're very welcome to discuss these issues, but not here, as I don't think they are related to this pull request. This is about dealing with active problems on the network in line with existing behaviour.

@ABISprotocol


ABISprotocol Oct 20, 2015

@sipa Your remarks regarding (c) are dismissive, "read title of this pull request please" assumes stupidity of the commenter(s), namely myself, and is not a kind way to address my participation. I will assume someone else will better answer my concerns. @laanwj


Member

laanwj commented Oct 21, 2015

utACK

@laanwj laanwj merged commit 58254aa into bitcoin:master Oct 21, 2015

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed

laanwj added a commit that referenced this pull request Oct 21, 2015

Merge pull request #6722
58254aa Fix stale comment in CTxMemPool::TrimToSize. (Matt Corallo)
2bc5018 Fix comment formatting tabs (Matt Corallo)
8abe0f5 Undo GetMinFee-requires-extra-call-to-hit-0 (Matt Corallo)
9e93640 Drop minRelayTxFee to 1000 (Matt Corallo)
074cb15 Add reasonable test case for mempool trimming (Matt Corallo)
d355cf4 Only call TrimToSize once per reorg/blocks disconnect (Matt Corallo)
794a8ce Implement on-the-fly mempool size limitation. (Matt Corallo)
e6c7b36 Print mempool size in KB when adding txn (Matt Corallo)
241d607 Add CFeeRate += operator (Matt Corallo)
e8bcdce Track (and define) ::minRelayTxFee in CTxMemPool (Matt Corallo)
9c9b66f Fix calling mempool directly, instead of pool, in ATMP (Matt Corallo)
49b6fd5 Add Mempool Expire function to remove old transactions (Pieter Wuille)
78b82f4 Reverse the sort on the mempool's feerate index (Suhas Daftuar)
Contributor

dcousens commented Oct 21, 2015

@ABISprotocol what specific question do you want to ask?

I'll try to answer, paraphrasing you:

[does this PR affect the] ability of people in the developing world to access [to] bitcoin?

If you classify access as the ability to run a full node, then this PR, which will allow users to adjust the software to meet the capabilities of their hardware, does increase access.

b) Has code that effectively affirms the principle that "mempool limiting and dynamic fee determination are superior to a static parameter change"

If there isn't spam, why should we maintain the higher static parameter?
My understanding of this algorithm could be summarised as:

sort mempool transactions by fee in descending order, then filter/reduce the resultant collection such that when the maximum memory size is reached, drop any remaining transactions

The implementation isn't so straightforward due to complications that arise when you account for CPFP and descendants, but that is the base concept AFAIK.
@TheBlueMatt would that be correct?
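
For illustration only, here is a minimal sketch of that base concept. It ignores descendant packages, CPFP scoring and the rolling minimum fee, and the names (TxEntry, feeRate, memUsage, TrimToSizeSketch) are invented rather than the real CTxMemPool API:

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical, simplified stand-in for a mempool entry.
struct TxEntry {
    double feeRate;   // fee per size unit
    size_t memUsage;  // memory usage of the entry, in bytes
};

// Keep the highest-feerate entries; once the memory budget is reached,
// everything cheaper is dropped. Packages/descendants are ignored.
std::vector<TxEntry> TrimToSizeSketch(std::vector<TxEntry> pool, size_t maxMemUsage)
{
    // Sort by feerate, highest first.
    std::sort(pool.begin(), pool.end(),
              [](const TxEntry& a, const TxEntry& b) { return a.feeRate > b.feeRate; });

    std::vector<TxEntry> kept;
    size_t used = 0;
    for (const TxEntry& e : pool) {
        if (used + e.memUsage > maxMemUsage) break; // budget reached: drop the rest
        used += e.memUsage;
        kept.push_back(e);
    }
    return kept;
}

The real TrimToSize() is incremental, evicting the cheapest entries as new transactions arrive rather than re-sorting the whole pool, and, per the PR title, it feeds the feerate of what it evicts into the rolling minimum relay fee; that is where most of the extra complexity lives.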


@TheBlueMatt TheBlueMatt deleted the TheBlueMatt:mempoollimit branch Oct 21, 2015

ABISprotocol commented Oct 22, 2015


Hello,

By access, I didn't mean necessarily the ability to run a full node.
I was more concerned with the ability of users to make a transaction
at all (regardless of what software they might be using) and not be
squeezed out by the upward creep of fees.

See, for example, this, cited previously here in (a), which discusses
the issue in more detail:
#6201 (comment)

As stated there, "In observing this sad trend of gradual fee increases
and what I see as censorship of small transactions, in a year's time,
given what happened in 2013 with #2577 and what is now happening with
this issue here in 2015, it is entirely likely that further
transaction and fee policies will be adopted which will edge out even
those who are trying to make BTC transactions equivalent to 0.20 USD.
Sharp currency declines (in the USD, euro, other currencies) and
increases in value of BTC would create situations in which one might
need to purchase small quantities of BTC, but paradoxically such
policies as those proposed in this pull request might stymie
entry-level buyers in the marketplace. In addition, the potential for
microgiving in bitcoin is reduced by these kind of development
proposals, and microgiving is one of the most significant developments
to come to finance. It is one that cannot be adequately implemented by
legacy systems in no small part due to their burdensome fees, which up
to this point, bitcoin has not had. However, this appears to be
changing rapidly.

As a consequence, a large number of people in the developing and
underdeveloped world will be edged out by policies created by people
who create and develop this new economic system without consideration
of the voices of those who are least likely to be heard here. This
implies that the billions who potentially could have been helped by
this technology, now, will not."

This is the concern which focuses on access, and it has to do with
people being driven out as fees go up and up. Development direction
should thus ultimately orient itself towards finding a way to support
both on-chain and off-chain micro-transactions. How these are defined
is important as well, because a micro-transaction in the context of
bitcoin must be one that the network can actually support. As
@gmaxwell has pointed out,
#6201 (comment)
"It's important to be specific in what you're talking about when you
say microtransactions. In some contexts it means value transfers under
"$10" in others, under "$1" in others under "$0.01" and in yet other
under "$0.00001". There is some level under which just simply cannot
be supported: because a single attack at moderate cost could saturate
the bandwidth of a substantial portion of the network (keep in mind
Bitcoin is a broadcast system, and any system that can't keep up can't
participate)."

Note here that there are a large number of persons in the world
getting by on the equivalent of 1 to 2 USD per day if salaried. At one
time I lived abroad for several years for less than fifty USD per
month (and for a period of time lived in the USA with much less than
that). This is much of the world. These are statements of fact which
cannot be ignored and which are as relevant to the discussion as
subsidy, cost of mining, and other vital factors. The trend of upward
cost of transacting in the bitcoin network is not going to reverse if
the status quo continues, but developers do have a choice in how they
proceed right now and moving forward.

Thus the inclusion or exclusion of persons in the developing world
when it comes to the bitcoin network and access is not an issue which
can be dismissed, nor can developers suggest that these points are not
technical enough and must be discussed elsewhere, because the
substance of the pull requests mentioned in this thread directly
impacts whether or not billions of people in the developing
world (and in particular those who might only be making, at most, a
few dollars per day) will be able to transact in the bitcoin system,
or whether they will ultimately be excluded from it completely as the
use of it spreads.

So, does this PR affect the ability of people in the developing world
to access bitcoin, in that context? I would submit that this PR,
while it does provide the ability to run a full node, does not
increase access substantially within the context of the issues raised
above (see also section (a) in my original comment to this pull
request), and thus I would submit that there is more to be done. I
will not assume that off-chain solutions are the only way to address
these issues, as we see from looking at BlockCypher's on-chain
microtransaction API for values between 2,000 satoshis ($0.005) and
4,000,000 satoshis ($9.50).
http://dev.blockcypher.com/#microtransaction-api I encourage you to
look at my own project for some ideas as well: http://abis.io



nTransactionsUpdated(0)
{
+ clear();

@jonasschnelli

jonasschnelli Oct 26, 2015

Member

This change produces a crash on OS X:

jonasschnelli$ ./src/bitcoind --regtest
libc++abi.dylib: terminating with uncaught exception of type boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >: boost: mutex lock failed in pthread_mutex_lock: Invalid argument

The stacktrace goes back to cxx_global_var_initXX().
I think calling LOCK() from global variable initialization (CTxMemPool mempool(::minRelayTxFee);) through this new clear() (which locks the mempool) is problematic.
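
For what it's worth, a minimal single-file sketch of that hazard (hypothetical code, not Bitcoin Core's). Within one translation unit construction order is the declaration order, so the bad ordering is forced deliberately here; across translation units, as with the global mempool and the lock bookkeeping, the order is simply unspecified:

#include <boost/thread/recursive_mutex.hpp>

struct State;
extern State g_state;            // defined below, after g_pool

struct State {
    boost::recursive_mutex cs;   // non-trivial constructor, runs during static initialization
    void Clear() {
        boost::recursive_mutex::scoped_lock lock(cs);  // LOCK()-style critical section
        // ... reset members ...
    }
};

struct Pool {
    Pool() { g_state.Clear(); }  // constructor calls into code that takes the lock
};

Pool g_pool;                     // constructed first: locks g_state.cs before ...
State g_state;                   // ... that mutex has been constructed -> undefined behaviour

int main() { return 0; }

Locking a boost mutex whose constructor has not run yet tends to surface exactly as the "mutex lock failed in pthread_mutex_lock: Invalid argument" exception quoted above.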


Contributor

rubensayshi commented Nov 18, 2015

Is this going to be in v0.12.0? It's not in the release-notes for v0.12.0 yet.

Contributor

TheBlueMatt commented Nov 20, 2015

Yes, this should be added to the release-notes.

@NicolasDorier NicolasDorier referenced this pull request in ElementsProject/lightning Nov 27, 2015

Closed

Never produce dust outputs #14

@defuse defuse referenced this pull request in zcash/zcash Aug 16, 2016

Open

Merge upstream anti DoS patches #1251
