
Introduce -maxuploadtarget #6622

Merged: 2 commits merged into bitcoin:master from jonasschnelli:2015/09/maxuploadtarget on Oct 26, 2015
Conversation

@jonasschnelli (Member) commented Sep 2, 2015

This is the first PR in a planned series of bandwidth / DoS-prevention PRs.
The focus for now is a simple, not over-complex solution to start with.

The -maxuploadtarget (in MiB) is a configuration value that makes bitcoind try to limit its total outbound traffic. Currently there is no guarantee that the limit target will be met.

Once target-in-bytes - (time-left-in-24h-cycle) / 600 * MAX_BLOCK_SIZE is reached, the node stops serving blocks older than one week and immediately stops serving filtered (SPV) blocks.

The timeframe for the measurement is currently fixed to 24h.
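The cutoff rule above can be sketched as follows. This is a minimal sketch with illustrative names, not the exact code in src/net.cpp; it assumes the 1 MB MAX_BLOCK_SIZE in effect at the time and the fixed 600 s block interval.

```cpp
#include <cstdint>

// Sketch of the upload-target cutoff described above (hypothetical names,
// not the exact ones in src/net.cpp). A block arrives roughly every 600 s,
// so the remaining cycle time is converted into a reserved byte buffer.
static const uint64_t SKETCH_MAX_BLOCK_SIZE = 1000000; // 1 MB blocks (2015)

// Returns true once serving historical/filtered blocks should stop:
// the bytes sent so far have reached the target minus the buffer reserved
// for the new blocks expected until the end of the 24 h cycle.
bool UploadTargetReachedSketch(uint64_t targetBytes,
                               uint64_t bytesSentInCycle,
                               uint64_t secondsLeftInCycle)
{
    if (targetBytes == 0)
        return false; // 0 = no limit configured
    const uint64_t buffer = secondsLeftInCycle / 600 * SKETCH_MAX_BLOCK_SIZE;
    if (buffer >= targetBytes)
        return true;  // target is smaller than the reserved buffer itself
    return bytesSentInCycle >= targetBytes - buffer;
}
```

With the 300 MB target from the getnettotals example below and a full 24 h left in the cycle, the reserved buffer is 86400 / 600 * 1 MB = 144 MB, so serving of historical blocks stops once roughly 156 MB have been sent.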

This is an effective method of reducing traffic and might save node operators from expensive bills for exceeding their traffic quota.

Currently the limit also takes effect for -whitebind peers.

Via getnettotals one can get upload-limit statistics:

./src/bitcoin-cli getnettotals
{
  "totalbytesrecv": 0,
  "totalbytessent": 0,
  "timemillis": 1441222000173,
  "uploadtarget": {
    "timeframe": 86400,
    "target": 300000000,
    "target_reached": false,
    "serve_historical_blocks": true,
    "bytes_left_in_cycle": 300000000,
    "time_left_in_cycle": 86400
  }
}

TODO:
  • needs documentation
  • needs unit/RPC tests

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch 3 times, most recently Sep 2, 2015
@casey (Contributor) commented Sep 3, 2015

-maxuploadtarget is in MB (1,000,000 bytes), not MiB (1,048,576 bytes). This isn't currently documented, but worth noting in case it does get documented.

If a user wants to control the total amount transferred, then I don't think MB per day is a good unit to use. If they have a transfer cap, then it is most likely tied to a monthly billing cycle, so MB/month is likely better. (For example, I think Comcast in the US has a 250GB/month cap, so a Comcast customer might like to allow 100GB/month, or something like that.)

@MarcoFalke (Member) commented Sep 3, 2015

MB/month is likely better

Which could result in all of the 100GB being eaten up on the first few days of the timeframe. Maybe the owner restarts the server mid-month and then serves 200GB of historic blocks that month.

@laanwj (Member) commented Sep 3, 2015

Which could result in all of the 100GB being eaten up on the first few days of the timeframe. Maybe the owner restarts the server mid-month and then serves 200GB of historic blocks that month.

Once combined with connection throttling (which @cfields is working on), which would make the speed at which the 100GB can be 'eaten up' configurable, it's less of an issue.

@laanwj laanwj added the P2P label Sep 3, 2015
@jonasschnelli (Member, Author) commented Sep 3, 2015

-maxuploadtarget is in MB (1000000), not MiB (1048576), not currently documented, but just in case it does get documented.

@casey Thanks. Will fix it.

I'm not sure about a limit per month versus per day.
Per month can be difficult because not every month has the same number of days (28/29/30/31). It's also unclear how one could get "in sync" with the provider's monthly rhythm (the billing cycle need not run from the first to the last day of the month). Still, a monthly cap would allow a daily measurement cycle underneath (break the monthly limit down into days and keep the focus on a daily limit).

For now I think keeping the daily limit makes things easier and more controllable.
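The "break the monthly limit down into days" idea is simple arithmetic. A hypothetical helper (not part of this PR, names illustrative) might look like:

```cpp
#include <cstdint>

// Hypothetical helper, not part of this PR: break a provider's monthly
// transfer cap down into a per-day value suitable for the daily cycle.
uint64_t DailyTargetFromMonthlyCapMB(uint64_t monthlyCapMB, uint64_t daysInMonth)
{
    if (daysInMonth == 0)
        return 0; // 0 = no limit, rather than dividing by zero
    return monthlyCapMB / daysInMonth;
}
```

A 100 GB/month allowance spread over 30 days comes out to 100000 / 30 = 3333 MB/day (integer division).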

@luke-jr (Member) commented Sep 3, 2015

I need some kind of goal set in kB/second, as I am often forced to shut down my node to get reliable phone calls. :(

@ghost commented Sep 4, 2015

Agree with @luke-jr here: we think about available bandwidth in terms of kB/sec rather than MB/day because that's how it's always reported to us, in adverts from ISPs, speed-checking websites, P2P torrenting apps, etc.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch 2 times, most recently Sep 4, 2015
@jonasschnelli (Member, Author) commented Sep 4, 2015

@NanoAkron: I'm not sure @luke-jr is actually in favor of a kB/sec bandwidth limit.
What is the most annoying thing that can happen when you download something over torrent? Probably being connected to, and downloading from, a throttled peer.

Bandwidth-throttled peers are not something we should encourage within Bitcoin Core. If a node operator wants to limit bandwidth, he can use other tools. What we really should do is reduce bandwidth without impairing network quality and speed. My stats showed that most traffic is outbound, and most of the outbound traffic is consumed by serving historical blocks (helping other nodes do an initial sync of the chain).

This PR can reduce bandwidth significantly while not impairing network quality.
An initial sync for a new peer is not something that is very time-critical.

There was a discussion on #bitcoin-dev about the -maxuploadtarget (gmaxwell, wumpus, me): http://bitcoinstats.com/irc/bitcoin-dev/logs/2015/09/02#l1441180902.0

@MarcoFalke (Member) commented Sep 4, 2015

download from a throttled [torrent] peer

I don't think you can compare torrent to Bitcoin Core serving blocks. With torrent you are happy to get any transmission speed because it is a parallel download anyway.

Bitcoin Core, on the contrary, does a moving-window download (cf. #4468), so you always want to fetch blocks from peers that can upload to you not much slower than your download speed.

This PR makes the full node disconnect from initial-download peers instead of serving them at lower speed, which sounds like the better solution to me.

@jonasschnelli (Member, Author) commented Sep 4, 2015

This PR makes the full node to disconnect from initial-download-peers instead of serving them at lower speed, which sounds like the better solution to me.

Two questions:

  • What is the benefit, for a peer in initial-sync state, of being served at throttled bandwidth (let's assume 10 kB/s) compared to a disconnect?
  • What is the benefit, for a bandwidth-limited peer, of serving historical blocks at limited bandwidth rather than disconnecting?
@sipa (Member) commented Sep 4, 2015

@MarcoFalke (Member) commented Sep 4, 2015

@jonasschnelli To answer your two questions: no, I don't see a benefit. My wording was confusing, so to be clear: I support this PR/commit. (Just as @sipa said, throttling won't help, as it results in a disconnect anyway.)

@luke-jr (Member) commented Sep 4, 2015

Having an "instant" bandwidth limit to work with can enable Core to make intelligent-ish decisions, like not inv'ing/uploading new blocks to more than one peer at a time unless the current-upload peer is downloading at a slow rate.

@ghost commented Sep 5, 2015

Surely we need some sort of scaling in our bandwidth limitation/throttling here: serving historic blocks to unsynced peers could 'trickle-feed' in the background at a few tens of kB/sec, and anything within the last 60 blocks or so could then be served with greater priority.

This would require the unsynced peer to know with certainty the true current block height so that it didn't drop slow inbound connections; otherwise this could be a new attack vector.

@gmaxwell (Contributor) commented Sep 6, 2015

@casey of course, any longer window cap can be divided down. Breaking it up into smaller groups has a side effect of allowing us more freedom to ignore the resource scheduling issue.

Imagine that a low monthly cap were widely deployed on the network. Then the whole network might blow through its cap in a few days at the start of the month, and be unwilling to serve blocks for the rest of the month! The daily limit (especially with the arbitrary start point) is less of an issue.

@luke-jr Your phone-call issue is largely orthogonal to what this addresses. Good behavior with respect to bufferbloat and general rate restriction is a separate matter from overall usage. Those need to be addressed too, but in other PRs. :)

@gmaxwell (Contributor) commented Sep 7, 2015

With the above mentioned comparison bug fixed, I can confirm that the basic functionality works once the limit is crossed: new blocks are relayed without issue, old blocks result in disconnect.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch 3 times, most recently Sep 7, 2015
@jonasschnelli (Member, Author) commented Sep 7, 2015

@gmaxwell: Thanks for the review!
I updated the time check (blocks older than a week) and did some testing. Seems to work now.

@sipa reviewed Sep 7, 2015: src/init.cpp (outdated)
@@ -335,6 +335,7 @@ std::string HelpMessage(HelpMessageMode mode)
strUsage += HelpMessageOpt("-whitebind=<addr>", _("Bind to given address and whitelist peers connecting to it. Use [host]:port notation for IPv6"));
strUsage += HelpMessageOpt("-whitelist=<netmask>", _("Whitelist peers connecting from the given netmask or IP address. Can be specified multiple times.") +
" " + _("Whitelisted peers cannot be DoS banned and their transactions are always relayed, even if they are already in the mempool, useful e.g. for a gateway"));
strUsage += HelpMessageOpt("-maxuploadtarget=<n>", strprintf(_("Tries to keep outbound traffic under the given target, 0 = no limit (default: %d)"), 0));

@sipa (Member) commented Sep 7, 2015 (minimized)

Specify units (bytes? megabytes?) and window size (seconds, days?)

@gmaxwell (Contributor) commented Sep 7, 2015

Might want to add a test so that it won't drop the last connection? I'm not sure, but imagine e.g. someone using -connect with several peers already down. Then again, is the peer really useful if it's only fetching old blocks?

@jonasschnelli (Member, Author) commented Sep 7, 2015

I'm not sure how useful a peer with height < now - 1 week is. It could catch up and then be helpful, but I think we should respect the upload target in the first place.

Excluding -whitebind peers looks more important to me (could be implemented in a follow-up PR).

@gmaxwell (Contributor) commented Sep 7, 2015

With respect to forcing a minimum on the argument: perhaps it should just whine in the logs if you've asked for a value that is too small? One consequence of the current setup is that every restart forces another 144MB of history transfer. (I removed that code in my own copy because it makes the software much easier to test.)
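The "whine in the logs" approach could be sketched like this. This is an illustrative sketch, not the merged code; the names are hypothetical, and the ~144 MB recommended minimum mirrors one 24 h cycle's worth of expected blocks at 1 MB each.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative sketch (hypothetical names, not the merged code): accept any
// limit the user asks for, but warn when it is below one day's worth of
// expected block traffic, instead of forcing a minimum.
static const uint64_t SKETCH_BLOCK_SIZE = 1000000;  // 1 MB blocks (2015)
static const uint64_t SKETCH_CYCLE_SECONDS = 86400; // fixed 24 h cycle

uint64_t RecommendedMinimumTarget()
{
    // ~144 blocks per day at one block every 600 s, 1 MB each: ~144 MB.
    return SKETCH_CYCLE_SECONDS / 600 * SKETCH_BLOCK_SIZE;
}

// Returns true when a warning was logged; the limit itself is left unchanged.
bool WarnIfTargetTooSmall(uint64_t limitBytes)
{
    if (limitBytes > 0 && limitBytes < RecommendedMinimumTarget()) {
        std::fprintf(stderr,
                     "Warning: -maxuploadtarget (%llu bytes) is below the "
                     "recommended minimum of %llu bytes and is very likely "
                     "to be exceeded\n",
                     (unsigned long long)limitBytes,
                     (unsigned long long)RecommendedMinimumTarget());
        return true;
    }
    return false;
}
```

Warning rather than clamping keeps the small values usable for testing, which is what the comment above wants.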

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch Sep 8, 2015
@mikehearn (Contributor) commented Sep 21, 2015

OK, so firstly, do we agree on this statement:

This patch solves the problem of users with capped transfer quotas, probably measured in gigabytes per month. The other patch we're discussing on the XT repository solves the problem of interference with consumer services at home due to saturating the limited uplink bandwidth.

Because it seems to me that these are two independent problems that are both being called 'throttling', and the two approaches can actually be combined, or rather, would not interfere if both were implemented as-is.

Tor has a hibernate mode that this sounds very similar to. The way it works is essentially shutting down your node when you go over the transfer quota. Similarly, this patch is equivalent to shutting down part of the node when quota is exhausted, i.e. the part that provides useful services to other peers.

It might be worth separating the two terms. We could call this one hibernation, and the other patch throttling, as that'd align terminology with Tor and "throttling" in network engineering normally means restricting the instantaneous bandwidth usage rather than disconnecting after reaching a daily quota.

@jonasschnelli (Member, Author) commented Sep 21, 2015

Real "throttling" (reducing the bandwidth to a static value), regardless of which node or which command/service is being used, is bad and ineffective IMO.
If you are connected to a throttled Tor node or a throttled node in a BitTorrent network, it just results in a bad overall user experience.

Bitcoin p2p is different from Tor and BitTorrent. It does not serve or consume uncontrollable, ungroupable/hidden data streams (Tor), nor does it serve high-bandwidth data streams over a long period of time (torrent).

Carefully and selectively rejecting services once a certain level of consumption is reached is a much better approach than throttling Bitcoin's overall network activity.
The latter can already be done with various network wrappers/tools, and I strongly recommend against adding an overall "dumb" network bandwidth limiter to Core (or XT).

Because SPV clients can produce an uncontrollable amount of traffic (bloom filter false-positive rate) without any benefit to the p2p network (they don't pass data to other nodes, they just consume), it makes sense to stop that service if network capacity gets critical.

IMO the "throttling"/"capping"/"limiting" topic is very important in terms of "scaling bitcoin" and it should get the required attention.

@mikehearn (Contributor) commented Sep 21, 2015

Hm, so we don't agree on that statement then?

When a peer is uploading the block chain to freshly installed nodes, or to nodes that were offline for a long time, it does look quite a bit like BitTorrent: heavy upload traffic that goes as fast as possible for a long period of time.

The goal of the bandwidth throttler is not to have huge swings in available serving bandwidth, but rather to avoid delaying/blocking the TCP ACKs being used by other services like Netflix and causing user-visible disruption. Ideally, a user would contribute most of their uplink when they aren't actually using it (which is most people most of the time), so the difference between unthrottled and throttled wouldn't actually be very large.

As streaming video can be disrupted by any kind of upload, the most important criterion is that the throttle actually works. How a peer schedules bandwidth users into that limited window is a different problem entirely.

@jonasschnelli (Member, Author) commented Sep 21, 2015

When a peer is uploading the block chain to freshly installed nodes/nodes that were offline for a long time, it does look quite a bit like BitTorrent ..... heavy upload traffic that goes as fast as possible for a long period of time.

Agreed: serving IBD is similar to torrent. This is why this PR addresses exactly that part once the upload target has been reached.

The rest of your points are QoS topics. IMO this is a router job (or a lower-level OS network-stack thing). Reducing bandwidth via a throttling feature would cut bandwidth regardless of what else is going on on the system. It would also cut bandwidth that would otherwise be available, e.g. while the user is creating a backup or writing a letter in Word.

@mikehearn (Contributor) commented Sep 21, 2015

Well, the issue is that people want to configure it at the app level, as many consumer wifi routers don't offer the right features, or people don't know how to use them.

@btcdrak reviewed Oct 21, 2015: src/net.cpp (outdated)
if (buffer >= nMaxOutboundLimit || nMaxOutboundTotalBytesSentInCycle >= nMaxOutboundLimit - buffer)
{
return true;
}

@btcdrak (Contributor) commented Oct 21, 2015 (minimized)

Braces not required.

@btcdrak (Contributor) commented Oct 21, 2015

Looks good to me. The OP said there is some documentation + tests to complete.

@laanwj (Member) commented Oct 21, 2015

utACK

@laanwj reviewed Oct 22, 2015: src/net.cpp (outdated)
nMaxOutboundLimit = limit;

if (limit < recommendedMinimum)
LogPrintf("Max outbound target very small (%s) and are very unlikely to be reached, recommended minimum is %s\n", nMaxOutboundLimit, recommendedMinimum);

@laanwj (Member) commented Oct 22, 2015 (minimized)

I don't understand this message. The target is too small, thus unlikely to be reached? Isn't it the other way around?

@jonasschnelli (Member, Author) commented Oct 22, 2015 (minimized)

What about just using "Warning: max outbound target is very small, recommended minimum is %s"?

@MarcoFalke (Member) commented Oct 22, 2015 (minimized)

@jonasschnelli I think you can still print nMaxOutboundLimit; just remove or clarify "unlikely to be reached", as no one will understand it as "likely to be overshot".

@gmaxwell (Contributor) commented Oct 26, 2015 (minimized)

"Unlikely to be reached" sounds very much like the node will use less bandwidth than specified, so I think this should be changed to "will be overshot" or "very likely to be exceeded" or similar.

@gmaxwell (Contributor) commented Oct 26, 2015

@jonasschnelli Interest in performing a squash+rebase+message nits? I'd like to move to merge this.

@laanwj (Member) commented Oct 26, 2015

Agree @gmaxwell

@jonasschnelli (Member, Author) commented Oct 26, 2015

I'm currently rebasing and fixing @sdaftuar's RPC test. I plan to have everything fixed by tomorrow.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch Oct 26, 2015
* -maxuploadtarget can be set in MiB
* if <limit> - (time-left-in-24h-cycle / 600 * MAX_BLOCK_SIZE) has been reached, stop serving blocks older than one week and filtered blocks
* take no further action once the limit has been reached; no guarantee that the target will not be surpassed
* add outbound-limit information to RPC getnettotals
@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch Oct 26, 2015
@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2015/09/maxuploadtarget branch to 17a073a Oct 26, 2015
@jonasschnelli (Member, Author) commented Oct 26, 2015

Rebased. Added @sdaftuar's RPC test. Addressed nits. Passes Travis. Ready for merge.

@laanwj laanwj merged commit 17a073a into bitcoin:master Oct 26, 2015
1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)
laanwj added a commit that referenced this pull request Oct 26, 2015
17a073a Add RPC test for -maxuploadtarget (Suhas Daftuar)
872fee3 Introduce -maxuploadtarget (Jonas Schnelli)
outboundLimit.push_back(Pair("serve_historical_blocks", !CNode::OutboundTargetReached(true)));
outboundLimit.push_back(Pair("bytes_left_in_cycle", CNode::GetOutboundTargetBytesLeft()));
outboundLimit.push_back(Pair("time_left_in_cycle", CNode::GetMaxOutboundTimeLeftInCycle()));
obj.push_back(Pair("uploadtarget", outboundLimit));

@MarcoFalke (Member) commented Nov 6, 2015 (minimized)

You'd need to update the help text for getnettotals as well.
