Relay blocks when pruning #6148

Merged
laanwj merged 2 commits into bitcoin:master on Sep 23, 2015

Conversation

9 participants
@sdaftuar
Member

sdaftuar commented May 16, 2015

This is an alternative to #6122, built on top of #6130. With #6130, it should be safe to always inv the tip regardless of what height our peer is at, because we will only inv blocks that we actually have.

As noted in the discussion of #6122, 0.10 and later peers would still not be able to download a reorg from a pruning peer; they would only be able to receive new blocks that build on their tip (0.9 and earlier peers using getblocks would be able to download reorgs from pruning nodes).

ping @cozz

@cozz

Contributor

cozz commented May 16, 2015

Fine with me. Reorg is still a problem, but relaying the tip is at least better than relaying nothing.

To solve the reorg problem, I would also want to call FindNextBlocksToDownload for pruned nodes, provided it is ensured that we cannot ask them for pruned blocks.
@sipa Would it be possible to also call FindNextBlocksToDownload for pruned nodes when we are close to being synced?
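For illustration only, a minimal sketch of what this suggestion might look like in the 0.11-era SendMessages() block-fetch logic. This is not part of this PR; the fCloseToTip/fMayFetchFromPeer names are hypothetical, and FindNextBlocksToDownload would additionally have to skip blocks the pruned peer has likely deleted.

// Hypothetical relaxation of the parallel-fetch gate: also consider pruned
// (fClient) peers, but only once our best header is recent, i.e. we are close
// to being synced and would only be requesting blocks near the tip.
bool fCloseToTip = pindexBestHeader != NULL &&
                   pindexBestHeader->GetBlockTime() > GetAdjustedTime() - 24 * 60 * 60;
bool fMayFetchFromPeer = !pto->fClient || fCloseToTip;   // today this is just !pto->fClient
if (!pto->fDisconnect && fMayFetchFromPeer &&
    state.nBlocksInFlight < MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    std::vector<CBlockIndex*> vToDownload;
    NodeId staller = -1;
    FindNextBlocksToDownload(pto->GetId(),
                             MAX_BLOCKS_IN_TRANSIT_PER_PEER - state.nBlocksInFlight,
                             vToDownload, staller);
    // ...request vToDownload via getdata as usual; for a pruned peer the block
    // selection would also need to stay above its likely prune height.
}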

@morcos

Member

morcos commented May 17, 2015

ACK. Tested tip relay for master and reorg relay (up to a certain depth) for 0.9.

@jonasschnelli

Member

jonasschnelli commented May 20, 2015

I just stopped my pruned node, added this PR on top of the current master, ran it again, and hit the following issue:

Stopping:

Prune: target=550MiB actual=467MiB diff=82MiB min_must_keep=356959 removed 0 blk/rev pairs
receive version message: /bitcoinseeder:0.01/: version 60000, blocks=230000, us=***:8334, peer=18138
receive version message: /Snoopy:0.1/: version 60001, blocks=0, us=***:8334, peer=18139
receive version message: /bitcoinseeder:0.01/: version 60000, blocks=230000, us=***:8334, peer=18140
receive version message: /bitcoinseeder:0.01/: version 60000, blocks=230000, us=***:8334, peer=18141
receive version message: /getaddr.bitnodes.io:0.1/: version 70002, blocks=357246, us=***:8334, peer=18142
receive version message: /getaddr.bitnodes.io:0.1/: version 70002, blocks=357246, us=***:8334, peer=18143
receive version message: /bitcoinseeder:0.01/: version 60000, blocks=230000, us=***:8334, peer=18144
ERROR: AcceptToMemoryPool: nonstandard transaction: dust
receive version message: /breadwallet:0.5.1/: version 70002, blocks=0, us=***:8334, peer=18145
Added time data, samples 200, offset -1 (+0 minutes)
receive version message: /Snoopy:0.1/: version 60001, blocks=0, us=***:8334, peer=18146
receive version message: /getaddr.bitnodes.io:0.1/: version 70002, blocks=357247, us=****:8334, peer=18147
receive version message: /bitcoinseeder:0.01/: version 60000, blocks=230000, us=***:8334, peer=18148
^Cnet thread interrupt
msghand thread interrupt
addcon thread interrupt
opencon thread interrupt
dumpaddr thread stop
Shutdown: In progress...
RPCAcceptHandler: Error: Operation canceled
RPCAcceptHandler: Error: Operation canceled
StopNode()
Prune: target=550MiB actual=467MiB diff=82MiB min_must_keep=356959 removed 0 blk/rev pairs
Shutdown: done

Restarting (after several seconds/minutes):

AppInit2 : parameter interaction: -prune -> setting -disablewallet=1
Prune configured to target 550MiB on disk for block and undo files.
---snip---
Bitcoin version v0.10.99.0-4b30ade (2015-05-20 11:45:07 +0200)
Using OpenSSL version OpenSSL 1.0.1e 11 Feb 2013
Using BerkeleyDB version Berkeley DB 4.8.30: (April  9, 2010)
Default data directory ***
Using data directory ***
Using config file ***/bitcoin.conf
Using at most 125 connections (1024 file descriptors available)
Using 8 threads for script verification
scheduler thread start
Allowing RPC connections from: 127.0.0.0/255.0.0.0 ::1/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
Binding RPC on address ::1 port *** (IPv4+IPv6 bind any: 0)
Binding RPC on address 127.0.0.1 port *** (IPv4+IPv6 bind any: 0)
Bound to [::]:8334
Bound to 0.0.0.0:8334
Cache configuration:
* Using 2.0MiB for block index database
* Using 32.5MiB for chain state database
* Using 65.5MiB for in-memory UTXO set
init message: Loading block index...  
Opening LevelDB in ***/blocks/index
Opened LevelDB successfully
Opening LevelDB in ***/chainstate
Opened LevelDB successfully
LoadBlockIndexDB: last block file = 271
LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=70, size=30105947, heights=357178...357247, time=2015-05-19...2015-05-20)
Checking all blk files are present... 
Unable to open file ***/blocks/blk00264.dat
: Error loading block database.

Do you want to rebuild the block database now?
: Error loading block database.

Do you want to rebuild the block database now?
Aborted block database rebuild. Exiting.
Shutdown: In progress...

I doubt this has anything to do with this PR, but it may point to another, pre-existing issue.

@jonasschnelli

Member

jonasschnelli commented May 20, 2015

My /blocks dir contains the following:

blk00268.dat  blk00269.dat  blk00270.dat  blk00271.dat  index  rev00268.dat  rev00269.dat  rev00270.dat  rev00271.dat
@jonasschnelli

Member

jonasschnelli commented May 20, 2015

Further up in my log I can see that blk00264.dat was deleted by pruning (around 9 days ago).

receive version message: /btccrawler.ch:v0.1/: version 70002, blocks=355967, us=***:8334, peer=729
ERROR: AcceptToMemoryPool: nonstandard transaction: dust
Prune: target=550MiB actual=532MiB diff=17MiB min_must_keep=355679 removed 0 blk/rev pairs
Prune: target=550MiB actual=388MiB diff=161MiB min_must_keep=355679 removed 1 blk/rev pairs
Prune: UnlinkPrunedFiles deleted blk/rev (00264)
UpdateTip: new best=00000000000000000cbfa2566c81e0bc78f37b66e35ff3657d552154ca9436ca  height=355968  log2_work=82.765851  tx=68384633  date=2015-05-11 20:16:36 progress=1.00000  cache=0
@sdaftuar

Member

sdaftuar commented May 20, 2015

@jonasschnelli any chance you had been running with #6118 and then switched to running without it? This looks exactly like what happens when trying to downgrade from lazy updating of mapBlockIndex. If that is the issue, you can work around it by restarting your node with #6118 and then quickly stopping; the block index is refreshed on startup. You could then restart without the lazy-updating code and things should work fine.

@jonasschnelli

Member

jonasschnelli commented May 20, 2015

@sdaftuar pretty sure I did this. I'll now run a master full node, sync it up, add this PR on top and keep running, to confirm the missing-lazy-update issue.

@jonasschnelli

Member

jonasschnelli commented May 20, 2015

Stopped a full node, copied the datadir, started a new master full node on the copied datadir with a pruning target of 550, stopped it after a while, added this PR on top, and it looks good.

Now testing block relaying.

@jonasschnelli

Member

jonasschnelli commented May 21, 2015

Did some testing.
Connected a fresh mainnet node to a pruned node.
I was expecting my fresh node to at least load the whole headers chain, but encountered:

--- snip
sending: pong (8 bytes) peer=1
received: getheaders (997 bytes) peer=1
getheaders -1 to 0000000000000000000000000000000000000000000000000000000000000000 from peer=1
sending: headers (1 bytes) peer=1
received: addr (30003 bytes) peer=1
received: pong (8 bytes) peer=1
received: inv (37 bytes) peer=1
got inv: tx f6bfd86ac1d0a4a981fdb950da97c630254e716e3b05ef872a2ca94ebf892e7c  new peer=1
--- snip

(no headers response to headers request)

Connecting a fresh node to a non-pruned full node results in:

--- snip
received: getheaders (997 bytes) peer=1
getheaders -1 to 0000000000000000000000000000000000000000000000000000000000000000 from peer=1
sending: headers (1 bytes) peer=1
received: pong (8 bytes) peer=1
received: headers (162003 bytes) peer=1
more getheaders (2000) to end to peer=1 (startheight:357399)
sending: getheaders (741 bytes) peer=1
Requesting block 00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 (1) peer=1
--- snip

I will also try connecting a "half-synced" full node to a pruned node, where the half-synced node is just a few blocks above the pruned node's prune level. That needs a bit of time to set up.

@sdaftuar

Member

sdaftuar commented May 28, 2015

@jonasschnelli The fresh node won't sync the whole headers chain from the pruning node on startup, because we only choose full NODE_NETWORK nodes for the initial headers sync. However, if you wait long enough for the pruning node to inv a block, a getheaders message will then be sent to the pruning node, and headers should sync after that.
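For context, a rough sketch of the 0.11-era header-sync start in SendMessages() (simplified; the exact surrounding code may differ). A pruned peer does not advertise NODE_NETWORK and so has fClient set, which is why it is skipped for the initial getheaders:

// Initial headers sync is only started with full (NODE_NETWORK) peers.
if (!state.fSyncStarted && !pto->fClient && !fImporting && !fReindex) {
    // Only actively request headers from a single peer, unless we're close to today.
    if (nSyncStarted == 0 || pindexBestHeader->GetBlockTime() > GetAdjustedTime() - 24 * 60 * 60) {
        state.fSyncStarted = true;
        nSyncStarted++;
        pto->PushMessage("getheaders", chainActive.GetLocator(pindexBestHeader), uint256());
    }
}
// A block inv from any peer, however, is answered with a getheaders to that
// peer, which is why headers do sync once the pruned peer announces a block.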

@juscamarena

Contributor

juscamarena commented Jun 10, 2015

Nice, I'm running 0.11.0rc1 and bitcoin in %appdata% is only taking up 1.33 GB of space. Would love to be able to relay blocks. I'd actually leave this on with such a small disk footprint.

src/main.cpp
@@ -4153,6 +4151,14 @@ bool static ProcessMessage(CNode* pfrom, string strCommand, CDataStream& vRecv,
LogPrint("net", " getblocks stopping at %d %s\n", pindex->nHeight, pindex->GetBlockHash().ToString());
break;
}
+ // If pruning, don't inv blocks unless we have on disk and are likely to still have
+ // for some reasonable time window that block relay might require.
+ const int nPrunedBlocksLikelyToHave = MIN_BLOCKS_TO_KEEP - 6;

@sipa

sipa Jun 14, 2015

Member

Can you use a symbolic constant here? Or some calculation based on consensus params?

@sdaftuar

sdaftuar Jun 17, 2015

Member

@sipa Fixed to use a calculation based on consensus params.
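For reference, the revised guard ends up looking roughly like this sketch (exact access to the chain params may differ): the roughly one-hour relay window is expressed in blocks via the consensus block spacing instead of the bare "- 6".

// If pruning, don't inv blocks unless we have them on disk and are likely to
// still have them for the time window (about 1 hour) that block relay may need.
const int nPrunedBlocksLikelyToHave =
    MIN_BLOCKS_TO_KEEP - 3600 / Params().GetConsensus().nPowTargetSpacing;
if (fPruneMode && (!(pindex->nStatus & BLOCK_HAVE_DATA) ||
                   pindex->nHeight <= chainActive.Tip()->nHeight - nPrunedBlocksLikelyToHave))
{
    LogPrint("net", " getblocks stopping, pruned or too old block at %d %s\n",
             pindex->nHeight, pindex->GetBlockHash().ToString());
    break;
}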

@sipa

Member

sipa commented Jun 14, 2015

I haven't followed up on all proposed changes related to this. Is there a plan to also make peers use the inventories we send out? With a change like this, the immediate fetching logic may react to an fClient peer sending a block inv, but the asynchronous fetching won't, making this (for now) useless for reorgs, for example.
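To make the distinction concrete, a simplified sketch of the two fetch paths in 0.11-era main.cpp (illustrative, not an exact quote): the inv handler reacts to block invs from any peer, while the asynchronous fetcher in SendMessages() is gated on !pto->fClient and therefore never schedules downloads from pruned peers.

// Immediate path (ProcessMessage, "inv"): a block inv from any peer, pruned or
// not, triggers a getheaders (and possibly direct fetching of the block).
if (inv.type == MSG_BLOCK && !fAlreadyHave && !fImporting && !fReindex)
    pfrom->PushMessage("getheaders", chainActive.GetLocator(pindexBestHeader), inv.hash);

// Asynchronous path (SendMessages): parallel block download only ever uses
// full (NODE_NETWORK) peers, so a reorg cannot be fetched from an fClient peer.
if (!pto->fDisconnect && !pto->fClient && (fFetch || !IsInitialBlockDownload()) &&
    state.nBlocksInFlight < MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    std::vector<CBlockIndex*> vToDownload;
    NodeId staller = -1;
    FindNextBlocksToDownload(pto->GetId(),
                             MAX_BLOCKS_IN_TRANSIT_PER_PEER - state.nBlocksInFlight,
                             vToDownload, staller);
}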

@sipa

Member

sipa commented Jun 14, 2015

I'm confused by the several related and seemingly-overlapping PRs here.

@sdaftuar

Member

sdaftuar commented Jun 16, 2015

@sipa Sorry for the PR confusion. #6130 fixes a bug where pruning nodes would respond badly to a getblocks message by inv'ing blocks they don't actually have. This pull adds block relaying for pruning nodes, which would not work without the fix in #6130, so I built it on top.

Initially I thought the bug fix in #6130 should stand on its own, but on further thought I think it could only be triggered once we actually implement block relay. I'll go ahead and close that pull in favor of this one.

As for next steps: yes, this pull is insufficient for relaying reorgs to 0.10 and later peers. I was hoping we would be able to put together a sharding implementation, and implement the more intelligent block requesting at the same time. If that work seems to be taking too long, we could instead (say, before the 0.12 release) make some other change to the parallel-fetch code to request recent blocks from pruning peers. Would you rather see a solution like that as part of this pull?

@sipa

Member

sipa commented Jun 16, 2015

No, just querying what the current state/plan is :)

sdaftuar added some commits May 13, 2015

Do not inv old or missing blocks when pruning
When responding to a getblocks message, only return inv's as
long as we HAVE_DATA for blocks in the chain, and only for blocks
that we aren't likely to delete in the near future.
@sipa

Member

sipa commented Jun 21, 2015

Been running on bitcoin.sipa.be for a week, no problems.

@sdaftuar

Member

sdaftuar commented Jul 30, 2015

@jonasschnelli I saw your comment in #6460 about this PR -- I think this could be merged as-is without any new service bits. Adding a service bit would be needed to implement a system for selectively downloading historical blocks from less-than-full nodes, but I don't think it's needed to enable relaying at the tip.

@jonasschnelli

Member

jonasschnelli commented Jul 30, 2015

@sdaftuar: Agreed. Will test again soon, and sorry for the delay.

@sipa

Member

sipa commented Jul 30, 2015

ACK (have done a "it works" test before).

@laanwj Acceptable for 0.11?

@jonasschnelli

Member

jonasschnelli commented Jul 31, 2015

Running for >18h on a pruned, synced full node (master).
Served ~291887 blocks since then.

jonasschnelli@server6:~$ cat ~/.bitcoin/debug.log | grep "sending: block" | wc -l
291887

tested ACK

@gmaxwell

Member

gmaxwell commented Sep 7, 2015

I've been testing this against master and it's happily relaying at the tip among my peers without issue. Appears to work great.

Has someone tried the case where the peer is on a fork and needs to fetch further back than the pruning window to complete the reorg?

@gmaxwell

Member

gmaxwell commented Sep 8, 2015

To be clear: ACK.

@dcousens

Contributor

dcousens commented Sep 9, 2015

concept ACK

@sdaftuar

Member

sdaftuar commented Sep 21, 2015

@laanwj I think this is ready to be merged; are there any outstanding concerns here?

@laanwj laanwj merged commit ae6f957 into bitcoin:master Sep 23, 2015

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed

laanwj added a commit that referenced this pull request Sep 23, 2015

Merge pull request #6148
ae6f957 Enable block relay when pruning (Suhas Daftuar)
0da6ae2 Do not inv old or missing blocks when pruning (Suhas Daftuar)

luke-jr added a commit to luke-jr/bitcoin that referenced this pull request Dec 28, 2015

Merge pull request #6148: Relay blocks when pruning
ae6f957 Enable block relay when pruning (Suhas Daftuar)
0da6ae2 Do not inv old or missing blocks when pruning (Suhas Daftuar)

@str4d str4d referenced this pull request in zcash/zcash Jul 14, 2017

Open

Bitcoin 0.12 P2P/Net PRs 1 #2534

@str4d str4d referenced this pull request in zcash/zcash Dec 19, 2017

Open

Relay blocks when pruning #2815
