
net: redundant connections with a single peer #22559

Open
tryphe opened this issue Jul 26, 2021 · 32 comments


tryphe commented Jul 26, 2021

To reproduce:
#22559 (comment) (general I2P setup, in+out)
#22559 (comment) (multiple in, multiple out)

I've observed my node keeping multiple connections to the same peer in various ways, due to long I2P connection negotiation times; it's not uncommon for a connection to take 10 to 20 seconds to establish. If additional connections to the same peer are made during negotiation, all of the connections remain open.

The bug should occur on any network. This is an I2P-only node, which likely increases the chances of hitting the bug because my peers.dat is tiny. Note that I do not have any addnode= lines in my config or anything like that.

1 inbound + 1 outbound:
I connected to a peer. 18 seconds later, it connected to me. These connections persisted for over 12 hours. Both peers are running a very recent 22.99.0 master branch.

2 inbound or 2 outbound: (2 can be any number, given the right timing)
A node connected to me twice around the same time. These connections persisted for over 12 hours. This peer was running the 22.0.0 release.

getpeerinfo data for 1 inbound + 1 outbound peers:

  {
    "id": 680,
    "addr": "jz3s4eurm5vzjresf4mwo7oni4bk36daolwxh4iqtewakylgkxmq.b32.i2p:0",
    "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
    "network": "i2p",
    "services": "0000000000000409",
    "servicesnames": [
      "NETWORK",
      "WITNESS",
      "NETWORK_LIMITED"
    ],
    "relaytxes": true,
    "lastsend": 1627290126,
    "lastrecv": 1627290124,
    "last_transaction": 1627290120,
    "last_block": 0,
    "bytessent": 3930317,
    "bytesrecv": 13011805,
    "conntime": 1627229534,
    "timeoffset": -2,
    "pingtime": 3.264304,
    "minping": 0.719468,
    "version": 70016,
    "subver": "/Satoshi:22.99.0(@dunxen)/",
    "inbound": false,
    "bip152_hb_to": false,
    "bip152_hb_from": false,
    "startingheight": 692610,
    "synced_headers": 692727,
    "synced_blocks": 692727,
    "inflight": [
    ],
    "addr_processed": 7049,
    "addr_rate_limited": 1244,
    "permissions": [
    ],
    "minfeefilter": 0.00001000,
    "bytessent_per_msg": {
      "addrv2": 121698,
      "feefilter": 32,
      "getaddr": 24,
      "getdata": 442191,
      "getheaders": 1053,
      "headers": 8480,
      "inv": 3307633,
      "notfound": 316,
      "ping": 16128,
      "pong": 16128,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 16345,
      "verack": 24,
      "version": 127,
      "wtxidrelay": 24
    },
    "bytesrecv_per_msg": {
      "addrv2": 167167,
      "feefilter": 32,
      "getdata": 2090,
      "getheaders": 1053,
      "headers": 12190,
      "inv": 3572569,
      "notfound": 122,
      "ping": 16128,
      "pong": 16128,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 9208916,
      "verack": 24,
      "version": 136,
      "wtxidrelay": 24
    },
    "connection_type": "outbound-full-relay"
  },
  {
    "id": 682,
    "addr": "jz3s4eurm5vzjresf4mwo7oni4bk36daolwxh4iqtewakylgkxmq.b32.i2p:0",
    "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
    "network": "i2p",
    "services": "0000000000000409",
    "servicesnames": [
      "NETWORK",
      "WITNESS",
      "NETWORK_LIMITED"
    ],
    "relaytxes": true,
    "lastsend": 1627290124,
    "lastrecv": 1627290123,
    "last_transaction": 1627290013,
    "last_block": 0,
    "bytessent": 1585430,
    "bytesrecv": 7794448,
    "conntime": 1627229552,
    "timeoffset": -13,
    "pingtime": 1.396263,
    "minping": 0.759804,
    "version": 70016,
    "subver": "/Satoshi:22.99.0(@dunxen)/",
    "inbound": true,
    "bip152_hb_to": false,
    "bip152_hb_from": false,
    "startingheight": 692610,
    "synced_headers": 692727,
    "synced_blocks": 692727,
    "inflight": [
    ],
    "addr_processed": 6055,
    "addr_rate_limited": 1188,
    "permissions": [
    ],
    "minfeefilter": 0.00001000,
    "bytessent_per_msg": {
      "addrv2": 124133,
      "feefilter": 32,
      "getdata": 90391,
      "getheaders": 1053,
      "headers": 8455,
      "inv": 1201793,
      "ping": 16160,
      "pong": 16160,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 126964,
      "verack": 24,
      "version": 127,
      "wtxidrelay": 24
    },
    "bytesrecv_per_msg": {
      "addrv2": 145913,
      "cmpctblock": 2422,
      "feefilter": 32,
      "getaddr": 24,
      "getdata": 14266,
      "getheaders": 1053,
      "headers": 12084,
      "inv": 5364780,
      "notfound": 61,
      "ping": 16160,
      "pong": 16160,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 2221195,
      "verack": 24,
      "version": 136,
      "wtxidrelay": 24
    },
    "connection_type": "inbound"
  }

getpeerinfo data for 2 inbound peers:

 {
    "id": 1326,
    "addr": "acgncqkgqekcxaagpes6ubfiuhg54ijklwcupbnitte5svh3a3bq.b32.i2p:0",
    "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
    "network": "i2p",
    "services": "0000000000000409",
    "servicesnames": [
      "NETWORK",
      "WITNESS",
      "NETWORK_LIMITED"
    ],
    "relaytxes": true,
    "lastsend": 1629449575,
    "lastrecv": 1629449571,
    "last_transaction": 1629449402,
    "last_block": 0,
    "bytessent": 4675731,
    "bytesrecv": 5393457,
    "conntime": 1629383715,
    "timeoffset": -3,
    "pingtime": 2.009341,
    "minping": 0.5858950000000001,
    "version": 70016,
    "subver": "/Satoshi:22.0.0(MN@ca)/",
    "inbound": true,
    "bip152_hb_to": false,
    "bip152_hb_from": false,
    "startingheight": 696531,
    "synced_headers": 696664,
    "synced_blocks": 696664,
    "inflight": [
    ],
    "addr_relay_enabled": true,
    "addr_processed": 6062,
    "addr_rate_limited": 0,
    "permissions": [
    ],
    "minfeefilter": 0.00001000,
    "bytessent_per_msg": {
      "addrv2": 149637,
      "cmpctblock": 5710,
      "feefilter": 32,
      "getdata": 2941,
      "getheaders": 1053,
      "headers": 9752,
      "inv": 3847162,
      "notfound": 3173,
      "ping": 17568,
      "pong": 17312,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 621102,
      "verack": 24,
      "version": 127,
      "wtxidrelay": 24
    },
    "bytesrecv_per_msg": {
      "addrv2": 130686,
      "feefilter": 32,
      "getaddr": 24,
      "getdata": 51317,
      "getheaders": 1053,
      "headers": 10519,
      "inv": 5142494,
      "ping": 17312,
      "pong": 17568,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 22157,
      "verack": 24,
      "version": 133,
      "wtxidrelay": 24
    },
    "connection_type": "inbound"
  },
  {
    "id": 1327,
    "addr": "acgncqkgqekcxaagpes6ubfiuhg54ijklwcupbnitte5svh3a3bq.b32.i2p:0",
    "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
    "network": "i2p",
    "services": "0000000000000409",
    "servicesnames": [
      "NETWORK",
      "WITNESS",
      "NETWORK_LIMITED"
    ],
    "relaytxes": true,
    "lastsend": 1629449578,
    "lastrecv": 1629449580,
    "last_transaction": 1629448430,
    "last_block": 0,
    "bytessent": 4869327,
    "bytesrecv": 5433036,
    "conntime": 1629383742,
    "timeoffset": -21,
    "pingtime": 1.979714,
    "minping": 0.69269,
    "version": 70016,
    "subver": "/Satoshi:22.0.0(MN@ca)/",
    "inbound": true,
    "bip152_hb_to": false,
    "bip152_hb_from": false,
    "startingheight": 696531,
    "synced_headers": 696664,
    "synced_blocks": 696664,
    "inflight": [
    ],
    "addr_relay_enabled": true,
    "addr_processed": 6188,
    "addr_rate_limited": 0,
    "permissions": [
    ],
    "minfeefilter": 0.00001000,
    "bytessent_per_msg": {
      "addrv2": 148858,
      "feefilter": 32,
      "getdata": 1932,
      "getheaders": 1053,
      "headers": 9858,
      "inv": 3872869,
      "notfound": 2259,
      "ping": 17568,
      "pong": 17312,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 797297,
      "verack": 24,
      "version": 127,
      "wtxidrelay": 24
    },
    "bytesrecv_per_msg": {
      "addrv2": 133478,
      "feefilter": 32,
      "getaddr": 24,
      "getdata": 68654,
      "getheaders": 1053,
      "headers": 10625,
      "inv": 5171958,
      "ping": 17312,
      "pong": 17568,
      "sendaddrv2": 24,
      "sendcmpct": 66,
      "sendheaders": 24,
      "tx": 12037,
      "verack": 24,
      "version": 133,
      "wtxidrelay": 24
    },
    "connection_type": "inbound"
  }
tryphe added the Bug label Jul 26, 2021

ghost commented Jul 26, 2021

Can you share the result of bitcoin-cli -netinfo and the relevant options from bitcoin.conf? Does this happen only with I2P peers?


tryphe commented Jul 27, 2021

Can you share the result of bitcoin-cli -netinfo and the relevant options from bitcoin.conf? Does this happen only with I2P peers?

Sure! {moved the extra info about this being an I2P-only node to the original post}

Edit: I misread your initial request, but here's the -netinfo output:

Bitcoin Core v22.99.0-8193294caba0-dirty - 70016/Satoshi:22.99.0/

        ipv4    ipv6   onion     i2p   total   block  manual
in         0       0       0       3       3
out        0       0       0      11      11       0       2
total      0       0       0      14      14

Local addresses
bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p     port      0    score      4

getnetworkinfo:

{
  "version": 229900,
  "subversion": "/Satoshi:22.99.0/",
  "protocolversion": 70016,
  "localservices": "0000000000000409",
  "localservicesnames": [
    "NETWORK",
    "WITNESS",
    "NETWORK_LIMITED"
  ],
  "localrelay": true,
  "timeoffset": -6,
  "networkactive": true,
  "connections": 10,
  "connections_in": 4,
  "connections_out": 6,
  "networks": [
    {
      "name": "ipv4",
      "limited": true,
      "reachable": false,
      "proxy": "",
      "proxy_randomize_credentials": false
    },
    {
      "name": "ipv6",
      "limited": true,
      "reachable": false,
      "proxy": "",
      "proxy_randomize_credentials": false
    },
    {
      "name": "onion",
      "limited": true,
      "reachable": false,
      "proxy": "",
      "proxy_randomize_credentials": false
    },
    {
      "name": "i2p",
      "limited": false,
      "reachable": true,
      "proxy": "127.0.0.1:7656",
      "proxy_randomize_credentials": false
    }
  ],
  "relayfee": 0.00001000,
  "incrementalfee": 0.00001000,
  "localaddresses": [
    {
      "address": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p",
      "port": 0,
      "score": 4
    }
  ],
  "warnings": "This is a pre-release test build - use at your own risk - do not use for mining or merchant applications"
}

bitcoin.conf:

listen=1
port=8333
bind=127.0.0.1
onlynet=i2p
i2pacceptincoming=1
i2psam=127.0.0.1:7656

(note: removed the port= line and the bug still occurs)

tryphe changed the title from "net: redundant connections are made with a single peer" to "net: redundant connections with a single peer" Jul 27, 2021

jonatack commented Jul 27, 2021

Thanks for reporting and for testing an I2P service. Are you or the inbound peer running a version before #22112? I see you're setting port to 8333. (This might be a duplicate of #21389.)


tryphe commented Jul 27, 2021

Thanks for reporting. Are you or the inbound peer running a version before #22112? I see you're setting port to 8333. (This might be a duplicate of #21389.)

Negative. The port=0 change was merged on 7/13, and the version was bumped to 0.22 on 7/20. I just happened to leave my config the same.


tryphe commented Jul 27, 2021

As per getnetworkinfo, my I2P port is forced to 0, so I don't think this has anything to do with the bug.


tryphe commented Jul 27, 2021

I tried to replicate the bug by waiting for a random I2P peer to make an inbound connection. Afterward, I called addconnection abc.b32.i2p outbound-full-relay, having commented out these lines to skip the regtest-only check.

But I can't reproduce the bug in the OP; the call fails, per debug.log:
Failed to open new connection, already connected

I think one or two things are happening here:

  1. Some environmental factors are at play. I did not connect to the peer as quickly as when the bug was produced. Ping times on I2P are often 5 to 20 seconds, and I'm not sure whether that could cause the issue. I often see duplicate inbound/outbound connections to the same peer at the same time, but the later connection almost always closes within a few seconds, except in the case of the OP and one other instance I've observed. Perhaps the bug happens when we connect simultaneously, but only under certain conditions?
  2. Maybe the call stack that fails during my test is not the same one at play in real time. We call FindNode in many places and in various fashions (sometimes directly, sometimes through AlreadyConnectedToAddress, for example), and it does not fail or succeed verbosely, which may hide an edge case somewhere.

@jonatack

Are both peers you? (An inbound peer running an earlier version could still double-connect to you.)


tryphe commented Jul 27, 2021

Are both peers you? (An inbound peer running an earlier version could still double-connect to you.)

Nope, although they do have an @dunxen in their User Agent. But I'm unsure who that is. We are both running the i2p-port=0 branch.


jonatack commented Jul 27, 2021

I just restarted a clearnet/onion/I2P node, and both peers were briefly double-connected to me as double inbound peers, but only for a few seconds; they are now down to one connection each.

@duncandean, at what commit is your I2P service running? (cli -version or -netinfo will tell you)


tryphe commented Jul 27, 2021

I just restarted a clearnet/onion/I2P node, and both peers were briefly double-connected to me as double inbound peers, but only for a few seconds; they are now down to one connection each.

Sounds about right; I've observed this as well. I only spotted the bug because I frequently look at my peer list to see how quickly people are onboarding to I2P. But FWIW, my node was up for about a week before I observed the 12-hour-long connections in the OP.

I'm going to clone my I2P machine to see if I can reproduce the bug by connecting to myself.


dunxen commented Jul 27, 2021

@duncandean, at what commit is your I2P service running? (cli -version or -netinfo will tell you)

I'm at fd557ce.

Also, I do have an addnode=bitcorn... in my config.


tryphe commented Jul 27, 2021

@duncandean, at what commit is your I2P service running? (cli -version or -netinfo will tell you)

I'm at fd557ce.

Also, I do have an addnode=bitcorn... in my config.

Ahh, thanks for clarifying!

I wonder why my node automatically maintained a connection to you, though. I don't use any addnodes.

@jonatack

@tryphe you will probably see the address if you run the RPC getnodeaddresses 0 "i2p", indicating it's in your addrman. Then look at how many I2P peers your node knows (cli -addrinfo gives the totals).


tryphe commented Jul 27, 2021

@tryphe you will probably see the address if you run the RPC getnodeaddresses 0 "i2p", indicating it's in your addrman. Then look at how many I2P peers your node knows (cli -addrinfo gives the totals).

18 I2P peers currently.


tryphe commented Jul 27, 2021

I was able to reproduce the behavior of the bug with 2 machines on mainnet. To reproduce:

Optional steps for installing, if you don't have an I2P node:
a. stackexchange howto for I2P bitcoin
b. Configure and confirm your i2pd mixnet (external) port and SAM (local loopback) port in /etc/i2pd/i2pd.conf
c. Restart i2pd: sudo systemctl restart i2pd
d. Ensure the SAM/mixnet ports are listening locally with ss -lt. You should see 127.0.0.1:sam_port (default 7656) and 0.0.0.0:mixnet_port (random port)
e. Open a hole in your firewall for your mixnet port, one on each machine.

  1. Obtain two I2P bitcoin nodes and ensure the I2P mixnet port is externally accessible on each machine.

  2. Start bitcoind or bitcoin-qt. Run getnetworkinfo until the service is established and you see your abcdef.b32.i2p address near the bottom of the output. This takes a few minutes.

  3. On machine A, enter addnode machine_b_address.b32.i2p onetry, and vice versa on machine B.

  4. Run the commands at the same time.

If you are fast enough, the machines will maintain two connections, and the bug occurs. If you are too slow, the normal expected behavior occurs, and each subsequent connection closes.

Note: this also works with addconnection instead of addnode, which behaves the same bug-wise but tags the connection differently, i.e. "Full Relay" vs "Manual".


tryphe commented Jul 27, 2021

I didn't test this over normal IP, but I'm going to assume the latency has something to do with the bug; otherwise it would probably have been reported already. If anyone can try to reproduce it on non-I2P, with or without added latency, that would be sweet!


tryphe commented Jul 27, 2021

Doh, this seems like a duplicate of #21389; I just realized, sorry. I'll follow up there if a good solution is found.

tryphe closed this as completed Jul 27, 2021

tryphe commented Jul 27, 2021

@jonatack I think we should re-open the original issue instead of keeping this one open. Regarding this comment, it should be fixed, but isn't.

But feel free to re-open this one if you want.

Note: I removed the port= line from my config before and after testing, so it seems unrelated to that.

@jonatack

Note: I removed the port= line from my config before and after testing, so it seems unrelated to that.

I didn't try setting port=8333 while testing #22112, so thanks for checking and reporting on that. I have watch -netinfo 4 running all the time and haven't seen any persistent double connections lately but will keep an eye out. Good to finally know who the bitcorn...b32.i2p address is!


vasild commented Jul 27, 2021

I think this is a different issue than #21389.

The latter allowed A to open more than one outgoing connection to B on different ports. That is I2P-specific because ports are irrelevant in I2P (when using SAM 3.1), so connecting to B:port1 and B:port2 is, for sure, connecting to the same peer.

This issue looks to be about A opening a connection to B while, at about the same time, B opens a connection to A. I think, with the right timing, this can happen on other networks as well, i.e. it is not I2P-specific. I have not confirmed that with an experiment.


tryphe commented Jul 27, 2021

Thanks vasild! That makes sense. If one socket is already open, it works as intended. I'll reopen this.

tryphe reopened this Jul 27, 2021

tryphe commented Jul 27, 2021

@jonatack I tried your patch (it looks like the comment is deleted now) with some added verbosity, and both nodes disconnect each other instead of maintaining redundant connections.

This looks like a timing problem of observation. From the perspective of nodes A and B, each of them tried to connect first. If they disconnect on accept, both connections are torn down. If we make them disconnect after accept, I assume both sides will still disconnect unless there is some sort of agreement on who disconnects. Something like: if (my_address < peer_address) { disconnect(); } That way only one side disconnects and no communication is required.

But adding this kind of logic requires the other peer to be running an updated binary. Maybe, if the redundant connection hasn't gone away after a certain period and we are the node with the lower peer address, we disconnect one of the redundant sockets ourselves.

Or maybe it can be done with just some FindNode() logic and an edge case is missing somewhere. I'm not sure.


tryphe commented Aug 2, 2021

I observed the bug happening again today while restarting my node on the current master branch, with a different 22.99.0 peer.

Maybe we can add a task scheduler routine that checks for duplicate connections? It seems like we can't patch this into existing functionality, so that seems like the most obvious thing to do.


jonatack commented Aug 2, 2021

Was the double connection in+in or in+out? An I2P peer?


tryphe commented Aug 2, 2021

Was the double connection in+in or in+out? An I2P peer?

In + out, I2P


xanoni commented Aug 14, 2021

I have a (hopefully straightforward) question that's not exactly what the OP describes, but is somewhat related:

Would bitcoind ever connect to (and stay connected to) the same node more than once if that node was using some combination of Tor, I2P, and clearnet? Or would it detect this reliably and only keep one of the connections?


vasild commented Aug 20, 2021

Would bitcoind ever connect to (and stay connected to) the same node more than once if that node was using some combination of Tor, I2P, and clearnet? Or would it detect this reliably and only keep one of the connections?

There is no such detection. So we can connect to e.g. 2.39.173.126:8333 and 2g5qfdkn2vvcbqhzcyvyiitg4ceukybxklraxjnu7atlhd22gdwywaid.onion:8333 and remain connected even if that is the same (multi-homed) node.

There is only a detection that we do not keep a connection to ourselves. It works roughly like this: when A connects to B, A assigns a "random" nonce to that connection and sends it as part of the VERSION message to B. When B receives the VERSION message it checks if the nonce from it equals any of the nonces of its outgoing connections - if it does, then B closes the just accepted connection, thinking that it connected to itself:

bitcoin/src/net_processing.cpp

Lines 2535 to 2541 in 192a959

    // Disconnect if we connected to ourself
    if (pfrom.IsInboundConn() && !m_connman.CheckIncomingNonce(nNonce))
    {
        LogPrintf("connected to self at %s, disconnecting\n", pfrom.addr.ToString());
        pfrom.fDisconnect = true;
        return;
    }


xanoni commented Aug 20, 2021

There is no such detection.

Thank you. How difficult would it be to detect this, if desired? Is there some type of fingerprint/key/identifier that could be matched? Or would this require a whole new mechanism to be designed & implemented? [1]

Also, would it be possible for an attacker to just create 13,337 hidden services (e.g., all .onions, or a mix of I2P and Tor) that all connect to the same bitcoind? Or does bitcoind only behave predictably with one address per network? I assume that, independent of the answer, an attacker could modify bitcoind to handle a large number of addresses predictably.

(One could also reframe the above as: can I protect my node from Sybil-attacks by creating an undisclosed number of extra hidden services / I2P destinations?)

[1] If yes, it could potentially be similar to what JoinMarket is doing with its Fidelity Bonds. (Assuming one wants protection from malicious Sybil activity ... otherwise I guess there would be a non-deposit solution.) However, I am aware that this would be a very major change and is probably not needed.


tryphe commented Aug 22, 2021

I noticed 2 inbound connections to me from a 22.0.0 peer which lasted many hours, so I updated the OP with the relevant peer info. I knew in+out was possible, but not in+in, which seems more dangerous: with careful timing, it seems an I2P peer could maliciously fill as many of my inbound connection slots as it wants.

Although I'm not sure this is conceptually more dangerous than someone simply creating a bunch of different I2P addresses and connecting to me normally, which is already possible without the bug.

xanoni mentioned this issue Aug 23, 2021

vasild commented Aug 23, 2021

I confirm the 2x inbound case: just execute the command below twice, quickly, one after another (in different terminals), so that you start connecting to the same peer at the same time:

bitcoin-cli addnode 4hllr6w55mbtemb3ebvlzl4zj6qke4si7zcob5qdyg63mjgq624a.b32.i2p:0 onetry

The relevant code is:

bitcoin/src/net.cpp

Lines 2214 to 2233 in f6f7a12

    if (!pszDest) {
        bool banned_or_discouraged = m_banman && (m_banman->IsDiscouraged(addrConnect) || m_banman->IsBanned(addrConnect));
        if (IsLocal(addrConnect) || banned_or_discouraged || AlreadyConnectedToAddress(addrConnect)) {
            return;
        }
    } else if (FindNode(std::string(pszDest)))
        return;

    CNode* pnode = ConnectNode(addrConnect, pszDest, fCountFailure, conn_type);

    if (!pnode)
        return;
    if (grantOutbound)
        grantOutbound->MoveTo(pnode->grantOutbound);

    m_msgproc->InitializeNode(pnode);
    {
        LOCK(cs_vNodes);
        vNodes.push_back(pnode);
    }

This is the code that opens a new connection. On line 2216 or 2219 it checks whether we are already connected to the peer (this looks into vNodes), and it only adds the peer to vNodes later, on line 2232. So if two threads reach this code and both pass the "not already connected to that peer" check, they will both connect and both add the peer to vNodes.

This is not I2P-specific, but when ConnectNode() (line 2222) is slow, as it is on I2P, the race becomes more likely.


tryphe commented Aug 24, 2021

Thanks @vasild! I can confirm this. I initially tried it with a bunch of peers that weren't myself, to no avail; it seems my latency was too low to reproduce the bug after being connected to the mixnet for a while. But I tried again with a fresh VM and a fresh I2P address, and was easily able to open 4 connections to my main node, which remained open.

$ ./bitcoin-cli getpeerinfo | grep \"addr\":
    "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
    "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
    "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
    "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",


tryphe commented Aug 24, 2021

Even if we modify OpenNetworkConnection() per vasild's comment above to remove the outbound race condition, there is still an inbound race condition in this call to AcceptConnection() via CConnman::SocketHandler():

AcceptConnection(hListenSocket);

The new connection is accepted, but the node is not added to vNodes until the end of the nested call CreateNodeFromAcceptedSocket():

bitcoin/src/net.cpp

Lines 1205 to 1208 in 602c8eb

    {
        LOCK(cs_vNodes);
        vNodes.push_back(pnode);
    }

pnode is not created and added to vNodes until the end of the function, once we know hSocket is valid. But intuitively, it doesn't seem like the rest of the code should block for very long, at least not long enough for multiple other connections to come in. Strange?

We should look into fixing both the inbound and outbound races, so that implementations with or without the outbound race cannot create duplicate connections to future nodes that have been patched for this bug.
