net: redundant connections with a single peer #22559
Comments
Can you share the result for |
Sure! {moved extra info to original post about being an I2P-only node} Edit: I misread your initial request, but here's the
bitcoin.conf:
(note: removed the |
As per |
I tried to replicate the bug by waiting for a random I2P peer to make an inbound connection. Afterwards, I call … But I can't reproduce the bug in the OP; the call fails, per debug.log: I think one of two things is happening here:
|
Are both peers you? (An inbound peer running an earlier version could still double-connect to you.)
Nope, although they do have an @dunxen in their User Agent. But I'm unsure who that is. We are both running the i2p-port=0 branch.
Just restarted a clearnet/onion/I2P node, and both peers were briefly double-connected to me as double inbound peers, but only for a few seconds; now down to one connection each. @duncandean, what commit is your I2P service running? (bitcoin-cli -version or -netinfo will tell you.)
Sounds about right, I've observed this as well. I only spotted the bug because I frequently look at my peer list to see how quickly people are onboarding to I2P. But fwiw, my node was up for about a week before I observed the 12-hour-long connections in the OP. I'm going to clone my I2P machine to see if I can reproduce the bug by connecting to myself.
I'm at fd557ce. Also, I do have an |
Ahh, thanks for clarifying! I wonder why my node automatically maintained a connection to you, though. I don't use any addnodes.
@tryphe you will probably see the address if you run the RPC `getnodeaddresses 0 "i2p"`, indicating it's in your addrman. Then look at how many I2P peers your node knows (`bitcoin-cli -addrinfo` gives the totals).
18 I2P peers currently.
I was able to reproduce the bug with 2 machines on mainnet. To reproduce:
Optional steps on installing, if you don't have an I2P node:
If you are fast enough, the machines will maintain two connections and the bug occurs. If you are too slow, you get the normal expected behavior: each subsequent connection will close. Note: also works with |
I didn't test it on normal IP, but I'm going to assume the latency has something to do with the bug; otherwise it would probably have been reported already. If anyone can try to reproduce it on non-I2P, with or without some added latency, that would be sweet!
Doh, this seems like a duplicate of #21389. Just realized, sorry. Will follow up there if there's a good solution found. |
@jonatack I think we should re-open the original issue instead of keeping this one open. Regarding this comment, it should be fixed, but isn't. But feel free to re-open this one if you want. Note: I removed the |
I didn't try setting port=8333 while testing #22112, so thanks for checking and reporting on that. I have |
I think this is a different issue than #21389. The latter allowed … This issue looks to be about |
Thanks vasild! That makes sense. If one socket is already open, it works as intended. I'll reopen this. |
@jonatack I tried your patch (looks like the comment is deleted now) with some added verbosity, and both nodes disconnect each other instead of maintaining redundant connections. This looks like an observation timing problem: from the perspective of nodes A and B, they both tried to connect first. If they disconnect on accept, both are disconnected. If we make them disconnect after accept, I assume both will still disconnect unless there is some sort of agreement on who will disconnect. Something like: … But putting in this kind of logic would require the other peer to have an updated binary. Maybe if a redundant connection hasn't been disconnected after a certain period, the peer with the lower peer address could disconnect one of the redundant sockets. Or maybe it can be done with just some |
I observed the bug happening again today while restarting my node on the current master branch, with a different 22.99.0 peer. Maybe we can add a task-scheduler routine to check for duplicate connections? It seems like we can't patch into existing functionality, so this seems like the most obvious thing to do.
Was the double connection in+in or in+out? An I2P peer?
In + out, I2P.
I have a (hopefully straightforward) question that's not exactly the same as what the OP mentioned, but somewhat related: would |
There is no such detection. So we can connect to e.g. … There is only a detection that we do not keep a connection to ourselves. It works roughly like this: when …
bitcoin/src/net_processing.cpp Lines 2535 to 2541 in 192a959
|
Thank you. How difficult would it be to detect this, if desired? Is there some type of fingerprint/key/identifier that could be matched, or would this require a whole new mechanism to be designed and implemented? [1] Also, would it be possible for an attacker to just create 13,337 Hidden Services (e.g., all …)
(One could also reframe the above as: can I protect my node from Sybil attacks by creating an undisclosed number of extra hidden services / I2P destinations?)
[1] If yes, it could potentially be similar to what JoinMarket is doing with its Fidelity Bonds. (Assuming one wants protection from malicious Sybil activity; otherwise I guess there would be a non-deposit solution.) However, I am aware that this would be a very major change and is probably not needed.
I noticed 2 inbound connections to me from a … Although I'm not sure whether this is conceptually more dangerous than someone just creating a bunch of different I2P addresses and connecting to me normally, which is already possible without the bug.
I confirm the 2x inbound case: just execute the below twice, quickly one after another (in different terminals, i.e. start connecting to the same peer at the same time):
The relevant code is: Lines 2214 to 2233 in f6f7a12
This is the code that opens a new connection: on line 2216 or 2219 it checks if we are already connected to the peer (this looks into … This is not I2P specific, but if …
Thanks @vasild! I can confirm this. I initially tried this with a bunch of peers that weren't myself, to no avail. Seems like my latency was too low to reproduce the bug after being connected to the mixnet for a while. But I tried again with a fresh VM and a fresh I2P address and was able to easily open 4 connections to my main node which remained open.
|
If we modify the code at line 1528 in 602c8eb …
The new connection is accepted, but the node is not added to vNodes (lines 1205 to 1208 in 602c8eb):
pnode is not created and added to vNodes until the end of the function, when we know hSocket is valid. But intuitively, it doesn't seem like the rest of the code should block for very long; at least not long enough for multiple other connections to occur. Strange?
We should look into fixing both the inbound and outbound races, so that implementations with or without the outbound race can't create duplicate connections to future nodes that have been patched for this bug.
To reproduce:
#22559 (comment) (general I2P setup, in+out)
#22559 (comment) (multiple in, multiple out)
I've observed my node keeping multiple connections to the same peer in various ways, due to long I2P connection negotiation times; it's not uncommon for connecting to a peer to take 10 to 20 seconds. If additional connections to the same peer are made during negotiation, all connections remain open.
The bug should occur on any network. This is an I2P-only node, which I'm sure increases the chances of this bug happening because my peers.dat is tiny. Note that I do not have any
addnode=
in my config or anything like that.

1 inbound + 1 outbound:
I connected to a peer. 18 seconds later, it connected to me. These connections persisted for over 12 hours. Both peers are running on a very recent 22.99.0 master branch.

2 inbound or 2 outbound: (2 can be any number with the right timing)
A node connected to me twice around the same time. These connections persisted for over 12 hours. This peer was running the 22.0.0 release.

getpeerinfo data for 1 inbound + 1 outbound peers:

getpeerinfo data for 2 inbound peers: