
protocols/kad: Implement `NetworkBehaviour::inject_address_change` #1649

Merged: 8 commits into libp2p:master, Aug 6, 2020

Conversation

mxinden (Member) commented Jul 7, 2020

With 826f513 a `StreamMuxer` can notify that the address of a remote peer has changed. This is needed to support the QUIC transport protocol, as remotes can change their IP addresses within the lifetime of a single connection.

This commit implements the `NetworkBehaviour::inject_address_change` handler to update the Kademlia routing table accordingly.

Follow-up to #1621.

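The update rule adopted in this PR (replace the old address with the new one iff the peer is in the routing table and the old address is among its known addresses) can be sketched as a self-contained model. This is not the actual libp2p code: `PeerId` and `Multiaddr` are stood in for by simple types, and the routing table by a `HashMap`; the real implementation lives in `protocols/kad/src/behaviour.rs`.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the libp2p types.
type PeerId = u64;
type Multiaddr = String;

/// Minimal model of a Kademlia routing table keyed by peer.
struct RoutingTable {
    entries: HashMap<PeerId, Vec<Multiaddr>>,
}

impl RoutingTable {
    /// Replace `old` with `new` iff the peer is in the routing table
    /// and `old` is among its known addresses; otherwise do nothing.
    /// Returns whether a replacement took place.
    fn address_changed(&mut self, peer: PeerId, old: &Multiaddr, new: Multiaddr) -> bool {
        if let Some(addrs) = self.entries.get_mut(&peer) {
            if let Some(pos) = addrs.iter().position(|a| a == old) {
                addrs[pos] = new;
                return true;
            }
        }
        false
    }
}
```

Note the deliberately conservative semantics: an unknown peer or an unknown old address leaves the table untouched, which matches the commit message below.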
mxinden requested review from romanb and tomaka, July 7, 2020

protocols/kad/src/behaviour.rs (review threads, resolved)
Replace the old address with the new address iff the peer is in the
routing table and the old address was found in the `Addresses` for the
given peer.
mxinden (Member, Author) commented Jul 31, 2020

Thanks for the review @romanb!

With my recent changes, a new address is only added to the `Addresses` of a peer iff the old address was present.

It is probably also fine to always insert the new address even if the old one is not present (but only if the peer is already in the routing table, of course).

Can you expand on the scenario where the old address would not be present even though we are connected to the node?

@romanb romanb self-requested a review August 3, 2020 13:15
core/src/connection.rs (review thread, resolved)

Code under review:

    }

    // Update query address cache.
    for query in self.queries.iter_mut() {
romanb (Contributor) commented:

I don't think there is a point in doing this, but I may be mistaken. The query address "cache" is used to make sure when a query wants to dial a certain address that often may not (yet) be in the routing table, that address is returned by addresses_of_peer. Addresses in the routing table are always returned from addresses_of_peer. So when the new address is successfully put in the routing table, it will be considered for dialing. Please correct me if I'm wrong.
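The behaviour described here, that `addresses_of_peer` returns routing-table addresses plus addresses discovered by in-flight queries, can be modeled as follows. The shapes are hypothetical (plain `HashMap`s and a slice of per-query caches stand in for the real structures); only the union semantics are taken from the comment above.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the libp2p types.
type PeerId = u64;
type Multiaddr = String;

/// Sketch of `addresses_of_peer`: the union of the peer's
/// routing-table addresses and the addresses cached by ongoing
/// queries, with routing-table addresses listed first.
fn addresses_of_peer(
    routing_table: &HashMap<PeerId, Vec<Multiaddr>>,
    query_caches: &[HashMap<PeerId, Vec<Multiaddr>>],
    peer: &PeerId,
) -> Vec<Multiaddr> {
    let mut out: Vec<Multiaddr> =
        routing_table.get(peer).cloned().unwrap_or_default();
    for cache in query_caches {
        if let Some(addrs) = cache.get(peer) {
            for a in addrs {
                if !out.contains(a) {
                    out.push(a.clone());
                }
            }
        }
    }
    out
}
```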

mxinden (Member, Author) replied:

> I don't think there is a point in doing this, but I may be mistaken.

Is this not necessary for remote nodes that are not part of the local routing table?

Given two connected nodes: local node A and remote node B. Say node B is not in node A's routing table. Additionally node B is part of the QueryInner::addresses list of an ongoing query on node A. Say Node B triggers an address change and then disconnects. Later on the earlier mentioned query on node A would like to connect to node B. Without replacing the address in the QueryInner::addresses set node A would attempt to dial the old and not the new address.
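The update argued for here, walking every ongoing query and swapping the stale address, could look roughly like this. It is a model with hypothetical shapes (`Query` stands in for `QueryInner`, whose `addresses` the real code iterates via `self.queries.iter_mut()`):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the libp2p types.
type PeerId = u64;
type Multiaddr = String;

/// Model of one in-flight query's discovered addresses
/// (stand-in for `QueryInner::addresses`).
struct Query {
    addresses: HashMap<PeerId, Vec<Multiaddr>>,
}

/// On an address change, walk every ongoing query and replace any
/// occurrence of `old` for `peer` with `new`, so a later dial uses
/// the current address even if the peer has since disconnected.
fn update_query_addresses(
    queries: &mut [Query],
    peer: &PeerId,
    old: &Multiaddr,
    new: &Multiaddr,
) {
    for q in queries.iter_mut() {
        if let Some(addrs) = q.addresses.get_mut(peer) {
            for a in addrs.iter_mut() {
                if a == old {
                    *a = new.clone();
                }
            }
        }
    }
}
```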

romanb (Contributor) replied:

> Given two connected nodes: local node A and remote node B. Say node B is not in node A's routing table. Additionally node B is part of the QueryInner::addresses list of an ongoing query on node A. Say Node B triggers an address change and then disconnects. Later on the earlier mentioned query on node A would like to connect to node B. Without replacing the address in the QueryInner::addresses set node A would attempt to dial the old and not the new address.

Fair enough, I guess given the right timing this situation can occur. I still wonder, though, whether this is worth the special attention and effort. Maybe there is no good reason, but changing the addresses that a query discovered like this seems strange to me; at least it adds a subtle twist to reasoning about which addresses a query uses. The other aspect is that, depending on how often addresses change like this (certainly increasingly often the more peers you are connected to), iterating through all discovered addresses of that peer in all currently ongoing queries seems like a search for a needle in a haystack, if there is substantial traffic. Neither of these points is a good technical reason against doing this; I just had to express that I feel a bit uncomfortable about it and why that is. I guess the decision is yours, ultimately.

mxinden (Member, Author) replied:

> At least it adds a subtle twist to reasoning about which addresses a query uses.

That is a good point.

Still, I would prefer to uphold correctness and revisit the decision once we have a transport protocol that supports address changes and the given logic proves to be a performance problem. I have included a note in ad81fd6 summarizing this discussion.

romanb (Contributor) commented Aug 4, 2020

> Can you expand on the scenario where the old address would not be present even though we are connected to the node?

I don't remember what I had in mind at the time. Your new approach seems fine to me.

mxinden merged commit 5139ec3 into libp2p:master, Aug 6, 2020
mxinden (Member, Author) commented Aug 6, 2020

Thank you for the help @romanb.
