
Tcp #1845 (Closed)

wants to merge 4 commits into from

Conversation

@dvc94ch (Contributor) commented Nov 17, 2020

Supersedes #1789.

Simplifies the tcp transport by using async-io instead of macros. This also helps people who can't update to the latest libp2p because it uses tokio 0.3. Address changes are handled using if-watch, the same crate used in #1830, instead of polling "randomly". It also supports reusing ports for outgoing connections (sketched below) and makes address_translation the responsibility of the transport. While the if_watch module is currently in the tcp crate, when adding the quic transport it might make sense to move it into a shared transport utility crate.
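
The port-reuse idea is easiest to see in code. The sketch below is illustrative only, not the PR's implementation: the helper name `reusable_socket` is made up, and it assumes a socket2 version that exposes `set_reuse_address`/`set_reuse_port` (the latter is Unix-only and may sit behind a feature flag).

```rust
use socket2::{Domain, Protocol, Socket, Type};
use std::net::SocketAddr;

/// Hypothetical helper: create a TCP socket that can be bound to the same
/// local port the listener already occupies, so outgoing connections reuse it.
fn reusable_socket(local: SocketAddr) -> std::io::Result<Socket> {
    let domain = if local.is_ipv4() { Domain::IPV4 } else { Domain::IPV6 };
    let socket = Socket::new(domain, Type::STREAM, Some(Protocol::TCP))?;
    // SO_REUSEADDR allows binding to the listener's port.
    socket.set_reuse_address(true)?;
    // On Unix, SO_REUSEPORT is additionally needed for this to work reliably.
    #[cfg(unix)]
    socket.set_reuse_port(true)?;
    socket.bind(&local.into())?;
    Ok(socket)
}
```

The point is that a dial made from such a socket leaves the node with the same source port it listens on.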

```diff
-    match swarm1.next().await {
-        RequestResponseEvent::Message {
+    match swarm1.next_event().await {
+        SwarmEvent::NewListenAddr(addr) => tx.send(addr).await.unwrap(),
```

@dvc94ch (Contributor, author) commented:

Since we don't use get-if-addrs (which I think blocks the thread), it's not guaranteed that the event is emitted immediately.

```diff
@@ -128,6 +128,9 @@ pub trait Transport {
     where
         Self: Sized;
 
+    /// Perform transport specific multiaddr translation.
+    fn address_translation(&self, _server: &Multiaddr, observed: &Multiaddr) -> Option<Multiaddr>;
+
```

@dvc94ch (Contributor, author) commented:

This is what was discussed with @romanb
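
To make the new hook concrete, here is a rough sketch of what a TCP-flavoured translation could look like. It is illustrative only, not the implementation in this PR; the function name `translate` is made up and the crate paths are assumptions. The idea: keep the observed IP but substitute the port the server actually listens on.

```rust
use libp2p_core::multiaddr::{Multiaddr, Protocol};

// Illustrative sketch: take the IP part of the observed address and swap in
// the TCP port from the server's own listen address.
fn translate(server: &Multiaddr, observed: &Multiaddr) -> Option<Multiaddr> {
    // Find the listener's TCP port, e.g. /ip4/0.0.0.0/tcp/4001 -> 4001.
    let server_port = server.iter().find_map(|p| match p {
        Protocol::Tcp(port) => Some(port),
        _ => None,
    })?;
    // Rewrite the observed address, replacing its TCP port with the server's.
    let translated = observed
        .iter()
        .map(|p| match p {
            Protocol::Tcp(_) => Protocol::Tcp(server_port),
            other => other,
        })
        .collect();
    Some(translated)
}
```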

@mxinden (Member) left a comment:

I haven't fully followed the whole effort, thus please bear with me.

As far as I can tell this pull request does 3 things:

  • Move libp2p-tcp to async-io.

  • Watch for and react to address changes via if-watch.

  • Allow transport specific address translation.

If I am not mistaken these changes are not directly interdependent and could be proposed as separate pull requests, correct? If that is the case, splitting this pull request would help me as a reviewer and would overall simplify the process.

What do others think?

```rust
// Create and configure the raw socket (via socket2), bind it, and start
// listening, then register the resulting std listener with async-io.
let socket = self.create_socket(&socket_addr)?;
socket.bind(&socket_addr.into())?;
socket.listen(self.backlog as _)?;
let listener = Async::new(socket.into_tcp_listener())?;
```

@mxinden (Member) commented:

I am not familiar with the async-io ecosystem. If I read the docs correctly, one should prefer async-net over async-io:

> For higher-level primitives built on top of Async, look into async-net or async-process (on Unix).

https://docs.rs/async-io/1.3.1/async_io/struct.Async.html

Can you explain why async-io is needed here?

@dvc94ch (Contributor, author) commented:

It is required because futures from async-net take &self, making them impossible to box without unsafe code. Also, I don't really see why another library is needed when async-io is so easy to use.
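
For context, the async-io pattern in question is roughly the following (a minimal, made-up example, not code from this PR): wrap a standard-library socket in `Async`, which registers it with async-io's reactor and puts it into non-blocking mode, then await readiness-based operations on it directly.

```rust
use async_io::Async;
use std::net::TcpListener;

/// Made-up example: accept connections using async-io's `Async` wrapper
/// around a plain `std::net::TcpListener`.
async fn accept_loop(listener: TcpListener) -> std::io::Result<()> {
    // Registers the socket with the reactor and switches it to non-blocking mode.
    let listener = Async::new(listener)?;
    loop {
        // Yields an `Async<TcpStream>` plus the remote address.
        let (stream, addr) = listener.accept().await?;
        println!("inbound connection from {}", addr);
        drop(stream);
    }
}
```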

@romanb (Contributor) commented Dec 10, 2020

For what it is worth, I'm currently experimenting with this branch, mainly to explore options without a background task and thread, and without the async constructor for TcpConfig, as these aspects bother me a bit. If I manage to come up with something, I will report back here for further discussion.

@dvc94ch (Contributor, author) commented Dec 10, 2020

I think there is a good reason for spawning a background task and it's a shame that there isn't much interest in a "simple" solution (async-rs/async-std#912). I guess we can thank carllerche for that as he was the one lobbying to have task spawning removed from Context. (rustasync/team#56)

@dvc94ch (Contributor, author) commented Dec 12, 2020

@romanb I assume you gave up on finding a solution?

mdns has an async constructor too, so it really doesn't hurt at this point for tcp to have one as well. I think a background task/thread is required in this case; the only alternative would be for each listener to open its own netlink socket, which would be wasteful (instead of sharing one netlink socket per transport).

@romanb (Contributor) commented Dec 12, 2020

> @romanb I assume you gave up on finding a solution?

No, I haven't given up; I'm quite happy with the approach I'm currently pursuing and should have something to share by the end of next Monday.

> I think a background task/thread is required in this case; the only alternative would be for each listener to open its own netlink socket, which would be wasteful (instead of sharing one netlink socket per transport).

It's one background task & thread per TcpConfig::new() though. I just don't think TcpConfig::new() should be such an expensive, thread-spawning operation, even if the standard use case involves creating just one of these.

I currently think it is indeed sufficient for a TcpListenStream to directly poll its own IfWatcher, especially because I think an IfWatcher is only needed if the TcpListenStream listens on a wildcard address, not if it listens on a single fixed address. In the latter case, if the interface of that address goes down, the TcpListenStream will error on accept() pretty much immediately, without the need for notifications. So in practice there will usually be at most one (or maybe two, for both IPv4 and IPv6) TcpListenStreams whose listen sockets are listening on all interfaces and require IfWatcher notifications.

Furthermore, since in production configurations it is not even advisable to listen on wildcard addresses, there are neither extra background tasks nor threads nor IfWatchers used when you just bind a node to one or more specific addresses to listen on, as I would want and expect.

In any case, this is part of what I will propose next week. If I hit road blocks that cause me to give up, I will certainly post here, too.
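
A tiny sketch of the decision being described here (the helper name `needs_if_watcher` is hypothetical and appears in neither branch): interface-change notifications are only worth setting up when the listener is bound to a wildcard address.

```rust
use std::net::SocketAddr;

// Hypothetical helper: a listener bound to 0.0.0.0 or [::] needs an IfWatcher
// to learn about addresses coming and going; a listener on a fixed address can
// rely on accept() failing once its interface disappears.
fn needs_if_watcher(listen_addr: &SocketAddr) -> bool {
    listen_addr.ip().is_unspecified()
}
```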

@romanb (Contributor) commented Dec 14, 2020

I opened my proposal as #1887, which builds on this branch.

@dvc94ch (Contributor, author) commented Dec 14, 2020

Closing in favor of #1887

@dvc94ch closed this Dec 14, 2020