
Support for multiple interfaces #55

Open
batonius opened this issue Oct 11, 2017 · 11 comments

@batonius
Contributor

As mentioned in #50, I've been looking into support for multiple interfaces/devices. My overall plan is to leave the current EthernetInterface structure to represent a network interface with ARP cache, MAC/IP addresses, and a Device assigned to it, add InterfaceSet with a ManagedSlice of EthernetInterfaces and ipv4_gateway in it, and move all the packet processing logic from EthernetInterface to InterfaceSet.

I don't see any point in trying to do it statically, which means we should rely on trait objects a lot and accept the inevitable performance hit. The problem is that the current Device trait doesn't work well as a trait object because it uses the associated types by value. Since we need to call destructors on RxBuffer/TxBuffer, and Box is the only way to own a trait object, refactoring Device::transmit to return something like Result<Box<AsMut<[u8]>>> looks promising, but using Box requires alloc, something we can't afford here.

All that means the Device trait should be redesigned into something more suitable for use as a trait object, maybe something like #49 (comment), but with the functions passed as function pointers directly to transmit/receive, if that makes any sense. Maybe we could have Device own the latest received packet and provide a method returning a slice to it, and use a functional argument for transmit.

I'm not sure if I should go ahead with this approach, so I wonder if you have any ideas.

@whitequark
Contributor

So... why does smoltcp need to support multiple interfaces? It's not only perfectly doable but also IMO more convenient to build routing on top of it. That is, packets originating in the OS or destined to the OS are routed according to the OS' routing table, and the rest is handled using raw sockets, which also provide buffering, because it's not really possible to do routing without buffering with smoltcp given how ownership works in it.

@batonius
Contributor Author

That would mean I have to use an instance of SocketSet for each EthernetInterface, so sockets bound to 0.0.0.0 are going to be problematic, but I think I can just duplicate them. I'll try to implement this approach and report back with results.

@whitequark
Contributor

Once you figure out a design we might lift it into smoltcp, but right now I feel like it's not the right direction for this project to delve into.

@batonius
Contributor Author

After trying to implement it for some time, I've decided against building on top of smoltcp: coordinating several independent TCP/IP stacks under a single abstraction just doesn't feel right.

Instead, I opted for a simple ad-hoc implementation of loopback: redox-os/netstack@b54ead7. Basically, my ArpCache always returns 00:00:00:00:00:00 for 127.0.0.0/8 and local IPs, and my Device replaces zero dst MACs with the local MAC and puts the packet into the input queue I already had. While it solves my current problem, I think the idea of encapsulating multiple interfaces in a Device can be generalized into full-featured support for multiple interfaces, which would be much cleaner than my original idea.
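The trick might look roughly like this (a hedged sketch with hypothetical names, not the actual redox netstack code): the ARP lookup answers with an all-zero MAC for loopback and local addresses, and the device layer treats a zero destination MAC as "deliver back to ourselves":

```rust
const ZERO_MAC: [u8; 6] = [0; 6];

/// Toy ARP lookup: 127.0.0.0/8 and the local IP resolve to 00:00:00:00:00:00.
fn arp_lookup(dst: [u8; 4], local_ip: [u8; 4]) -> Option<[u8; 6]> {
    if dst[0] == 127 || dst == local_ip {
        Some(ZERO_MAC)
    } else {
        None // would fall through to a real ARP query
    }
}

/// On transmit, a zero destination MAC means "loop the frame back".
fn is_loopback_frame(dst_mac: [u8; 6]) -> bool {
    dst_mac == ZERO_MAC
}

fn main() {
    let local = [10, 0, 0, 1];
    assert_eq!(arp_lookup([127, 0, 0, 1], local), Some(ZERO_MAC));
    assert_eq!(arp_lookup(local, local), Some(ZERO_MAC));
    assert_eq!(arp_lookup([8, 8, 8, 8], local), None);
    assert!(is_loopback_frame(ZERO_MAC));
}
```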

Specifically, I'm thinking about changing the Device trait to return and accept a usize iface_id with each transmit/receive, binding interface configuration (hw address, IPs) to a specific iface_id, and changing ArpCache records to have an iface_id field. It's possible the loopback interface will have to be explicitly marked as such because of the way it handles the /8 loopback subnet. After that, I think it will be quite straightforward to implement rudimentary egress routing based on iface_ids. This way there will be no overhead for trait objects, and the existing code will work with a fixed iface_id = 0.

While I'm happy with my stop-gap solution for loopback for now (Redox doesn't support multiple NICs yet), I can investigate the described approach once #57 is done if you think it could work for smoltcp.

@whitequark
Contributor

whitequark commented Oct 25, 2017

I don't think this works. For example, an Ethernet port has an EthernetInterface on top of it, and a PPP link has a PppInterface on top of it. Now you can't make a socket that binds to either.

But I just realized something very nice. We don't need to add anything new to smoltcp for it to support multiple interfaces! You can simply use the same SocketSet with multiple interfaces! Naturally, sockets will filter out packets destined to addresses they aren't bound to, and if we use BTreeMaps for port dispatch it's efficient too.

@batonius
Contributor Author

Ethernet port has an EthernetInterface on top of it, and a PPP link has a PppInterface on top of it. Now you can't make a socket that binds to either

Why not? If we refactor EthernetInterface into, well, EthernetInterface and IpInterface, we can have the latter use EthernetInterface and PppInterface for the link level. The key here is that since the number and the types of link-layer technologies are predefined, we can have IpInterface hold Option<EthernetInterface>, Option<PppInterface>, Option<GreInterface>, etc., without using trait objects. Each of these interfaces would use a single Device encapsulating several physical links via iface_id.

You can simply use the same SocketSet with multiple interfaces!

I've played with this idea, but decided against it. One of the problems here is egress packets, especially for sockets like a DNS server, i.e. UDP bound to 0.0.0.0:53, potentially sending packets to anyone. Even if we had a way for an interface to peek into a socket's queue to cherry-pick packets it can route, which we don't, it would require number-of-interfaces * number-of-egress-packets checks for each poll(), each check involving matching against several CIDRs. Meanwhile, using something like https://github.com/hroi/treebitmap with caching would require searching for the egress interface once per session in the best case, and once per packet in the worst.

@whitequark
Contributor

we can have IpInterface to hold Option<EthernetInterface>, Option<PppInterface>, Option<GreInterface>, etc., without using trait objects. Each of these interfaces would use a single Device encapsulating several physical links via iface_id.

I'm really not happy with this approach but if I don't come up with anything better, so be it.

each check involving matching against several CIDRs

Matching against CIDRs is literally a single AND for IPv4 and two (four on 32-bit platforms) of them for IPv6. That's extremely cheap.
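To illustrate the cost being described, a minimal sketch of an IPv4 subnet check (hypothetical helper, just the textbook mask-and-compare):

```rust
// A CIDR check on IPv4 really is one AND plus a compare.
fn in_subnet_v4(addr: u32, network: u32, prefix_len: u32) -> bool {
    // Guard prefix 0: a shift by 32 would overflow in Rust.
    let mask = if prefix_len == 0 { 0 } else { u32::MAX << (32 - prefix_len) };
    addr & mask == network & mask
}

fn main() {
    let addr = u32::from_be_bytes([192, 168, 1, 42]);
    let net = u32::from_be_bytes([192, 168, 1, 0]);
    assert!(in_subnet_v4(addr, net, 24));
    assert!(!in_subnet_v4(addr, u32::from_be_bytes([10, 0, 0, 0]), 8));
    // IPv6 is the same idea over 128 bits: two u64 ANDs on a 64-bit
    // platform, four u32 ANDs on a 32-bit one.
}
```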

@whitequark
Contributor

whitequark commented Nov 13, 2017

  • I've realized that using an EthernetInterface for loopback is, of course, a wrong approach, and we should have a LoopbackInterface for that. This precludes your approach since a LoopbackInterface doesn't need a Device; it only ever needs a single buffer, and every egress packet gets immediately squeezed back in.
  • I increasingly like the idea of using the same PortSet with multiple interfaces. What I think we can do to speed up matching is to add some form of caching/memoization that makes checks much cheaper for most sockets, yet still lets ones bound to 0.0.0.0 or :: accept packets from anywhere.
  • Thinking more about egress on sockets bound to 0.0.0.0 or ::, that's a more serious problem than I first thought, and I don't know at all how to deal with that right now.

@batonius
Contributor Author

we should have a LoopbackInterface

Sure. The idea is that instead of making it ad-hoc, we should generalize it into some form of support for multiple interfaces.

I increasingly like the idea of using the same PortSet with multiple interfaces

I still don't :) It appears to be unnecessarily wasteful and counterintuitive; stuff that requires centralization, like 0.0.0.0 sockets and routing, would be hard to implement, and even harder to implement efficiently. Besides, if you want some kind of caching, you have to have some way to identify interfaces anyway.

I propose we stick to some form of the classic approach:
[diagram: the classic layered network stack]

Specifically, I have in mind a centralized IP layer that operates on a set of interfaces, each identified by an IfaceId; ingress packets are marked with the IfaceId of the interface they were received on, and each egress packet goes through routing, which (with a lot of caching) marks it with the IfaceId it should be sent from. Concrete interfaces could be abstracted with the two-layer scheme *Interface + *Device, just like every NIC driver in the classical scheme can manage several interfaces. This approach would allow adding, removing, and reconfiguring interfaces on the fly.

@whitequark
Contributor

Okay, I understand your proposal more clearly now. I will reconsider it.

@whitequark
Contributor

I have reconsidered it and, indeed, it makes perfect sense. Also, we can just have a ManagedSlice<'a, Managed<'b, for<'c> Device<'c>>> to avoid using Box.
