
Improve bandwidth #29

Open
aramperes opened this issue Jan 8, 2022 · 4 comments
Labels
enhancement New feature or request

@aramperes (Owner)

aramperes commented Jan 8, 2022

I tested some sftp tunneling on some big files and found that each connection is capped at around 2 MB/s. Naturally this depends on the latency and bandwidth to the WireGuard router, but with the official kernel implementation I get around 20 MB/s for the same endpoint.

$ sftp user@192.168.4.2
# 20MB/s

$ sftp -P 2222 user@127.0.0.1
# onetun: 2.4MB/s

Parity with the kernel is not a goal since userspace will be a bit slower, but I would like to aim for something like ~50% instead of ~10%.

@aramperes aramperes added the enhancement New feature or request label Jan 8, 2022
@aramperes aramperes added this to the v1.0 milestone Jan 8, 2022
@zonyitoo
Hello @aramperes, I may have encountered the same issue with smoltcp, which has much lower bandwidth than the system network stack. Do you think the key problem is in smoltcp's design, or are there improvements that could be made at the application level?

@aramperes (Owner, Author)

@zonyitoo A few thoughts:

  • According to smoltcp's benchmarks, it should be able to handle multiple Gbps, so I doubt the core library is the main issue
  • Tokio's design clashing with smoltcp might be, however; the Device implementation needs to read from a local buffer, and so there is some locking in onetun as well:

tokio::spawn(async move {
    loop {
        match bus_endpoint.recv().await {
            Event::InboundInternetPacket(ip_proto, data) if ip_proto == protocol => {
                let mut queue = process_queue
                    .lock()
                    .expect("Failed to acquire process queue lock");
                queue.push_back(data);
                bus_endpoint.send(Event::VirtualDeviceFed(ip_proto));
            }
            _ => {}
        }
    }
});

  • The Bus design in onetun was to simplify the code and to make it more extendable (for example for pcap, reverse tunneling, etc.). Since every consumer has to read each message before moving on to the next one, I'm sure targeted channels between the components would be far more efficient. That's how I wrote onetun originally; however, the number of channels needed between the different parts was getting out of hand. I may tweak the Bus design to allow for direct connections with that in mind.

@jkcoxson (Contributor)

Would this benefit from removing Tokio completely and switching to crossbeam? That should keep the architecture intact, but I don't know how much it would improve performance.

Alternatively, crossfire should be an almost drop-in replacement for Tokio's broadcast channel without having to remove the async runtime. Its README boasts higher throughput than even Tokio's single-consumer channels.

@aramperes (Owner, Author)

Moving away from Tokio/async may indeed be an option, but I would need to see clear benefits/benchmarks before making that decision. Maybe it's worth testing in a branch. Either way, I don't think this would solve the locking issue in the Device implementation, which I suspect is the main bottleneck.

crossfire doesn't seem maintained enough for me to make the switch, and hasn't been updated past Tokio 0.2.
