Performance #10

Open
2 tasks done
M0dEx opened this issue Jul 8, 2023 · 3 comments
Assignees: M0dEx
Labels: enhancement (New feature or request), help wanted (Extra attention is needed), performance (Throughput and/or latency issue), priority-medium (Medium priority issue)

Comments

M0dEx (Owner) commented Jul 8, 2023

The performance as of 0.1.6 is worse than expected.

Between two virtual machines on the same virtualized network (capable of about 30 Gbps of throughput), Quincy only manages:

  • ~ 1.1 Gbps during Server -> Client data transfer
  • ~ 1.3 Gbps during Client -> Server data transfer

with an MTU of 1400 bytes.
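
For scale, a quick back-of-envelope sketch (just the measured rates above restated as packet rates, nothing new): at a 1400-byte MTU, every packet crosses the TUN interface and the encryption path individually, so these throughputs translate to roughly a hundred thousand packets per second each way.

```rust
// Back-of-envelope packet rates implied by the measured throughput (MTU 1400).
// Illustrative only; derived from the figures above.
fn main() {
    let mtu_bits = 1400.0 * 8.0;
    for (direction, gbps) in [("Server -> Client", 1.1), ("Client -> Server", 1.3)] {
        let packets_per_second = gbps * 1e9 / mtu_bits;
        // Prints ~98214 and ~116071 packets/s respectively.
        println!("{direction}: ~{packets_per_second:.0} packets/s");
    }
}
```

If each of those packets costs a read and a write syscall plus a per-packet AEAD operation, per-packet overhead rather than raw bandwidth quickly becomes the limit.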

Server -> Client

Initial profiling did not yield anything suspicious, other than the fact that QuincyTunnel::process_inbound_traffic takes more time (had more samples) than QuincyTunnel::process_outbound_traffic during the Server -> Client data transfer, which is odd, as most of the data transferred should be going through QuincyTunnel::process_outbound_traffic.

The CPU usage on the Server virtual machine is also only about 60 %, balanced across all cores, which could mean either too much I/O or that the Client is the bottleneck.

The CPU usage on the Client is much higher, in the 90s.

Server flamechart:
s2c-server

Client flamechart:
s2c-client

Client -> Server

Pretty much the same behaviour as above: QuincyClient::process_inbound_traffic takes more time than QuincyClient::process_outbound_traffic, which is, again, suspicious.

The CPU usage on the Server side is above 90 %, while on the Client side it is only ~ 70 %.

Server flamechart:
c2s-server

Client flamechart:
c2s-client

Initial conclusions

It seems that the CPU usage on the receiving side is quite high, and that the receiving side spends more time in its respective process_inbound_traffic method, which is highly suspicious (most of the data transferred should be handled by the respective process_outbound_traffic method, at least that is my initial assumption).

Further investigation is needed to determine where the Quincy client and server spend too much time.

TODO

  • Test the same scenario with lower and higher MTU
  • Find the culprit behind the suspicious ratio of CPU time spent in process_inbound_traffic vs. process_outbound_traffic
M0dEx added the enhancement and help wanted labels on Jul 8, 2023
M0dEx self-assigned this on Jul 8, 2023
M0dEx (Owner, Author) commented Jul 8, 2023

https://tailscale.com/blog/throughput-improvements/

and

https://tailscale.com/blog/more-throughput/

might be useful with regard to optimizing TUN performance, which seems to be the problem at the moment (a lot of time is spent in poll_write for the TUN interface).

The changes Tailscale made to wireguard-go are available here:
https://github.com/WireGuard/wireguard-go/blob/master/tun/tcp_offload_linux.go
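
For context, the gist of the wireguard-go change (as I understand it, and only a sketch of the Linux-specific part, not something Quincy does today) is to create the TUN device with IFF_VNET_HDR and enable offloads via the TUNSETOFFLOAD ioctl, so that a single read/write on the TUN fd can carry a coalesced super-packet that the kernel splits and merges (GSO/GRO), drastically reducing the number of syscalls per byte. Assuming the libc crate, the ioctl part looks roughly like this; the helper name is made up, and the hard part, handling the virtio_net_hdr that then prefixes every packet, is omitted:

```rust
// Sketch only (not Quincy's current code): enable TUN offloads so the kernel
// coalesces/splits packets for us. The device must have been created with
// IFF_VNET_HDR; every packet read/written is then prefixed with a
// `struct virtio_net_hdr` that userspace has to handle.
use std::io;
use std::os::unix::io::RawFd;

const TUNSETOFFLOAD: u64 = 0x4004_54d0; // _IOW('T', 208, unsigned int)
const TUN_F_CSUM: libc::c_uint = 0x01; // checksum offload (required for TSO)
const TUN_F_TSO4: libc::c_uint = 0x02; // TCP segmentation offload, IPv4
const TUN_F_TSO6: libc::c_uint = 0x04; // TCP segmentation offload, IPv6

/// Hypothetical helper: enable GSO/GRO-style offloads on an already-open TUN fd.
fn enable_tun_offloads(tun_fd: RawFd) -> io::Result<()> {
    let offloads = TUN_F_CSUM | TUN_F_TSO4 | TUN_F_TSO6;
    // SAFETY: a plain ioctl on a valid fd; the argument is passed by value.
    let ret = unsafe { libc::ioctl(tun_fd, TUNSETOFFLOAD as _, offloads as libc::c_ulong) };
    if ret < 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}
```

The per-packet virtio_net_hdr handling and the TCP coalescing logic are what tcp_offload_linux.go linked above actually implements; the ioctl itself is the easy bit.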

M0dEx (Owner, Author) commented Jul 9, 2023

Different MTUs

With an MTU of 6000 bytes, the throughput nearly triples, to ~ 3 Gbps regardless of the data transfer direction.

From the flamecharts, it is clear that more CPU time is spent encrypting the packets, but most of the time is still spent in poll_write for the TUN interfaces.

The CPU usage also decreased to about 60-70 % on both the Server and the Client.
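
For scale (my arithmetic, not a separate measurement): ~ 3 Gbps at a 6000-byte MTU is roughly 62,500 packets per second, which is actually fewer packets per second than the ~ 98,000 needed for 1.1 Gbps at a 1400-byte MTU. The improvement is therefore consistent with the cost being dominated by per-packet work (TUN syscalls and per-packet encryption) rather than by raw byte throughput.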

Server -> Client

Server flamechart:
s2c-6000-server

Client flamechart:
s2c-6000-client

Client -> Server

Server flamechart:
c2s-6000-server

Client flamechart:
c2s-6000-client

M0dEx added the priority-medium label on Aug 13, 2023
M0dEx (Owner, Author) commented Feb 7, 2024

GSO/GRO support is work-in-progress: tun2proxy/rust-tun#45
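
For reference, once a TUN device is opened with IFF_VNET_HDR (which is what GSO/GRO support needs), every packet read from or written to it is prefixed with a 10-byte virtio_net_hdr. The layout below is the kernel ABI from linux/virtio_net.h written out as a Rust sketch (not rust-tun's actual API); handling it correctly is most of the work tracked in the linked issue.

```rust
// Sketch of the 10-byte header that prefixes every packet on a TUN device
// opened with IFF_VNET_HDR (layout from linux/virtio_net.h).
#[repr(C)]
#[derive(Clone, Copy, Debug, Default)]
struct VirtioNetHdr {
    flags: u8,        // e.g. VIRTIO_NET_HDR_F_NEEDS_CSUM
    gso_type: u8,     // VIRTIO_NET_HDR_GSO_NONE / _TCPV4 / _TCPV6 / ...
    hdr_len: u16,     // length of the protocol headers of the super-packet
    gso_size: u16,    // segment size the kernel should split the payload into
    csum_start: u16,  // where checksumming starts, if NEEDS_CSUM is set
    csum_offset: u16, // offset of the checksum field relative to csum_start
}
```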

M0dEx added the performance label on Feb 24, 2024