Performance Testing #14
Suggestion: Stick it in front of a game and see what happens. We could use Xonotic or SuperTuxKart from Agones as an example (in fact, that could be an interesting example to have on hand anyway).
Working on an example with https://iperf.fr/iperf-doc.php#3doc to demo sending lots of packets through - this can then be turned into a proper perf test.
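For anyone who wants a feel for what the iperf3 UDP test measures before the PR lands, here is a minimal Rust sketch of the same idea: number each datagram, blast them at a target, and count how many arrive. Everything here (the `udp_loss_check` function, the pacing interval, loopback addresses) is illustrative, not part of the actual PR; in a real run the sender would point at Quilkin's listening address instead of sending directly to the receiver.

```rust
use std::net::UdpSocket;
use std::thread;
use std::time::Duration;

// Send `total` sequence-numbered datagrams and report how many arrived,
// iperf3-UDP style. Both ends run on loopback here; in the real test the
// sender would target the Quilkin proxy instead.
fn udp_loss_check(total: u64) -> u64 {
    let recv = UdpSocket::bind("127.0.0.1:0").unwrap();
    let addr = recv.local_addr().unwrap();
    // Stop draining once no packet has arrived for 200ms.
    recv.set_read_timeout(Some(Duration::from_millis(200)))
        .unwrap();

    let receiver = thread::spawn(move || {
        let mut seen = 0u64;
        let mut buf = [0u8; 8];
        while recv.recv_from(&mut buf).is_ok() {
            seen += 1;
        }
        seen
    });

    let send = UdpSocket::bind("127.0.0.1:0").unwrap();
    for seq in 0..total {
        // Each payload carries its sequence number, like iperf3's UDP mode,
        // so gaps could also be inspected, not just counted.
        let _ = send.send_to(&seq.to_be_bytes(), addr);
        // Pace the sender so loopback isn't deliberately overwhelmed.
        thread::sleep(Duration::from_micros(50));
    }
    receiver.join().unwrap()
}

fn main() {
    let total = 200;
    let seen = udp_loss_check(total);
    println!("received {seen}/{total} datagrams");
}
```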
About to submit a PR, so that everyone can replicate locally, but wanted to capture this. When running iperf3 against itself on my local machine (no Quilkin proxy), I get this result:
Very little jitter, and nothing dropped. Unfortunately, when I do the same test with Quilkin in between, I see the following:
As you can see, quite a bit of packet loss and some pretty nasty jitter. I've run this a few times and seen similar results. I'm not 100% sure it's Quilkin, just because I had to do some TCP tunnelling to get iperf3 to work with Quilkin in between, and I'm assuming I got that all correct, but it's worth noting. I captured Quilkin's Prometheus metrics along with all the logs, and they report no dropped packets or errors. iperf3.zip @iffyio when you ran your load tests, did you get reports back on packets dropped? The good news is: CPU is low for the number of packets I'm sending, and memory is very stable.
Packet loss would be expected if Quilkin isn't keeping up with what iperf3 is sending: its receive buffer fills and the OS drops any new packets, which would be fine, as long as Quilkin is forwarding all the packets it receives, which sounds like it is the case. In my previous tests I didn't explicitly check for packet loss, but I was pretty sure some were being dropped, since the sender there sent as fast as was possible.
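The mechanism described above can be reproduced with a small sketch: if nothing drains a UDP socket, datagrams accumulate in the kernel's receive buffer until it is full, and the OS silently discards the rest, so the receiving process (and its metrics) never sees them. This is an illustration using OS-default buffer sizes, so the exact number kept will vary by platform; the function name and counts are made up for the demo.

```rust
use std::net::UdpSocket;
use std::time::Duration;

// Blast `total` datagrams at a socket that nobody is reading, then drain it.
// Whatever didn't fit in the kernel receive buffer while we weren't reading
// was dropped by the OS -- invisibly, as far as this process is concerned.
fn drain_after_blast(total: u32) -> u32 {
    let recv = UdpSocket::bind("127.0.0.1:0").unwrap();
    let addr = recv.local_addr().unwrap();
    let send = UdpSocket::bind("127.0.0.1:0").unwrap();

    let payload = [0u8; 1024];
    for _ in 0..total {
        // Sends can themselves fail (e.g. ENOBUFS on some platforms); the
        // demo just ignores those, like a sender pushing as fast as it can.
        let _ = send.send_to(&payload, addr);
    }

    // Only now drain the socket and count what survived.
    recv.set_read_timeout(Some(Duration::from_millis(100)))
        .unwrap();
    let mut buf = [0u8; 2048];
    let mut seen = 0u32;
    while recv.recv_from(&mut buf).is_ok() {
        seen += 1;
    }
    seen
}

fn main() {
    let total = 4000;
    let kept = drain_after_blast(total);
    println!("kept {kept} of {total} datagrams");
}
```

On a typical Linux box the default receive buffer holds only a small fraction of 4000 one-kilobyte datagrams, so most are dropped before the drain starts, which matches the iperf3 numbers above.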
Another idea for benchmarking: https://www.jibbow.com/posts/criterion-flamegraphs/ In general, https://github.com/tikv/pprof-rs looks pretty awesome for looking into things. Shame it doesn't do heap analysis as well.
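Following the approach in that post, a Criterion benchmark can be wired up with pprof-rs's Criterion integration so every run emits a flamegraph. A rough sketch, assuming `criterion` and `pprof` (with its `criterion` and `flamegraph` features) are in `dev-dependencies`, and with `process_packet` as a made-up stand-in for whatever Quilkin code is under test:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use pprof::criterion::{Output, PProfProfiler};

// Hypothetical stand-in for the code under test.
fn process_packet(buf: &[u8]) -> usize {
    buf.iter().map(|b| *b as usize).sum()
}

fn bench(c: &mut Criterion) {
    c.bench_function("process_packet", |b| {
        let buf = vec![0u8; 1500]; // MTU-sized payload
        b.iter(|| process_packet(black_box(&buf)))
    });
}

criterion_group! {
    name = benches;
    // Sample at 100Hz and write a flamegraph next to Criterion's report.
    config = Criterion::default()
        .with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
    targets = bench
}
criterion_main!(benches);
```

Running `cargo bench -- --profile-time 10` should then produce `flamegraph.svg` under Criterion's output directory for the benchmark.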
🤔 CPU is definitely not a bottleneck here. But the point of setting this up is so we can dig into these types of issues. We can keep building out tooling and see where we end up. I'll adjust the PR to be something that passes a bit better, rather than an overload, and we can take things from there.
Dropping the parallelism to 75 (on my laptop at least) means that packets don't drop, so it does seem to be a case of overwhelming the system. I am also realising that I am not smart, and I'm reading these jitter results badly. I read
* iperf3 performance testing example. Work on #14
* Extra tweaks.
* Review update: trap exit and cleanup.
Moving this to high priority because I'd like to have a
Going to close this, as we now have a couple of benchmarks, and we can track any of them in newer, more specific issues.
We should have some kind of repeatable performance harness, ideally running on a regular basis, so we can see if we have performance problems or regressions.