Server performance & optimization #455
Hi, https://gist.github.com/dingodoppelt/802c40b1cb13c75d96f38b9604fa22df - cheers, nils
Thanks @dingodoppelt. Could you please describe the test session/environment? (i.e. how many clients were connected, which hardware/operating system you were running the server on, and whatever else you feel is noteworthy)
@sthenos you mentioned in https://www.facebook.com/groups/507047599870191/?post_id=564455474129403&comment_id=564816464093304 that you're running the server on Linux now.
I tested with 12 clients connected from my machine, with small network buffers enabled and a buffer size of 64 samples.
Are you still interested in this data? I can run a few tests on Ubuntu over the weekend.
One quick comment @WolfganP -- for some reason your build command line produces a lower-case jamulus binary (rather than a simple Jamulus)... EDIT: dawn strikes... yes, it does have noupcasename, so that explains it. If you're on anything but Windows, you'll probably want that too. Final edit to note: jamulus.drealm.info is running with profiling. I'll leave it up over the weekend so it should amass a fair amount of data. I'll run gprof on it after that.
A different view should come from the Rock and Classical/Folk/Choir genre servers that I've just updated to r3_5_9 with profiling.
They probably won't show much OPUS usage, but this should show anything that's "weird" with server list server behaviour (although they only have about 20 registering servers, unlike Default). I wasn't sure what...
@pljones yes, I added |
Standard build:
This is just changing the binary name to "Jamulus", IIRC:
This was
Had a few people tonight noticing additional jitter. Not everyone... Those who noticed - myself included - had just upgraded to 3.5.9. No idea why... (I "fixed" it for the evening by upping my buffer size from 64 to 128.) 14 clients connected to the server and it's looking like this in top:
Mmm, I guess those... but it copes with only three in heavy demand. 129 and 131 seem left out. Let's see what the gprof output looks like in the morning :).
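As an aside, per-thread CPU usage can be watched in top's threads view (just an illustration, not necessarily how the figures above were captured):
top -H -p $(pidof jamulus)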
OK, I decided to restart the central servers without profiling before I totally forget, so all the numbers are now in.
Thanks for the info @pljones, good to also have some performance info for the Central Server role. Regarding the info on the audio server role, it seems to confirm the CPU impact of CServer::ProcessData and some Opus routines (I assume that is a result of the mix processing inside CServer::OnTimer), and that makes sense (at least to me). Another item I think needs some attention (or verification that it is already optimized) is the buffering of audio blocks to avoid unnecessary memcopies. But I'm still reading the code :-)
Of course @storeilly, the more information the better, to compare performance across different use cases, verify common patterns of CPU usage, and direct the optimization efforts.
Here is a short test on a GCP n1-standard-2 (2 vCPUs, 7.5 GB memory) running Ubuntu 18.04.
Overnight runs with 1 or 2 connections... Choir meeting later, so I will run it again after that.
Thanks @storeilly for the files, but those last two cover an extremely short period of app usage; they don't even register significant stats to evaluate (even the cumulative times are at 0.00).
Oh, sorry about that, maybe because I had them running as a service. I saw the message just before the choir meeting so I ran this up as a live instance. We only had 8 connections for about 90 mins, so I hope it is of some use.
Thanks @storeilly, that latest file is more representative of a live session and similar to the others posted previously. Thanks a lot for sharing.
For your info: I will change the ProcessData function now to avoid some of the Double2Short calls and have better clipping behavior.
Excellent @corrados, we can keep running profiling sessions here and there and measure the improvements. Another thing I haven't been able to pay sufficient attention to yet is the management of the audio block buffers, to make sure unnecessary memcopies are avoided. Do you recall how it is implemented?
I will do further investigations when I return from my vacation.
Is there a possibility somebody could build a Windows exe with a profiling config? A friend is trying a multi-server load test on Windows tomorrow evening.
https://docs.microsoft.com/en-us/visualstudio/profiling/running-profiling-tools-with-or-without-the-debugger?view=vs-2017 The Windows build doesn't seem to like the -pg profiling flags.
Leaving that aside: Qt Creator under Windows has an "Analyze -> Performance Analyzer" tool. First thing, it kicks off the compile... Before it tries to run, it says...
Here is another one: https://gist.github.com/dingodoppelt/9fecd468be2176dacd6d6d3ae3d1d078
Thanks dingodoppelt. In your profile log the ProcessData() function is much lower in the list compared to the profile given by storeilly in jamprof03.txt. So the Double2Short optimization may already have given us a faster Jamulus server.
I've been dabbling with getting my service units to run at real time priority. The 2013 documentation linked from one of the guides is actually out of date. There's no need to fuss with changing cgroups from within the service unit - the latest kernels are quite happy dealing with individual slices.
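For reference, on current systemd the scheduling policy can be requested directly in the unit rather than via cgroup tricks; a minimal sketch, assuming a service called jamulus.service (unit name and priority value are illustrative, not taken from this thread):
sudo systemctl edit jamulus.service
# add in the override:
#   [Service]
#   CPUSchedulingPolicy=rr
#   CPUSchedulingPriority=50
sudo systemctl restart jamulus.service
# check which threads actually end up with a real-time scheduling class:
ps -eLo pid,tid,cls,rtprio,comm | grep -i jamulus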
Having said that, I noticed something when checking the status with systemctl status:
So that starts up the server at real-time priority quite happily:
What I couldn't follow in the flow of control between threads was why CSocket loses real-time priority and yet JamRecorder retains it. I'm looking to drop the priority of the jam recorder, really - and I'd have thought the socket handling code wanted to retain it? Here's the patch that names the CSocket thread:
(I don't know what CHighPrecisionTimer is used for but I didn't see it get a thread - maybe it's for short-lived stuff?)
The way I understood the server code, CHighPrecisionTimer is the base for the "realtime" processing of the client audio mixes in ProcessData (one of the functions consistently topping the performance charts) via the OnTimer interrupt occurring every 1 ms (https://github.com/corrados/jamulus/blob/f67dbd1290a579466ff1f315457ad9090b39747e/src/server.cpp#L792). That async processing of data via the timer was probably why the early parallelization test didn't work as intended.
That's good news, thanks :-).
We have a choral group with ~100 members who would like to use one of our Jamulus instances with --numchannels=120. At the moment, our Jamulus 3.5.9 only supports 50. Which version can I download or compile to take advantage of numchannels greater than 50? Thanks in advance.
You can use the latest version, 3.6.0, which allows --numchannels of up to 150. To support that number, you will also want to run the server on a dedicated machine with good bandwidth and several CPU cores, and to enable --multithreading.
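For illustration, such a server could be launched roughly like this (a sketch only; the binary path and channel count are assumptions):
./Jamulus --server --nogui --numchannels 120 --multithreading
Adding --fastupdate would also enable the small network buffers discussed elsewhere in this thread, which tends to cost extra CPU per client, so it may be worth testing with and without it.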
You might also try my fork at https://github.com/kraney/jamulus where I've been able to boost performance further on lesser hardware. Independent confirmation of the fork's performance would be welcome.
Excellent, thanks. I will try 3.6.0 first and report back presently.
I made a test with ~20 to 30 Jamulus/JACK/fake-sound-card/headless clients against the latest 3.6.0 (and some mini changes from my fork) on a dual-core virtual machine (Intel Xeon E5-2660 v3), and the latency was so-so. htop reported around 70% usage per core. The server was started with -F and -T. Before that I tried to connect my interface to JACK on a laptop running a 6th-gen Intel Core m5. The client went almost unresponsive and one core on the server was at 99%. Even chat messages from another device were delayed. Does that mean that slow clients can delay the server?
Hi, I would say no. But it's hard to pin down that bottleneck in your scenario. What is interesting is that on my 4-CPU-core machine I could serve 100 clients, see #455 (comment).
When I disable small network buffers I can fit way more clients. I did my tests with small network buffers enabled (buffer size 64) and this really kills most of the servers (which are virtual machines in the cloud) at around 20-30 clients. The interesting part was that I could overload just one server (by connecting 35 clients; it showed pings and delays of >500 ms) but not the hardware the servers were running on (the other server still fit 10 clients before it broke). I wonder if it is possible to give that spare processing power to just one server, to fit as many clients as the hardware can handle.
This is an interesting point. As far as I understood, you are running a virtual server. So you cannot say that you reached the limit of the hardware, only the limit of your virtual server. Maybe the virtual server limits CPU access on a per-thread basis; that would explain the behaviour you describe. I think if we really want to tweak the multithreading performance of the Jamulus server even further, we have to do it on real hardware and not on a virtual server, because the virtual server has too many unknowns when it comes to resource sharing.
Two points here. First, it's definitely true that both AWS and GCP offer instance types where you get fractional CPU cores; your instance gets "credits" that accumulate over time, and when you are actually using the CPU you spend those credits. That lets you "burst" and use the whole core for a little while, but with sustained use you'll run out of credits and get swapped out for another VM. These instances aren't a good fit for Jamulus if you're trying to maximize the number of clients you can support. There's an alternative to switching to bare hardware, which is to use an instance type that doesn't limit you to fractional cores. Second, getting 10 more clients on a second server instance doesn't imply there's a way to make the first instance grow by 10 more clients instead, because the amount of work doesn't grow linearly with the number of clients; it grows with n^2. Going from 35 clients to 45 clients on a single server adds roughly 800 new mix operations, while starting a second server with 10 clients instead only adds about 100 mix operations. Having headroom for 100 more mix operations only gets you from 35 clients to 36 if it's all in the same server process.
I am wondering if we are taking the wrong perspective on performance with cloud services. Each cloud service has different approaches to maximizing utilization of its computing and networking resources. Jamulus is unique because we care about real-time performance. Most (ideal) cloud apps care more about bursts of computation and less about real-time performance (or "real-time" for those apps means hundreds of milliseconds). Task switching means buffering, and we know buffering means latency. As we measure the load of additional clients, we should be looking at how buffering and latency change.
Folks: I hope this can help. I am willing to spend the extra for a dedicated 4-CPU machine for a short while if you think that will help.
Hi there, I have been playing around with sysbench, a tool for performance measurement, and I found that cloud server performance is pretty good CPU-wise but awful for memory performance, where my dedicated machine really shines. I ran this test on my home machine:
and on my cloud server:
This doesn't look too good in comparison. Maybe this is the bottleneck? Do you think sysbench could be a reliable tool to measure server performance instead of trial and error, or are there any other tools I could try?
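For anyone who wants to reproduce this kind of comparison, the sysbench runs would look roughly like the following (sysbench 1.0.x syntax; the parameters are illustrative, not necessarily the ones used above):
sysbench cpu --cpu-max-prime=20000 --threads=1 run
sysbench memory --memory-block-size=1K --memory-total-size=10G run
Comparing the events-per-second and MiB/sec figures between the cloud instance and the dedicated box gives the kind of contrast described here.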
How much memory is used by a Jamulus server thread?
On my Windows system at home a server uses only about 60 MB of memory.
I meant Jamulus memory usage, as you've given. The test was about memory throughput, if I read it correctly. If Jamulus isn't memory-constrained, then the test shown won't be representative of Jamulus performance.
I just wondered if that might be the issue with the cloud servers. The CPU performance is fine and doesn't really deviate from what I measure on real hardware. The only thing I could find using sysbench was the restricted memory throughput in comparison to real hardware, so I figured this might be another thing to look at, since cloud servers die long before the CPU is used up.
On average, that may be true. Are you getting a reading for consistency of performance - i.e. how much the CPU performance deviates between maximum throughput and minimum? As noted above, it's that stability that Jamulus needs and which directly affects its capacity.
@dingodoppelt I don't know if you are on Facebook, but there is a report about successfully having 53 clients connected to a Jamulus server on a 4-CPU virtual server: https://www.facebook.com/groups/619274602254947/permalink/811257479723324: "Had 53 members of a youth orchestra this evening on Jamulus (and another 15-20 listening on Zoom). Took about 90 minutes of setup so we only got through a reading of Jingle Bells at the end but it was a great first step! AWS 4 vCPU server hit ~55%."
@corrados: my server does this too, but not with every client on small network buffers. I've played on servers with around 50 people, but you can never tell if everybody has small network buffers enabled. In my tests I connected every client with the same buffer size and small network buffers enabled. It only worked for me on dedicated hardware (namely the WorldJam and Jazzlounge servers).
I haven't done any testing or thorough research here yet, but just a heads-up: there are also several kernel parameters for the UDP networking stack, and general network parameters, that can be tuned with sysctl and might have a positive effect: https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
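As a starting point, these are the kinds of knobs involved; standard Linux sysctls, with purely illustrative values that should be benchmarked before adopting:
# current values
sysctl net.core.rmem_max net.core.wmem_max net.core.netdev_max_backlog
# raise socket buffer limits and the device backlog, e.g.:
sudo sysctl -w net.core.rmem_max=2621440
sudo sysctl -w net.core.wmem_max=2621440
sudo sysctl -w net.core.netdev_max_backlog=5000
# persist via a file under /etc/sysctl.d/ if they turn out to help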
@sbaier1 I am out on the Internet frontier (i.e. far from the short distances within Europe) and see network behaviour as a dominant contributor to latency-dependent performance. I'd be interested in a discussion about what can be done to quench traffic and discard packets. These mechanisms might be a good way to improve performance (at the expense of audio interruption, which would be happening anyway). Especially with regard to a different thread on buffer backups (they called it bufferbloat), the only way to manage problems in the network with packet backup at some routers would be creating some code to detect backups and quench traffic. I have some musicians that will "tolerate" 20-70 ms latency rather than not have music. Actively managing the packet rate at 40+ ms would greatly improve the experience. (Note: I am thinking that some of the buffer backups are the interaction between our UDP traffic and other people's TCP cross traffic.)
There's a new PR for multithreading: #960
Thanks @ann0see for pointing me to this thread. I could not read all the comments here, but I want to add my findings. I noticed difficulties fitting more than about 17 clients on my Hetzner cloud vServer, even though I tried configurations from 2 to 16 cores. My guess is that the CPU cores on my cloud server are not as strong as the ones the current multithreading code was developed for; I see a comment mentioning Amazon cloud servers there. I only tested this change with up to 21 clients, and I see much better CPU usage across the cores.
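As a side note, per-core utilisation during a session can be watched with mpstat from the sysstat package (just an illustration, not necessarily the tool used for the observation above):
mpstat -P ALL 1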
Hi all - in an effort to rationalise the Issues list into something just for actionable work items (we hope to apply milestones and things at some point), I'm moving this to a discussion if that's OK.
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Follows from #339 (comment) for better focus of the discussion.
So, as the previous issue started to explore multi-threading on the server for better use of resources, I first ran a profiling of the app on Debian.
Special build with:
qmake "CONFIG+=nosound headless noupcasename debug" "QMAKE_CXXFLAGS+=-pg" "QMAKE_LFLAGS+=-pg" -config debug Jamulus.pro && make clean && make -j
Then I ran it as below, connecting a couple of clients for a few seconds:
./jamulus --nogui --server --fastupdate
After disconnecting the clients I gracefully killed the server:
pkill -sigterm jamulus
And finally ran gprof, with the results posted below:
gprof ./jamulus > gprof.txt
https://gist.github.com/WolfganP/46094fd993906321f1336494f8a5faed
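For anyone repeating this, a couple of standard gprof variants can make the output easier to digest (a sketch; it assumes the server exited cleanly so that gmon.out was written to the working directory):
gprof -p ./jamulus gmon.out > flat_profile.txt
gprof -q ./jamulus gmon.out > call_graph.txt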
It would be interesting to see those who observed high CPU usage run test sessions and collect profiling information as well, to detect bottlenecks and potential code optimizations before embarking on multi-threading analysis that may require major rewrites.