Standalone MAVCesium uses a lot of resources #34
I have yet to profile anything, but I'm sure we can save some of those resources :) Unfortunately (as you know) pyMAVLink is a bit of a CPU hog. But knowing where the memory and CPU are being used is a good start!
Ouch, that's 724 MB resident after it's been running for a while. Restarting it goes back to low memory usage (40 MB), but it climbs slowly. It must either be a memory leak, or something is creating resources without releasing the previous ones.
Changing the wait time as suggested doesn't seem to make any difference.
Nasty! Thanks for reporting this. I'm sure I'll be able to track something down...
I haven't had a chance to profile the code yet, but looking over it there were a couple of places that needed fixing. One being an infinitely sized queue with no consumer! =S
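For context, here's a minimal sketch of that failure mode and one way to bound it. The queue and the `put_latest` helper are hypothetical illustrations, not the actual MAVCesium code:

```python
import queue

# Failure mode: an unbounded queue (maxsize=0 means "no limit") that is
# only ever written to. With no consumer, every put() keeps a reference
# alive forever, so memory grows with every incoming MAVLink message.
unbounded = queue.Queue()

# Bounding the queue caps the damage: once full, drop the oldest entry
# so stale telemetry never accumulates.
bounded = queue.Queue(maxsize=100)

def put_latest(q, msg):
    """Enqueue msg, discarding the oldest entry if the queue is full.
    (Assumes a single producer; a race is possible with many threads.)"""
    try:
        q.put_nowait(msg)
    except queue.Full:
        q.get_nowait()  # drop the stalest message
        q.put_nowait(msg)
```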
Spent pretty much all day on this with no real luck... Some things to note:
Using guppy to try to profile the memory usage. Launching with no websocket connections:
Pretty stable around 161264 bytes used... Adding a connection:
Memory usage instantly increased to 435728 bytes...
Memory usage has increased again while serving no more connections...
So clearly something within the Tornado server is not cleaning up after itself.
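For anyone wanting to reproduce those numbers, this is roughly how a guppy heap snapshot is taken (the placement of the calls around the server is an assumption on my part):

```python
from guppy import hpy  # guppy on Python 2, guppy3 on Python 3

hp = hpy()
hp.setrelheap()   # measure growth relative to this baseline

# ... start the Tornado server and let a websocket client connect ...

print(hp.heap())  # dump live objects allocated since setrelheap()
```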
With regard to CPU usage, we can see that most of the time spent by pyMAVLink goes to parsing the incoming char buffer,
and finally we see that most of the time is spent decoding the message buffer into (maybe) a message within
There you have it... So if you have ever wondered where pyMAVLink spends most of its CPU time, wonder no more!
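In case it helps others reproduce the measurement, a minimal cProfile harness along these lines will surface the same hotspots (the `udpin:` endpoint and the message count are placeholders):

```python
import cProfile
import pstats

from pymavlink import mavutil

# Placeholder endpoint; any MAVLink stream will do.
conn = mavutil.mavlink_connection('udpin:0.0.0.0:14550')

def pump(n=10000):
    # recv_msg() drives the char-buffer parsing and message decoding
    # identified above.
    for _ in range(n):
        conn.recv_msg()

cProfile.run('pump()', 'mav.prof')
pstats.Stats('mav.prof').sort_stats('cumulative').print_stats(10)
```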
Re: CPU, that's a really good target for improvement for pymavlink! Worth posting as an issue there?
At the moment there aren't any changes in the memory_fix branch that I would expect to make a difference... only a few small changes, but nothing to fix what you are experiencing. It's interesting to note the memory climb without any clients connected; in my testing I didn't see anything nearly as pronounced as the graph you have.
My first thought was that the mavlink connection was keeping some sort of in-memory log of received messages, causing the increased usage over time. I did go digging and didn't find any suspicious code... not to say the issue isn't with pymavlink, I just didn't find it while looking for it.
The majority of the CPU usage reduction comes from the reduction in message stream rate... fewer messages to parse means less CPU used. The issue is that the display becomes a bit 'blocky' between updates. We can tune the message rate to help balance the CPU usage if that would help (or make the rate a launch option?).
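If we do expose the rate, requesting a lower stream rate from the autopilot would look something like this with pymavlink (the endpoint and the 4 Hz figure are just examples):

```python
from pymavlink import mavutil

conn = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # example endpoint
conn.wait_heartbeat()

# Ask the autopilot for 4 Hz across all streams: fewer messages in
# means fewer parse/decode calls and less CPU on the server side.
RATE_HZ = 4
conn.mav.request_data_stream_send(
    conn.target_system, conn.target_component,
    mavutil.mavlink.MAV_DATA_STREAM_ALL,
    RATE_HZ,
    1,  # 1 = start sending, 0 = stop
)
```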
Ah OK. mavlink-params, which is a very simple DroneKit app for managing parameters, has completely static memory usage. So the problem isn't inherently present in pymavlink.
Thanks for testing. I'm pretty sure it's related to Tornado but just need to pin down where...
Right... so I'm pretty sure it's in pyMAVLink. By commenting out
Is that a bug in pymavlink, or a feature?
I'm going to call it a bug... The behavior we were seeing is the result of a bytearray being extended over and over again.
The issue is that every time this code is run there is always a complete message waiting to be decoded. Because of this the final
which is equivalent to what is now in the MAVCesium server code.
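To illustrate the pattern, here's a toy sketch with fixed-length frames; this is not the actual pymavlink code, just the shape of the leak and the fix:

```python
MSG_LEN = 16  # toy fixed-length frame; real MAVLink frames vary in length

class LeakyBuffer:
    """Consumed bytes are skipped with a cursor but never freed, so the
    bytearray retains every byte ever received and grows without bound."""
    def __init__(self):
        self.buf = bytearray()
        self.pos = 0

    def feed(self, chunk):
        self.buf.extend(chunk)
        frames = []
        while len(self.buf) - self.pos >= MSG_LEN:
            frames.append(bytes(self.buf[self.pos:self.pos + MSG_LEN]))
            self.pos += MSG_LEN
        return frames

class BoundedBuffer(LeakyBuffer):
    """The fix: slice consumed bytes off the front after every pass, so
    at most one partial frame is retained between calls."""
    def feed(self, chunk):
        frames = super().feed(chunk)
        self.buf = self.buf[self.pos:]
        self.pos = 0
        return frames
```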
pyMAVLink issue raised: ArduPilot/pymavlink#130
Did you also observe increased CPU utilization over time?
112 MB of resident memory is a lot for a small embedded computer (for example, a Raspberry Pi Zero only has 384 MB after graphics). Can this be reduced?
The CPU usage is presumably down to pymavlink?