SABnzbd package memory usage #2856

Closed
Safihre opened this Issue Jul 29, 2017 · 28 comments

Safihre (Contributor) commented Jul 29, 2017

For a number of years people have reported problems with the memory usage of SABnzbd on Synology: https://www.google.ch/search?q=SABnzbd+DSM+memory
High usage is of course expected during downloading, but the problem is that the memory stays in use after downloading and isn't freed.
For example: sabnzbd/sabnzbd#439
Recent: https://forums.sabnzbd.org/viewtopic.php?f=2&t=22867

This seems specific to this package, as we don't have reports of this on desktop or other NAS systems.
Could it be something in the way Python is compiled in SynoCommunity? Maybe the flags need updating, or maybe it's specific to certain architectures?
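
If someone with an affected Synology wants to help narrow this down, a snippet like the following, run in the package's own Python, would show exactly which flags the SynoCommunity build was configured with (purely diagnostic, not part of SABnzbd):

```python
# Print the build configuration of the running Python interpreter, so
# builds on different architectures can be compared.
import platform
import sysconfig

print("%s on %s" % (platform.python_version(), platform.machine()))
print(sysconfig.get_config_var("CFLAGS"))       # compiler flags used for this build
print(sysconfig.get_config_var("CONFIG_ARGS"))  # arguments passed to ./configure
```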

BenjV commented Jul 29, 2017

It is not very likely that there is a memory leak in Python itself.
Only external libraries can have memory leaks, for example C libraries used in modules.

A known leak is, for example, in the lxml library.
That library is used by, for example, the etree module.

This will happen in every environment, not exclusively on the Synology platform,
but of course it won't be as much of a problem on Windows systems with their vast amount of memory available.

Diaoul (Member) commented Jul 29, 2017

Additionally, Windows systems are restarted frequently, which frees the memory anyway.

Safihre (Contributor) commented Jul 29, 2017

While normally I would agree with you both, the users above run into these problems within a day or less. Our Windows users (including myself) don't restart our systems that often.

Especially interesting is that the memory isn't freed when the last job finishes and the queue becomes empty. On Windows (and other platforms) the memory almost instantly returns to low values, close to what it was at the start of the program.

Diaoul (Member) commented Jul 29, 2017

If you can identify the root cause, maybe there is something that can be done on our side. I personally switched from SABnzbd to NZBGet a long time ago because it's too memory hungry.

There is nothing fancy about the way Python is compiled, and there is no reason why it would behave differently across platforms, as they are all compiled with the same flags. Maybe there is something wrong with the toolchains provided by Synology, then, or with the hardware.

Could you reproduce this with various Python versions? Have you tried Synology's version? Can you profile memory usage in SABnzbd? Maybe take a look at the GC's debug log?
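
For instance, something along these lines could be dropped into SABnzbd temporarily to watch both the resident set size and cyclic-GC activity over time (a sketch only; the interval is arbitrary and none of this is existing SABnzbd code):

```python
# Log cyclic-GC passes and the process's peak resident set size.
import gc
import resource
import threading

gc.set_debug(gc.DEBUG_STATS)  # print a line to stderr for every collection pass

def log_peak_rss(interval=60.0):
    # ru_maxrss is reported in kilobytes on Linux
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("peak RSS: %d kB" % peak)
    threading.Timer(interval, log_peak_rss, [interval]).start()

log_peak_rss()
```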

Safihre (Contributor) commented Jul 29, 2017

I do not have a Synology; all I have to go on are users' reports. I cannot reproduce this on, for example, a Raspberry Pi 3.
Lots of improvements have been made in SABnzbd, reducing memory and CPU usage drastically.
What should I tell users to try? How can they use Synology's Python, for example?

Safihre (Contributor) commented Aug 19, 2017

To make sure it's not SABnzbd, with the help of a user who has a Synology, we used identical settings within SABnzbd (copied settings file, only changed the download locations) to download the same NZB on both a Raspberry Pi 3 and his Synology.
After the download finishes, the Pi 3 goes back to ~40MB memory usage, while the Synology reports 456MB used.

Not sure what to do to debug this.

ymartin59 (Contributor) commented Sep 2, 2017

@Safihre I would change the title, as this issue is probably limited to SABnzbd.

ignoremenow commented Sep 2, 2017

Stop, then start, then go to the main page - 21MB
Click around config, then go back to the main page - 34MB
Add an nzb and just download it; when completed - 60MB
Add an nzb, download/verify/unrar/delete - 63MB

ignoremenow commented Sep 2, 2017

With pickle:
Add an nzb, download/verify/unrar/delete - 56MB
Without pickle:
Add an nzb, download/verify/unrar/delete - 60MB

ignoremenow commented Sep 2, 2017

These tests were done about 5 hours ago. Right now memory usage is at 33MB.

Safihre changed the title from "Python memory usage" to "SABnzbd package memory usage" on Sep 2, 2017

BenjV commented Sep 2, 2017

This seems normal to me.

SABnzbd is a Python application, and Python uses a garbage collector for memory management.
This means that used memory will not be released at once; it takes some time to be released.
This is also tied into the OS, and it will happen faster if the OS needs more memory than is available.

Safihre (Contributor) commented Sep 2, 2017

Python does not have garbage collection in the normal sense that you describe; it does reference counting, and as such memory is freed as soon as objects become unused.
It does do a periodic check for cyclic references, but 'periodic' turns out to be about every second when observing it using the gc module. We also removed almost all cyclic references a while ago, and the gc traces confirm that nothing much is actually cleaned up there, because the reference counting has already taken care of it.
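
For anyone who wants to see this for themselves, a demonstration along these lines shows both behaviors (an illustrative snippet, not SABnzbd code):

```python
# Reference counting frees memory immediately; the cyclic collector only
# exists for reference cycles, and its passes can be made visible.
import gc
import sys

gc.set_debug(gc.DEBUG_STATS)  # log every cyclic-GC pass to stderr
print(gc.get_threshold())     # allocation thresholds that trigger a pass
print(gc.get_count())         # allocations since the last pass, per generation

buf = bytearray(50 * 1024 * 1024)  # ~50 MB working buffer
print(sys.getrefcount(buf))
del buf  # refcount drops to zero: freed immediately, no GC pass involved
```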

BenjV commented Sep 2, 2017

The Python developers themselves call it garbage collection, so who am I to change that name.
If you want to know how it functions, and why I said the behavior observed by "ignoremenow" is normal, you could read this article:

https://www.quora.com/How-does-garbage-collection-in-Python-work-What-are-the-pros-and-cons

By the way, I use SABnzbd on my NAS and it doesn't show abnormal memory usage at all.

desperado591 commented Sep 2, 2017

Safihre asked me to post some of my observations regarding the memory usage on my Synology Diskstations, so I did some test runs.

| Model | After start | Clicking around | Add nzb/download | After restart of SAB: add nzb/download/verify/unrar/delete |
| --- | --- | --- | --- | --- |
| DS1815+ (8GB RAM) | 43MB | 87MB | 171MB | 508MB |
| DS1513+ (4GB RAM) | 37MB | 52MB | 145MB | 230MB |
| DS212+ (512MB RAM) | 24MB | 35MB | 95MB | 166MB |

All DiskStations run on DSM 6.1.3 and SABnzbd 2.2.1

The two big Synos have stayed at that RAM level, idling, for an hour now. Interesting fact: I just started a new, quite big download on the 1815 and the RAM usage dropped from 508MB to 390MB, where it stays now that the download has finished.

I hope that helps.

BenjV commented Sep 2, 2017

The DS1815 and DS1513 both have Atom processors, so they basically have the same Python build.
The difference in memory use depends on the total amount of memory available.

The most likely explanation is that SABnzbd spawns processes that use memory (things like unrar and par2).
Linux keeps those processes in memory until the memory is needed for other processes.
So if the system has enough memory, it just keeps the processes in memory.
Normally you see them as processes that are "asleep", but depending on how they are spawned they may not show up as separate processes.
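
An easy way to check that theory would be to list SABnzbd's child processes and their resident memory after a job finishes. A sketch, assuming a procps-style ps (Synology's busybox ps may not support --ppid):

```python
# List child processes of a given PID together with their RSS, via ps.
import subprocess

def children_rss(parent_pid):
    out = subprocess.check_output(
        ["ps", "-o", "pid,rss,comm", "--ppid", str(parent_pid)])
    return out.decode()

print(children_rss(1234))  # replace 1234 with SABnzbd's actual PID
```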

desperado591 commented Sep 2, 2017

Well, that sounds logical to me, but then how come the usage dropped when I started another download (which was way bigger than the first one)?

BenjV commented Sep 2, 2017

Simple: SABnzbd needs more memory, so the OS drops some unused processes and gives SABnzbd what it needs.

Memory management in every OS is complicated and follows many rules.
To be honest, don't bother about memory usage unless the system starts to swap; only then are you in trouble.

Hubfront commented Sep 11, 2017

I reported this issue twice here over the last years:

sabnzbd/sabnzbd#177

sabnzbd/sabnzbd#439

I also tested the SABnzbd source with Python on the Windows platform, and there the memory was freed correctly by Python after the job finished. So it is not an issue with memory management inside SABnzbd.

Instead it seems that Python may act differently regarding memory management on some platforms, and whether you end up with correct behaviour is platform dependent. This memory problem is one reason for me to use alternative software on Synology, although I really like SABnzbd.

Hubfront commented Sep 11, 2017

The memory waste is definitely a Python issue and not a SABnzbd issue. Look at the past issue reports here, including mine mentioned above. There seems to be a problem with the memory management in the compiled version of Python for Synology. Tests on Windows show no problems, so it cannot be a programming issue/bug. Again, see for example:

sabnzbd/sabnzbd#439 (comment)

sabnzbd/sabnzbd#439 (comment)

sabnzbd/sabnzbd#439 (comment)

sabnzbd/sabnzbd#439 (comment)

BenjV commented Sep 11, 2017

You still don't get the picture of how memory management works on Linux systems.
As long as the system is not swapping, there is no problem at all.

The idea that unused memory should immediately be returned to the OS is a Windows notion.
On Linux that is only done when the OS asks for the memory; in all other cases the memory stays with the process even after it has stopped running.

Hubfront commented Sep 11, 2017

@BenjV

What are you talking about? It clearly works with NZBGet on Linux. There are functions in every language to free memory. Read the links above; it's a problem with Python on some platforms.

Safihre (Contributor) commented Sep 11, 2017

@Hubfront, while I initially didn't believe @BenjV, it seems he is right. Several answers on Stack Overflow suggest the same: https://stackoverflow.com/questions/21433976/free-not-freeing-memory-in-embedded-linux
It depends on how the OS implements memory management and on how Python calls it.
It seems that if the memory were needed, it would be made available again.
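
One experiment somebody with an affected Synology could try: glibc has a malloc_trim() extension that explicitly asks the allocator to hand free heap pages back to the kernel. If calling it makes the reported usage drop, that would confirm the memory is sitting unused in the allocator rather than leaking. This assumes the DSM build links against glibc; on another libc the symbol may simply not exist:

```python
# Ask glibc's allocator to release free heap pages back to the OS.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
try:
    released = libc.malloc_trim(0)  # returns 1 if memory was released
    print("released memory" if released else "nothing to release")
except AttributeError:
    print("malloc_trim not available on this libc")
```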

Hubfront commented Sep 11, 2017

Indeed, interesting links. So it would never get to the point where swapping occurs, because the system grabs freed memory back from the process's pool when it needs it. But when I use NZBGet on my Synology, the process seems to accumulate memory and then actually free it again.

desperado591 commented Sep 12, 2017

But how can the fluctuation in memory usage be explained that I sometimes saw with SABnzbd, described in my last post here?
I swear I have experienced the following situation: SAB is idling and using, let's say, 300MB of RAM because it handled a complete download an hour ago. But when I start a new download, even bigger than the old one, the RAM usage suddenly drops to, for example, 230MB instead of 300MB. I must admit that I have only seen that happen occasionally, as the memory usage normally increases, but sometimes it decreases, and my OS was far from swapping.

BenjV commented Sep 12, 2017

It is simple, and I explained it already.

SABnzbd does not know it appears to hold a lot of memory, because that memory sits in the OS's pool of reclaimable memory rather than being actively used by SABnzbd, but tools like "ps" report it as owned by SABnzbd.
When SABnzbd starts a download it needs memory, asks the OS for it, and the OS grabs it from that pool.
The OS reclaims the 300MB and gives 230MB to SABnzbd, because 230MB is what is asked for at that moment.

On top of this there is Python's garbage collection system, which confuses things further.
Furthermore, the difference in CPU architecture is a factor, because memory management is tightly coupled to the hardware for performance reasons, so you can see differences in behavior there as well.

Memory management is one of the most complicated subjects in a computer OS, and you should be careful not to jump to a too-simple conclusion like "Python is doing it wrong".

If Python had a memory leak, it would not stop at using lots of memory: every long-running Python application would eventually eat all the memory on the system and stop working.
That is clearly not the case, with millions of machines running Python applications all day long.

desperado591 commented Sep 12, 2017

So maybe the indication of used RAM in SAB is misleading, as it isn't actually using that RAM while idle but only showing the reserved RAM? And as soon as SAB wakes up and starts downloading again, it shows the actual used RAM, which can be lower than before? Is that the lesson I have to learn? :-)

BenjV commented Sep 12, 2017

No, it is not misleading; you just didn't interpret it correctly.
Most likely it asks the OS for this information, and then you get the same answer that tools like "ps" give you.
You have to remember that memory management is among the most crucial processes within an OS.
It has to be as efficient as possible and is at the same time very complicated.
If the Linux developers built in the kind of per-allocation reporting you would want, it would slow down the system quite severely.
Windows does do this, and you can see how much processing power a Windows machine needs to run smoothly.
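
For what it's worth, the number being looked at is almost certainly the resident set size the kernel tracks per process, the same figure "ps" and "top" show. On Linux it can be read straight from /proc (a sketch; not necessarily how SABnzbd reads it):

```python
# Read a process's resident set size from /proc (Linux only).
def rss_kb(pid="self"):
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # the kernel reports this in kB
    return None

print("RSS: %s kB" % rss_kb())
```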

Safihre (Contributor) commented Sep 12, 2017

So I will close this: unless users actually report swapping and slowdowns, there is no problem.

Safihre closed this on Sep 12, 2017
