Possible Memory Leak #963
Comments
This may indeed be the issue that has been grinding my server to a halt every two days.
Thanks for opening an issue; I haven't encountered this with the Docker container or on Windows myself. Does Task Manager show a specific sub-process that hasn't closed down correctly, such as an unclosed FFmpeg process?
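One quick way to confirm a subprocess has actually shut down (beyond eyeballing Task Manager) is to poll it from the parent. This is a generic Python sketch of the idea, not Tdarr's actual code; the spawned command is a stand-in for an FFmpeg transcode:

```python
import subprocess
import sys

# Spawn a short-lived worker subprocess (stand-in for an FFmpeg run;
# the command here is illustrative, not Tdarr's real invocation).
proc = subprocess.Popen([sys.executable, "-c", "pass"])

# Block until it exits (or raise TimeoutExpired if it hangs).
proc.wait(timeout=30)

# poll() returns None while the process is still alive; a leaked,
# unclosed subprocess would keep returning None here indefinitely.
assert proc.poll() is not None, "subprocess did not shut down"
print("subprocess exited with code", proc.returncode)
```

A process that keeps returning `None` from `poll()` long after its work should be done is the kind of orphaned child worth hunting for in Task Manager.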
I know this one may be tough or impossible to track down. I didn't see anything that stood out as far as FFmpeg processes went. I may have another batch of small files to run through, but the Process Memory has remained reasonable for a while now.
OK, thanks. Memory issues are always a pain, but let me know if you re-encounter the issue and can have another look at the process and subprocess details.
Right now my server process shown in the Tdarr UI is at 2100/2500MB. It fluctuates but never goes down significantly, just climbs slowly. If there is anything we can provide to help with this, it sounds like @Brendonwbrown and I may be able to do so.
I am processing all my music files, so as before with my small security videos, it's a lot of small files. For me that's been the common factor.
@Brendonwbrown yes, it's normal to have multiple runtime processes. Each time a subprocess is created, a new runtime process appears. A node with 1 worker running will have at least 3 runtime processes. Likewise, the server runs a lot of multi-threaded tasks (such as the file scanner), each creating its own subprocess, but they should close down once done.
@kramttocs are you adding more files to Tdarr? That will cause more RAM usage over time.
Kind of, but I don't think that's the root of what's happening here. I understand that the more it has to watch, the higher the RAM usage, but I would expect that usage to be either 1) consistent or 2) temporary, without the need for a restart.
The problem with attributing it to either of those: if I restart, the memory usage stays pretty low and reasonable, so I don't think it's 1; and with 2, the usage never goes down once it gets into this situation. I know I keep mentioning a lot of small files, but that's really when I've seen it. The two situations are:
I've made some changes to this in 2.20.01 (just released), so hopefully that fixes/improves things for you.
I'm unsure if this is related, but I'm having similar issues; I've had a look through existing and historical issues. For the past month or so, running the latest version of Tdarr in a Linux Container (LXC) on a Proxmox node, with nothing else running in the container, causes the memory usage to increase over time until it hits the allocation and the container becomes unresponsive. I've increased the memory allocation to compensate, but it reaches 2GB of memory usage within the space of 12 hours.
You can see this in the graph below covering the past month: the memory usage increases exponentially until it caps out, CPU usage hits 100%, and it then refuses all requests. If I restart the container, I don't get long to interact with the container or Tdarr before it locks up, so I keep the container shut down to stop it exhausting all of the system resources. I've attached the logs in case anyone can give me any pointers, as I'm enjoying using Tdarr! I'm running version 2.22.01 with the Node running in the same container as the server.
@HaveAGitGat I think your changes are noticeable. At least on my end it looks like a huge improvement in memory usage. The graph shows the last 6 months, with each new color marking a restart.
@Kapujino that's interesting, thanks for sharing. I've recently been working on migrating Tdarr to sqlite3 and redoing a whole lot of things in order to reduce memory usage (94 file changes so far, so it will be quite a big update under the hood).
Tdarr Server core process memory usage with 1 million files queued and at idle:
Tdarr Server core process memory usage with 1 million files queued and with 10 workers running:
Will post here when I have a decent container to try; should be soon.
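The idea behind a sqlite3 migration like this can be sketched as follows: instead of holding the whole file queue in an in-memory array, the queue lives on disk and only small batches are loaded at a time, so RAM stays roughly flat as the library grows. The table and column names below are made up for illustration; this is not Tdarr's actual schema:

```python
import sqlite3

# Keep the file queue on disk rather than in a Python list.
conn = sqlite3.connect("queue_demo.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS queue (path TEXT PRIMARY KEY, status TEXT)"
)

# Queue many files via a generator, never materializing them all in RAM.
conn.executemany(
    "INSERT OR IGNORE INTO queue VALUES (?, ?)",
    ((f"/media/file{i}.mkv", "pending") for i in range(100_000)),
)
conn.commit()

# Pull work in small batches; only the current batch lives in memory.
batch = conn.execute(
    "SELECT path FROM queue WHERE status = 'pending' LIMIT 10"
).fetchall()
print(len(batch))  # 10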
These 2 images have passed the tests and seem to be working well:
It should reduce RAM usage by over 95% for large libraries. On startup, a database backup will be created; check Tools->Backups for how to restore if you need to go back to the previous version. DB migration status is shown in the logs and the UI.
This update is now in pre-release. You can try it using these steps across all platforms:
2.24.01 released. Feel free to add info here if need be. |
All Windows-based.
Server and node running as a service.
I am transcoding a lot of small files, so it churns through them pretty quickly individually.
I am seeing the Process Mem X/X at the top (and validated it with Task Manager) steadily climb. It got to 3900MB today, so I paused the Node and restarted the Server. It's been running for a couple of hours and is at 440/507MB currently.
It does drop at times, but in the long run it rises.
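To track the climb numerically rather than eyeballing the UI or Task Manager, a process can report its own peak memory use. This is a minimal Unix-side sketch using Python's standard library (the `resource` module is Unix-only; on Windows, Task Manager or a third-party tool like psutil gives the equivalent number):

```python
import resource  # Unix-only; not available on Windows

# Peak resident set size of this process so far: one data point for
# logging memory growth over time. Note the unit differs by platform:
# kilobytes on Linux, bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak}")
```

Logging this value once a minute and plotting it makes a slow leak (steady climb, never returning to baseline) easy to distinguish from normal fluctuation.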