
Memory Leaks in 4.6.x (both libtorrent v1 & v2) -- WebUI Problem #20675

Open
dyno10 opened this issue Apr 8, 2024 · 12 comments

Comments

@dyno10

dyno10 commented Apr 8, 2024

qBittorrent & operating system versions

qBittorrent: 4.6.4 x64
Operating system: Unraid 6.12.6
Qt: 6.6.1
libtorrent-rasterbar: 1.2.19.0
Docker Container: Official qBittorrent Docker

What is the problem?

I've been having some pretty severe memory leak issues with qbittorrent-nox when running inside a Docker container on Unraid. Essentially, the server's memory gradually fills up over time and will eventually crash the server due to lack of RAM. Even when this qBittorrent instance has zero actively seeding torrents, the memory usage just grows for no discernible reason. This container is used for long-term seeding; there is no download activity at all.

For instance, at present, the container has two actively seeding torrents (total speed 30KB/sec) and is using 46GiB of RAM. I've tried numerous things to resolve this issue, but nothing works. For now, I've limited the Docker container's RAM to 64GiB, which at least prevents the server from crashing. The kernel's OOM killer should (theoretically) reap the qBittorrent process once the container hits the 64GiB limit.

Troubleshooting Steps I've Tried -- no success with any of them:
-Different Docker containers (linuxserver.io, Binhex, Hotio and official qBittorrent docker)
-Different qBit versions (4.3.9, 4.4.5, 4.6.0, 4.6.2, 4.6.3 and 4.6.4)
-Different libtorrent versions (v1 and v2)
-New Server Hardware (new CPU, Ram, Motherboard, HBA)
-Various config settings: fastresume vs. SQLite resume data storage, disk cache on/off, low file pool size, small disk cache size, etc.

At this point, I'm suspecting it's either a bug in qBittorrent itself or I have some issue with some specific torrent(s). Unfortunately, I'm not sure how to go about diagnosing the issue further or potentially how to identify a troublesome torrent.


While I was typing this up, the qbittorrent-nox app (and container) crashed. This is the first time I've had a container crash, but perhaps that's just a difference in how different Docker containers handle OOM events. The container log says:

terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc


Please file a bug report at https://bug.qbittorrent.org and provide the following information:

qBittorrent version: v4.6.4

Caught signal: SIGABRT

 0# getStacktrace[abi:cxx11]() in /usr/bin/qbittorrent-nox
 1# 0x000055C472C1AEDA in /usr/bin/qbittorrent-nox
 2# 0x000014B410F87EA8 in /lib/ld-musl-x86_64.so.1

Steps to reproduce

  1. Load this specific set of 10,122 torrents.
  2. Wait until RAM entirely fills up.

Additional context

The torrents in this client were originally seeding from a different remote server (running Debian with kernel 5.10.162-1 and qBittorrent 4.3.9 with libtorrent 1.2.18.0). I downloaded all the data files, .torrent files, and fastresume data to this Unraid server and re-created that set of torrents on this local server. I have not done a full re-check of the local files.

This Unraid server currently has three active qBittorrent instances. Two are running just fine (even with libtorrent v2), but this instance is problematic for some strange reason. Instance 1 (below) refers to this problematic client instance.

qBit Instance 1 (4.6.4-lib1.2.19): 10,000 torrents -- seed size 47TB -- terrible memory leaks (problem child)
qBit Instance 2 (4.6.0-lib2.0.9): 1,200 torrents -- seed size 26TB -- no memory issues (1GB - 5GB memory usage typically)
qBit Instance 3 (4.6.4-lib2.0.10): 10,000 torrents -- seed size 63TB -- no memory issues (1GB - 5GB memory usage typically)

Server Specs:
-Supermicro H12SSL-NT
-EPYC 7B13 - 64 cores at 2.20 GHz
-256GB DDR4-3200 ECC RDIMM
-Memory Allocation:
--64GB to ZFS
--64GB to troublesome qBit container
--128GB otherwise unallocated

Since I'm currently using the official qBittorrent Docker image, I do have access to gdb for debugging purposes. But I'm not super familiar with it, so I'm at a loss as to how to troubleshoot further.

Log(s) & preferences file(s)

Client and Version Screenshot (this is a new container I created yesterday, hence the low stats):
qbit1

qBittorrent Client Advanced Settings:
qbit2

qBittorrent Crash Log:
qbit3_crash

qBittorrent Log File (torrent names and IP address redacted):
qbitlog_REDACTED.txt

qBittorrent Config File:
qBittorrent.conf.txt

I don't use watched folders.

@sledgehammer999
Member

std::bad_alloc is probably due to memory exhaustion.

If you pause all torrents and leave qbt running does memory increase like before?
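If it's easier to drive this from a script than from the UI, the snippet below is a rough sketch of pausing everything over the WebUI API. It assumes the v2 endpoints /api/v2/auth/login and /api/v2/torrents/pause (with hashes=all), which I believe the 4.6.x series exposes; the host, port, and credentials are placeholders to replace with your own.

```python
# Sketch: pause every torrent through the WebUI API so memory can be
# watched with no torrent activity. Host, port, and credentials below
# are placeholders, not values from this setup.
import requests

BASE = "http://192.168.0.93:8083"  # hypothetical WebUI address

with requests.Session() as s:
    # Log in; qBittorrent sets an auth cookie on the session when this succeeds.
    r = s.post(f"{BASE}/api/v2/auth/login",
               data={"username": "admin", "password": "adminadmin"})
    r.raise_for_status()
    # Ask the client to pause all torrents.
    s.post(f"{BASE}/api/v2/torrents/pause", data={"hashes": "all"}).raise_for_status()
```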

@dyno10
Author

dyno10 commented Apr 8, 2024

If you pause all torrents and leave qbt running does memory increase like before?

Yes, even with all torrents paused, memory still increases just like before.

@glassez
Member

glassez commented Apr 9, 2024

At this point, I'm suspecting it's either a bug in qBittorrent itself or I have some issue with some specific torrent(s)

In any case, it seems that this is caused by certain circumstances or torrents (since other similar qBittorrent instances work for you without problems).
To determine whether this is related to specific torrent(s), you could use bisection (repeatedly halving the set); a rough sketch of the splitting step follows after this list:

  1. Divide the set of torrents into two subsets and run qBittorrent alternately with each of them,
  2. Repeat step 1 for each problematic subset.
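In case scripting the split is easier than shuffling files by hand, here is a minimal sketch of step 1 in Python. It only divides a folder of exported .torrent files into two halves that can be loaded into two throwaway test instances; all paths are hypothetical placeholders.

```python
# Sketch of step 1: split a directory of .torrent files into two halves
# so each half can be loaded into a separate test instance.
from pathlib import Path
import shutil

SOURCE_DIR = Path("/data/torrent_files")    # placeholder: exported .torrent files
SUBSET_A = Path("/data/bisect/subset_a")    # placeholder output directories
SUBSET_B = Path("/data/bisect/subset_b")

def split_in_half(source: Path, dest_a: Path, dest_b: Path) -> None:
    torrents = sorted(source.glob("*.torrent"))
    half = len(torrents) // 2
    for dest, chunk in ((dest_a, torrents[:half]), (dest_b, torrents[half:])):
        dest.mkdir(parents=True, exist_ok=True)
        for t in chunk:
            shutil.copy2(t, dest / t.name)  # copy, so the original set stays intact

if __name__ == "__main__":
    split_in_half(SOURCE_DIR, SUBSET_A, SUBSET_B)
```

Repeat the same split on whichever half still leaks until the problematic torrent(s) are isolated.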

@dyno10
Author

dyno10 commented Apr 9, 2024

In any case, it seems that this is caused by certain circumstances or torrents (since other similar qBittorrent instances work for you without problems). To determine whether this is related to specific torrent(s), you could use bisection (repeatedly halving the set):

  1. Divide the set of torrents into two subsets and run qBittorrent alternately with each of them,
  2. Repeat step 1 for each problematic subset.

I will give that a shot and report back. I assume there is no way to determine which torrent(s) could be causing issues if I were to inspect the running processes with gdb?

@glassez
Member

glassez commented Apr 9, 2024

I assume there is no way to determine which torrent(s) could be causing issues if I were to inspect the running processes with gdb?

I can't tell you anything specific. Perhaps if it were possible to get statistics on memory allocation calls, that could shed light on what is happening.

@dyno10
Author

dyno10 commented Apr 12, 2024

Quick update:

I paused all torrents, then resumed them a few minutes later. After that, RAM usage was 14GB, but it remained constant for the next 2-3 days (no crashes). After a container restart, we're back to memory leaks; the pause/resume trick is no longer working. I'll separate these torrents into different containers over the weekend and see if I can narrow down any problematic torrents.

@WillGunn

WillGunn commented Apr 14, 2024

I have a very similar issue with a very similar origin: I have a set of torrents that was originally on a Windows machine and was migrated to a Linux Docker container by directly changing the save paths stored in the fastresume files. 15GB of usage after 4 hours isn't uncommon, and it keeps growing until it hits the 20GB limit I set for the container. So my best guess is that something is wrong with the fastresume files, causing the memory leak.

@dyno10
Author

dyno10 commented Apr 30, 2024

I can't tell you anything specific. Perhaps if it were possible to get statistics on memory allocation calls, that could shed light on what is happening.

I have an update to this issue.

I created a new qBittorrent container (4.6.3 with libtorrent v2) and migrated all .torrent files from the leaky container. I did not migrate the fastresume files; instead, I did a full recheck of all 10k torrents (43TB or so).

As of this morning, the new container (4.6.3 with libtorrent v2) is exhibiting the same memory leak issues as the previous container, despite the full re-check and creation of new fastresume files. I have just re-made the new container, and it is now running 4.6.4 with libtorrent 1.2.19.0.

I will monitor and report back in a day or two. If this doesn't fix the leak, I will try some A/B testing to narrow down a specific torrent or torrents that could be causing the issue. However, throughout this re-check process, I've been seeing memory leaks in the new container (on libtorrent v2, anyway).

@dyno10
Author

dyno10 commented May 1, 2024

After 24 hours, the new container running qBittorrent 4.6.4 with libtorrent 1.2.19.0 has memory usage hovering around 1GB. So it's working well now.

I still don't know why the imported fastresume files were causing the memory leaks, but it's pretty clear to me that they were the source of my issues.

@WillGunn

WillGunn commented May 1, 2024 via email

@dyno10
Author

dyno10 commented May 24, 2024

I can't tell you anything specific. Perhaps if it were possible to get statistics on memory allocation calls, that could shed light on what is happening.

So, I have a (final?) update on this issue. After doing a full force recheck for all torrents, the memory leak issue started occurring again after a few days.

The problem had nothing to do with the specific torrents in the client, the qBittorrent version, the libtorrent version, etc. It's an issue with the WebUI.

For whatever reason, when you have multiple qBittorrent WebUI browser tabs from the same "domain" open simultaneously in the same browser, it causes server-side and client-side memory leaks. Closing the WebUI browser tab for the problematic qBittorrent instance (qbit3 below) immediately resolved the memory leaks. I'm not sure why this happens, just that it does and that it's very easy to reproduce. Another user on the Unraid.net forums mentioned this same issue.

The WebUI servers all run on the same host, reachable at 192.168.0.93 and 192.168.100.93:
qbit1: 192.168.100.93:8081 -- qBit 4.6.0 with libtorrent v2 <-- client-side memory leaks only
qbit2: 192.168.0.93:8082 -- qBit 4.6.4 with libtorrent v2
qbit3: 192.168.0.93:8083 -- qBit 4.6.4 with libtorrent v1 <-- server-side memory leaks only

Only the qbit3 instance had server-side memory leaks, and only qbit1 had client-side memory leaks. The qbit1 WebUI browser tab on my client machine uses 5GB of RAM, whereas a normal WebUI tab should use perhaps 50MB-100MB.

One possible explanation: I access qbit1 WebUI via a different IP address (different subnet). My server is reachable via two subnets, so having qbit1 and qbit2 accessed from different subnets has prevented this issue from occurring to those two instances.

Now that I've closed the WebUI browser tab for qbit3, the server-side RAM usage for that qBittorrent instance is holding steady at 1.1GB. Previously, it would climb to 64GB before being OOM-killed.
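For anyone who wants to verify this on their own setup, below is the kind of watcher I'd use to track the server-side number over time. It's just a sketch that reads VmRSS from /proc for the qbittorrent-nox process; the PID and interval are placeholders (look the PID up yourself, e.g. with pidof inside the container).

```python
# Sketch: log the resident memory (VmRSS) of a running qbittorrent-nox
# process once a minute, to see whether it climbs while WebUI tabs are
# open vs. closed. PID and interval are placeholders.
import time
from datetime import datetime
from pathlib import Path

PID = 1234              # hypothetical: find it with e.g. `pidof qbittorrent-nox`
INTERVAL_SECONDS = 60

def vmrss_kib(pid: int) -> int:
    """Return VmRSS in KiB as reported by /proc/<pid>/status."""
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    raise RuntimeError("VmRSS not found; the process may have exited")

if __name__ == "__main__":
    while True:
        rss_mib = vmrss_kib(PID) / 1024
        print(f"{datetime.now():%Y-%m-%d %H:%M:%S}  RSS = {rss_mib:.1f} MiB", flush=True)
        time.sleep(INTERVAL_SECONDS)
```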

This seems to be the same issue: #20873

@dyno10 dyno10 changed the title Memory Leaks in 4.6.x (both libtorrent v1 & v2) Memory Leaks in 4.6.x (both libtorrent v1 & v2) -- WebUI Problem May 24, 2024
@dyseg

dyseg commented Oct 13, 2024

#20873 (comment)
