
Extreme memory usage instantly when starting rtorrent container [weird issue] #53

Open
NoLooseEnds opened this issue Nov 21, 2022 · 15 comments


@NoLooseEnds

NoLooseEnds commented Nov 21, 2022

I've been using your rtorrent docker image for ages now, without issue. I've recently moved houses (including my server), so I spun everything down, before spinning it up again in the new house. Everything works as expected, EXCEPT for rtorrent.

Starting rtorrent causes it to use an extreme amount of memory (even if set up from scratch using a minimal docker-compose file and your rtorrent.rc file).

See how much the memory increases when starting the container:

[Video attachment: Screen.Recording.2022-11-21.at.10.41.03.mov]

It exits with code 137, which means the container was killed for trying to use more memory than it's allowed.

The only console log output I get is:

[52874.426076] Out of memory: Killed process 550961 (rtorrent) total-vm:33559656kB, anon-rss:27541803kB, file-rss:4kB, shmem-rss:0kB, UID:1000 pgtables:53956kB oom_score_adj:0
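Exit code 137 is SIGKILL (128 + 9), in this case delivered by the kernel OOM killer. One way to confirm that Docker itself recorded the kill as an OOM event is `docker inspect`; `.State.OOMKilled` and `.State.ExitCode` are standard inspect fields, and `rtorrent-test` is the container name from the compose file in this issue:

```shell
# Ask Docker whether the container was OOM-killed and with what exit code.
# "rtorrent-test" is the container_name from the compose file in this issue.
docker inspect --format '{{.State.OOMKilled}} exit={{.State.ExitCode}}' rtorrent-test \
  2>/dev/null || echo "docker not available or container not found"
```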

docker-compose.yml:

version: '3.7'

services:
  rtorrent:
    container_name: rtorrent-test
    image: jesec/rtorrent
    command: -o system.daemon.set=true
    environment:
      HOME: /config
    volumes:
      - ./conf:/config

I can't wrap my head around what can cause this. Any help is much appreciated!

Thanks!

@Elegant996

Are you using a new runtime on the backend, like containerd, or a different version of Docker? I needed to modify the runtime config to resolve it.

@NoLooseEnds
Author

NoLooseEnds commented Nov 21, 2022

I'm running the latest docker that VMware PhotonOS repository provides:

╰─$ docker version
Client: Docker Engine - Community
 Version:           20.10.14
 API version:       1.41
 Go version:        go1.19.3
 Git commit:        a224086
 Built:             Mon Nov 14 19:20:50 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.14
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.19.3
  Git commit:       87a90dc
  Built:            Mon Nov 14 19:22:01 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.6
  GitCommit:        10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc:
  Version:          1.1.4
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
╰─$ containerd --version
containerd github.com/containerd/containerd 1.6.6 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae

I did update to the latest packages before I spun down the system, but I didn't think that would affect the Docker images?

How did you go about changing the runtime config?

(I did have one other mysql image spike with memory, and I resolved that by using the latest mysql image)

@Elegant996

If you're using containerd, check /etc/containerd/config.toml. If I recall correctly, the issue was the oom_score not being defined. To keep the runtime itself from being evicted, it should be oom_score = -999. Docker, not the runtime, should be responsible for evicting, so this should prevent that (at least, this was the issue for me).
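For reference, a minimal sketch of the setting being described (the exact layout of `config.toml` varies between containerd versions; `oom_score` sits at the top level of the file):

```toml
# /etc/containerd/config.toml
# Make containerd itself very unlikely to be chosen by the kernel OOM killer,
# leaving eviction decisions to Docker instead of the runtime.
oom_score = -999
```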

@NoLooseEnds
Author

Thanks. I tried changing it to oom_score = -999 and then issued systemctl restart containerd to reload the config, but it did not change the behaviour.

@kannibalox
Contributor

Changing OOM killer settings won't help with the root problem of the process using too much memory.

One thing I noticed is that the tag isn't pinned in the compose file, so you may want to double check that you're on the latest release tag (0.9.8-r16). It also might be worth giving the master tag a shot, since there have been some changes since that release.
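For example, pinning the release tag explicitly in the compose file instead of relying on the implicit `latest` (tag name taken from the release mentioned above):

```yaml
services:
  rtorrent:
    image: jesec/rtorrent:0.9.8-r16   # pinned release tag instead of implicit :latest
```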

Can you post your rtorrent.rc, and the amount of space your session directory is using? An estimate of the number of active torrents would be helpful as well.

Does the behavior still occur if you temporarily move all files out of the session directory? If so, you can try moving them back into the directory in batches (e.g. all the files starting with 0 first, then 1, 2, etc) to see if it's a problem with a specific file.
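The batching step above can be sketched as a small shell loop. The `SESSION` and `BACKUP` paths are hypothetical placeholders; point them at your actual session directory and wherever you moved the files (session files are named by torrent infohash, so the leading characters are hex digits, though the case may vary):

```shell
#!/bin/sh
# Move session files back in batches by leading character, so you can test
# rtorrent's memory usage between batches and isolate a problem file.
# SESSION and BACKUP are hypothetical placeholder paths - adjust them.
SESSION="${SESSION:-./conf/.session}"
BACKUP="${BACKUP:-./session-backup}"
mkdir -p "$SESSION" "$BACKUP"
for prefix in 0 1 2 3 4 5 6 7 8 9 A B C D E F; do
  # Move everything starting with this hex digit back into the session dir.
  find "$BACKUP" -maxdepth 1 -name "${prefix}*" -exec mv {} "$SESSION/" \;
  echo "moved batch ${prefix} - restart rtorrent and watch memory before continuing"
done
```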

@Elegant996

Master has a lot of issues. It doesn't save the session for newly added torrents, nor use the specified directory when adding. I filed issues for both but no dice.

@NoLooseEnds
Author

NoLooseEnds commented Nov 22, 2022

@kannibalox That's the thing: I get the same results regardless. The one in the video is a clean setup, with no torrents, no sessions, the default rtorrent.rc config, and a minimal docker-compose.

I was thinking that could be the issue earlier on, so I moved away from my custom setup (with quite a few customizations and sessions) to a minimal/fresh one.

I'll see if I can pull another image.

@NoLooseEnds
Author

Ok, so if I pull jesec/rtorrent:master-amd64 it works as it should (at least with the minimal config). I'll give that a go on my main setup.

Still no idea what's causing the issue. Afaik, not stating any tag pulls the `latest` tag by default. That was updated 7 months ago; `master-amd64` was updated 4 months ago.

@NoLooseEnds
Author

Trying it in my real setup doesn't work that well. I have a few scripts, so I'm building a custom image based on @jesec's image.

Dockerfile:

# Base image providing the rtorrent binary
#FROM jesec/rtorrent:0.9.8-r15 as base
FROM jesec/rtorrent:master-amd64 as base

FROM alpine

ENV HOME=/home/download

# Extra tools needed by the custom scripts
RUN apk add --update-cache \
    bash \
    curl \
    tzdata \
  && rm -rf /var/cache/apk/*

# Overlay the entire rtorrent image on top of the alpine layer
COPY --from=base / /

ENTRYPOINT ["rtorrent"]

For some reason it starts two rtorrent instances in the same container. If I manually kill one of the instances inside the container, it seems to work as normal, with the exception of a script that doesn't trigger (even though I can trigger it manually).

Wonder what the breaking change is.

@Bide-UK

Bide-UK commented Dec 10, 2022

I can confirm I'm also getting the same issue on Linux Fedora 6.0.10-200.fc36.x86_64 (Mac Pro 2013, 64 GB RAM). rtorrent will exit if I attempt to limit the amount of RAM using docker-compose:

  rtorrent:
    image: jesec/rtorrent
    user: 1000:1001
    restart: unless-stopped
    command: -o network.port_range.set=6881-6881,system.daemon.set=true
    deploy:
      resources:
        limits:
          cpus: '0.4'
          memory: 2000M

rtorrent_1 exited with code 137

Additionally, setting memory limits in .rtorrent.rc also does nothing.

cat ~/Config/.rtorrent.rc

## Import default configurations
import = /etc/rtorrent/rtorrent.rc

## Listening port
network.port_range.set=6881-6881


## UserConfig
#############################################################################
# A minimal rTorrent configuration that provides the basic features
# Memory usage limit (default: 2/5 of RAM available)
pieces.memory.max.set = 1800M
dht.mode.set = disable
protocol.pex.set = no
network.http.max_open.set = 50
network.max_open_files.set = 600
network.max_open_sockets.set = 300

@Bide-UK

Bide-UK commented Dec 10, 2022

Pulling jesec/rtorrent:master-amd64 seemed to fix the issue for me.

@kannibalox
Contributor

One of the changes between the latest and master-amd64 is that the build started linking against mimalloc, which is a pretty big change for memory management (for the better).
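If anyone wants to check which allocator a given image's binary uses, one quick (hedged) check is `ldd`. The path `/usr/local/bin/rtorrent` is an assumption about the image layout, and a statically linked binary will also hit the fallback branch:

```shell
# Check whether the rtorrent binary is dynamically linked against mimalloc.
# The path /usr/local/bin/rtorrent is an assumption about the image layout.
ldd /usr/local/bin/rtorrent 2>/dev/null | grep -i mimalloc \
  || echo "mimalloc not found (binary missing, statically linked, or not linked)"
```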

For some reason it starts two rtorrent instances in the same container. If I manually kill one of the instances inside the container, it seems to work as normal, with the exception of a script that doesn't trigger (even though I can trigger it manually).

Wonder what the breaking change is.

Is the memory issue at least fixed? I tried that same Dockerfile (with no config) and only saw one process.

@NoLooseEnds
Author

NoLooseEnds commented Dec 10, 2022

Did a new test now, with default everything, and :latest still has a memory issue. The :master-amd64 sort of works, but gives me issues with CPU usage and with running multiple rtorrent -o system.daemon.set=true processes.
[screenshot]

I'm running PhotonOS, so I'm hoping an update there will solve it, especially if almost no one is having this issue. I've jumped ship to qbittorrent for now, but I would like to get back to rtorrent at some point.

@kannibalox
Contributor

kannibalox commented Dec 10, 2022

issues with CPU and running multiple rtorrent -o system.daemon.set=true processes.

Ah, that's htop being extra helpful and showing you the process's threads (you can toggle seeing them with H). Those are normal and required for rTorrent to function. I couldn't say what's up with the CPU usage without more information.
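A quick way to see the distinction outside of htop (nothing here is rtorrent-specific; `nlwp` is the per-process thread count that procps `ps` reports):

```shell
# List PID, thread count (NLWP), and command name for every process,
# keeping the header row plus any rtorrent rows.
# One process with NLWP > 1 means threads, not multiple rtorrent instances.
ps -eo pid,nlwp,comm | awk 'NR==1 || /rtorrent/'
```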

Only jesec can update the latest tag, so we'll have to wait for him to weigh in on that.

@NoLooseEnds
Author

NoLooseEnds commented Dec 10, 2022

Aha, ok. My bad. Not sure about the CPU usage issue either. Flood is unresponsive; after killing the process (one of rtorrent's threads), it seemed to work for a while, until it happened again and Flood became unresponsive again. It was just using 100% of one of the system's cores.

I don't really have any other idea; the only thing I changed on my system was updating PhotonOS, so something there caused the memory issue (and I originally used :0.9.8-r15).

4 participants