Cannot set memory limit: memory.memsw.limit_in_bytes: no such file or directory #17151

Closed
delcypher opened this issue Oct 18, 2015 · 2 comments

Comments

@delcypher

I'm having problems setting memory limits on my containers.

If I specify both --memory and --memory-swap then I receive an error message and the container is left behind.

$ docker run --memory=500M --memory-swap=500M -ti ubuntu:latest
Error response from daemon: Cannot start container 97e39572858eaf04f48bac465ebe4a121503700b3699ef8752d9175141bd5611: [8] System error: open /sys/fs/cgroup/memory/init.scope/system.slice/docker-97e39572858eaf04f48bac465ebe4a121503700b3699ef8752d9175141bd5611.scope/memory.memsw.limit_in_bytes: no such file or directory
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
97e39572858e        ubuntu:latest       "/bin/bash"         8 seconds ago       Created                                 stupefied_sammet
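For anyone hitting a similar error, a quick first check is whether the kernel is exposing the swap-accounting (`memory.memsw.*`) files at all; if not, `--memory-swap` cannot work anywhere in the hierarchy. This is a minimal sketch, assuming a cgroup v1 memory controller mounted at `/sys/fs/cgroup/memory` (the path may differ on other setups):

```shell
#!/bin/sh
# Sketch, not a definitive diagnostic: check whether a cgroup directory
# exposes the swap-accounting limit file that docker needs for --memory-swap.

# has_memsw DIR -> exit 0 if DIR contains memory.memsw.limit_in_bytes
has_memsw() {
    [ -f "$1/memory.memsw.limit_in_bytes" ]
}

# Example: test the root of the (assumed) cgroup v1 memory mount.
if has_memsw /sys/fs/cgroup/memory; then
    echo "swap accounting enabled at the cgroup root"
else
    echo "no memsw files: boot with swapaccount=1 (and cgroup_enable=memory)"
fi
```

In this report the root-level file does exist (see the listing below), so the problem is not swap accounting itself but where the scope directory was created.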

I took a look. Inside /sys/fs/cgroup/memory/init.scope/ there is no system.slice folder.

$ ls /sys/fs/cgroup/memory/init.scope/
cgroup.clone_children  memory.kmem.failcnt             memory.kmem.tcp.limit_in_bytes      memory.max_usage_in_bytes        memory.move_charge_at_immigrate  memory.stat            tasks
cgroup.event_control   memory.kmem.limit_in_bytes      memory.kmem.tcp.max_usage_in_bytes  memory.memsw.failcnt             memory.numa_stat                 memory.swappiness
cgroup.procs           memory.kmem.max_usage_in_bytes  memory.kmem.tcp.usage_in_bytes      memory.memsw.limit_in_bytes      memory.oom_control               memory.usage_in_bytes
memory.failcnt         memory.kmem.slabinfo            memory.kmem.usage_in_bytes          memory.memsw.max_usage_in_bytes  memory.pressure_level            memory.use_hierarchy
memory.force_empty     memory.kmem.tcp.failcnt         memory.limit_in_bytes               memory.memsw.usage_in_bytes      memory.soft_limit_in_bytes       notify_on_release

There is, however, a system.slice folder directly under /sys/fs/cgroup/memory/.
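One way to see where in the hierarchy the memsw file actually lives is to walk upward from the container's scope directory until the file is found. A sketch (the scope path is whatever appears in the error message and differs per container):

```shell
#!/bin/sh
# Sketch: walk up from a cgroup directory to the nearest ancestor that
# exposes memory.memsw.limit_in_bytes. Here it would show the file exists
# in /sys/fs/cgroup/memory but not in the nested init.scope/system.slice/...
# path that the newer systemd creates.

# find_memsw DIR -> print the nearest ancestor containing the file, or fail
find_memsw() {
    d=$1
    while [ -n "$d" ] && [ "$d" != "/" ] && [ "$d" != "." ]; do
        if [ -f "$d/memory.memsw.limit_in_bytes" ]; then
            printf '%s\n' "$d"
            return 0
        fi
        d=$(dirname "$d")
    done
    return 1
}

# Example usage (substitute your own container's .scope directory):
#   find_memsw /sys/fs/cgroup/memory/init.scope/system.slice/docker-<id>.scope
```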

If I try

$ docker run --memory=500M -ti ubuntu:latest

then the container starts, but I see this warning in Docker's logs:

Oct 18 15:55:48 dan-sputnik docker[7070]: time="2015-10-18T15:55:48.989117494+01:00" level=warning msg="Your kernel does not support OOM notifications: open /sys/fs/cgroup/memory/init.scope/system.slice/docker-cd0d56a244162b406f9cad46c81950643c9b8143ddb6e3410901539bab92161c.scope/memory.oom_control: no such file or directory"

Setting these memory limits worked about 4 months ago, when I was running older versions of Docker, the Linux kernel, and systemd. I'm guessing an upgrade to one of these broke something.

I'm running Arch Linux with systemd 227-1:

$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-linux  root=/dev/mapper/volgroup-root resume=/dev/mapper/volgroup-swap cgroup_enable=memory swapaccount=1
$ uname -a
Linux dan-sputnik 4.2.3-1-ARCH #1 SMP PREEMPT Sat Oct 3 18:52:50 CEST 2015 x86_64 GNU/Linux
$ docker info
Containers: 1
Images: 209
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.3-1-ARCH
Operating System: Arch Linux (containerized)
CPUs: 4
Total Memory: 7.709 GiB
Name: dan-sputnik
ID: NXIH:L2RS:YWBH:2QG5:GAX2:YCYI:BHCI:4ECF:DQI3:JIPY:UXRN:TCWH
Username: delcypher
Registry: https://index.docker.io/v1/
$ docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.5.1
 Git commit:   f4bf5c7-dirty
 Built:        Wed Oct 14 11:17:02 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.5.1
 Git commit:   f4bf5c7-dirty
 Built:        Wed Oct 14 11:17:02 UTC 2015
 OS/Arch:      linux/amd64
@runcom
Member

runcom commented Oct 18, 2015

Dup of #16256, thanks for the report, it's probably the new systemd that's causing this :(

@runcom runcom closed this as completed Oct 18, 2015
@delcypher
Author

@runcom Thanks for the fast reply. Downgrading to systemd 225 seems to have fixed this for now.
