
lxc-start cannot allocate memory when free memory is available but no swap is available #2495

Closed
lazzarello opened this issue Oct 31, 2013 · 6 comments

Comments

@lazzarello

At the moment my system has 6 GB of RAM available (free plus cached) for applications. There is a running container launched from Docker consuming a fair amount of resources, about 7 GB of RAM resident; it's a GNU R process loading a bunch of data into memory. There is plenty of memory to start new containers, but when I try, I get the following error:

docker run -i -t ubuntu /bin/bash

2013/10/31 17:54:15 Error: Error starting container 9ce27eb4f188: fork/exec /usr/bin/lxc-start: cannot allocate memory

From this point forward, no process on this host can start containers.

Notably, this host OS has no swap allocated. I assumed swap wouldn't be necessary since there is so much RAM available. I allocated 15 GB of swap to test whether this affects LXC and thus Docker. The memory allocation error disappeared.
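For reference, a swap file of the kind described here can be created with standard tools; the 15 GB size and the /swapfile path below are just illustrative:

sudo fallocate -l 15G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=15360
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# To keep it across reboots, add "/swapfile none swap sw 0 0" to /etc/fstab.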

Is there a connection between swap and container initialization?

@jpetazzo
Contributor

What are the values of /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio?

@lazzarello
Author

ubuntu@ip-10-245-18-178:~$ cat /proc/sys/vm/overcommit_memory
0
ubuntu@ip-10-245-18-178:~$ cat /proc/sys/vm/overcommit_ratio
50

@jpetazzo
Contributor

Okay, that's the reason.
Even though the R process isn't using that memory, it has effectively "reserved" it, and the system reasons: "hey, if I start new processes, and suddenly the R process starts actually using the memory it reserved, I'll have to throw someone out!" It's exactly like overbooking.

This is explained in the kernel documentation (look for the section on the overcommit_memory parameter).

You can address the issue by setting overcommit_memory to 1, or by increasing overcommit_ratio; but then, if memory actually runs out, the kernel's OOM killer might start killing processes.
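A minimal sketch of the two knobs mentioned here, using sysctl as root (the values are illustrative):

sysctl vm.overcommit_memory=1   # 0 = heuristic (default), 1 = always allow, 2 = strict commit accounting
sysctl vm.overcommit_ratio=80   # percent of RAM counted toward the commit limit; only consulted in mode 2
# To persist across reboots, put the same keys in /etc/sysctl.conf and run: sysctl -p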

You can also allocate swap space; in that case, processes won't be killed, but if the memory usage grows too much, the system might become unresponsive.

Btw, this is not specific to Docker; it happens on any system where processes allocate (but don't use) large amounts of memory compared to the available physical memory plus swap space.
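If you want to see how much memory the kernel has promised out versus its limit, the commit accounting counters are in /proc/meminfo (note that CommitLimit is only strictly enforced when overcommit_memory is 2):

grep -E 'MemTotal|SwapTotal|CommitLimit|Committed_AS' /proc/meminfo
# Committed_AS is the total address space already promised to processes;
# with overcommit_memory=2, allocations that would push it past CommitLimit fail.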

@lazzarello
Author

Awesome, thanks for clearing this up. I figured something in the kernel was preventing containers from starting, but I didn't have the reference. Thanks again.

@xaionaro

I have a similar problem, but the "overcommit" settings don't help:

# sysctl vm.overcommit_ratio=100
vm.overcommit_ratio = 100
# sysctl vm.overcommit_memory=1
vm.overcommit_memory = 1
# lxc-start -n container
lxc-start: failed to clone(0x6c020000): Cannot allocate memory
lxc-start: Cannot allocate memory - failed to fork into a new namespace
lxc-start: failed to spawn 'container'
lxc-start: No such file or directory - failed to remove cgroup '/sys/fs/cgroup//lxc/container'

@xaionaro

Never mind, sorry.

The problem was this line in the config file:

lxc.pts                                 = 1024
