Memory usage in container is 2 orders of magnitude higher for slapd. #8231

Closed
xcJeroen opened this Issue Sep 25, 2014 · 28 comments

Comments


When running slapd in a Docker container, memory usage increases by roughly 700M (give or take, as seen in htop). When the same installation runs directly on a VM it only consumes around 2M. Is there a way to find out what is causing this huge difference in memory consumption?

Even if the memory reported is not entirely accurate, I cannot run two containers with slapd on a VM that has 1G of RAM. The second container just stops and errors out because it cannot allocate the required memory.

$ sudo docker run -d ldap
a41c2fd...
$ sudo docker run -it --rm ldap slapd -d 4; echo $?
...
54243799 ch_calloc of 1048576 elems of 704 bytes failed
slapd: ch_malloc.c:107: ch_calloc: Assertion `0' failed.
255

Running just the container (sudo docker run -it --rm ldap /bin/bash) uses barely any memory, as expected. Someone else seemed to have a similar issue (#6861), but I'm not sure.

To reproduce, here are a Dockerfile and install script for Arch Linux: https://gist.github.com/xcJeroen/3e2b2f2eebdfcb85cef0

In case it's important, the host OS for the Docker container is Arch Linux as well.

$ uname -a
Linux sarchy 3.16.3-1-ARCH #1 SMP PREEMPT Wed Sep 17 21:54:13 CEST 2014 x86_64 GNU/Linux

I also tried to run a couple of other OpenLDAP containers that I could find on the Docker registry, and they all seem to have the same problem.
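
A rough way to compare the two numbers directly (just a sketch, assuming the ldap image built from the gist above; container processes show up in ps on the host, and RSS is reported in KiB):

# start a container from the image in the gist
sudo docker run -d --name ldap-test ldap
# on the Docker host
ps -e -orss=,args= | grep slapd
# on a VM running slapd directly, for comparison
ps -e -orss=,args= | grep slapd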

SvenDowideit (Contributor) commented Sep 26, 2014

Can you please add the output of docker info and docker version?


xcJeroen commented Sep 26, 2014

I tried this first on version 1.1.2 and afterwards upgraded to 1.2.0, but the results were the same.

$ sudo docker info
Containers: 5
Images: 72
Storage Driver: devicemapper
 Pool Name: docker-8:1-262181-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 2646.2 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 3.8 Mb
 Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.16.1-1-ARCH
WARNING: No swap limit support
$ sudo docker version
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.3
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.3
Git commit (server): d84a070

and

$ sudo docker info
Containers: 4
Images: 30
Storage Driver: devicemapper
 Pool Name: docker-8:5-1014-pool
 Pool Blocksize: 64 Kb
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 2430.9 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 2.6 Mb
 Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.16.3-1-ARCH
Operating System: Arch Linux
WARNING: No swap limit support
$ sudo docker version
Client version: 1.2.0
Client API version: 1.14
Go version (client): go1.3.1
Git commit (client): fa7b24f
OS/Arch (client): linux/amd64
Server version: 1.2.0
Server API version: 1.14
Go version (server): go1.3.1
Git commit (server): fa7b24f

unclejack (Contributor) commented Sep 26, 2014

I can't reproduce this problem at all. Are you sure you've got enough memory to run those containers?
slapd was only using around 90 MB of RAM and I don't think this has anything to do with Docker itself.

Please make sure your system has enough free RAM. If slapd is trying to allocate the same amount of memory as it did for me and there isn't enough free memory, I wouldn't be surprised to hear that it's failing.


xcJeroen commented Sep 26, 2014

My machine has 1G of RAM; it uses about 70M when it boots and shoots up to somewhere around 800M when I run the Docker container. I can run multiple Docker containers on there just fine, but not one created with that Dockerfile. I also had some Java applications that seemed to use much more RAM than they should. Perhaps that's related.

I guess I could create and tar up a VM that has this problem if that would help. I'm not sure whether that would be useful to you.


unclejack (Contributor) commented Oct 1, 2014

@xcJeroen Would you mind providing the exact amount of RAM used by the process you're saying is using twice the memory in a Docker container? 80 MB of RAM doesn't seem like a lot.


xcJeroen commented Oct 1, 2014

@unclejack The host OS (the VM) has 1G of memory. The Docker container running the slapd daemon uses 700M (making the total memory used on the VM 800M, not 80M), while the slapd daemon run directly on the VM only takes 4M.

If it used 80M that would still be 20 times the normal memory consumption, but at least I would be able to run multiple instances for testing and development environments. That is not the case, though: it uses about ten times what is already 20 times the normal memory usage, which means I need at least 1.5G of memory to run services that should be sort of an afterthought.


taylormadeapps commented Oct 14, 2014

I've got the same issue with my Docker build encircle/enactdsa-deb: slapd uses 700M of memory in a Docker Ubuntu container. My Docker host is a VM in the cloud (KVM, I think).


taylormadeapps commented Oct 15, 2014

This is happening with any slapd images from Docker Hub, e.g. encircle/enactdsa-deb. Tried with the lxc driver as well as native, with no difference.

I think it might have something to do with these cloud servers not using any swap space; Rackspace and Memset both don't use swap space.

Docker host OSes tried: Ubuntu 14.04 LTS & Debian wheezy.

Cloud servers: Rackspace and Memset.

Docker info output:
Containers: 2
Images: 187
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 191
Execution Driver: lxc-0.8.0-rc1
Kernel Version: 3.12.26-x1
Operating System: Debian GNU/Linux 7 (wheezy)
Username: ------
Registry: [https://index.docker.io/v1/]
WARNING: No swap limit support
root@encirab4:~#

Docker version output:
Client version: 1.2.0
Client API version: 1.14
Go version (client): go1.3.1
Git commit (client): fa7b24f
OS/Arch (client): linux/amd64
Server version: 1.2.0
Server API version: 1.14
Go version (server): go1.3.1
Git commit (server): fa7b24f


taylormadeapps commented

bump

nicot commented Nov 15, 2014

I am encountering this issue on a non-virtualized computer with 8 GiB of memory.

Memory usage in Docker (RSS, KiB): 740584
Memory usage outside of Docker (RSS, KiB): 4712

About 200x more memory in Docker than outside of it.

Output of docker info:

Containers: 88
Images: 167
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.17.2-1-ARCH
Operating System: Arch Linux
WARNING: No swap limit support

Output of docker version:

Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa

Steps to reproduce in Docker:

docker run -v /data/ldap:/var/lib/ldap \
           -e LDAP_DOMAIN=mycorp.com \
           -e LDAP_ORGANISATION="My Mega Corporation" \
           -e LDAP_ROOTPASS=s3cr3tpassw0rd \
           -d nickstenning/slapd
ps -e -orss=,args= | grep slapd

Steps to reproduce outside Docker:

sudo apt-get install slapd
ps -e -orss=,args= | grep slapd

Nick's ldap image is minimal, and changing to a completely bare one does not change memory usage.


ghost commented Nov 18, 2014

I've tried different versions of OpenLDAP, including the latest, with different distributions, and even compiled the latest source; the results are consistent: slapd uses around 700 MB.

$ docker version
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.2
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.2
Git commit (server): 4e9bbfa

$ docker info
Containers: 4
Images: 103
Storage Driver: aufs
 Root Dir: /data/docker/aufs
 Dirs: 111
Execution Driver: native-0.2
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux jessie/sid
WARNING: No memory limit support
WARNING: No swap limit support

My Dockerfile: https://gist.github.com/grokavi/8d6bf2a4840cceb7d07f


nicot commented Nov 18, 2014

Have you tried statically compiling it and moving the binary over to the scratch image?

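Something like this, as a rough sketch (assuming a statically linked slapd binary plus a minimal config already sitting in the build context; the file names and flags are placeholders):

cat > Dockerfile <<'EOF'
FROM scratch
# hypothetical layout: static slapd and a minimal config copied into an empty image
COPY slapd /slapd
COPY slapd.conf /slapd.conf
ENTRYPOINT ["/slapd", "-d", "1", "-f", "/slapd.conf"]
EOF
docker build -t slapd-scratch .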

ghost commented Nov 19, 2014

@nicot, I just did.

$ make clean; LDFLAGS="-static" ./configure && \
                     make depend LDFLAGS="-all-static" && make LDFLAGS="-all-static"
...
$ ldd ./servers/slapd/slapd     
        not a dynamic executable

No luck, still uses ~700MB.

$ ps -e -orss=,args= | grep slapd
731804 ./servers/slapd/slapd -d 1

ghost commented Nov 19, 2014

I tested with different versions of Docker on a virtual machine with 2GB of RAM on Ubuntu 12.04 with kernel version 3.16.5-x86_64. Clearly something happened in 1.0 of Docker. The statistics for 0.9.1 look fine.

All tests were carried out on debian:latest with manual installation of slapd and procps packages.

lxc-docker-0.9.1
root@56984c70ba05:/# ps -eo rss,vsz,size,args 
  RSS    VSZ  SIZE COMMAND
 2940  17852   468 /bin/bash
 8624 102892 20904 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d
lxc-docker-1.0.0
root@cf62afd7d71a:/# ps -eo rss,vsz,size,args    
  RSS    VSZ  SIZE COMMAND
 2952  17852   468 /bin/bash
370632 491260 409272 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d
lxc-docker-1.3.1
root@1515a9987ec0:/# ps -eo rss,vsz,size,args
  RSS    VSZ  SIZE COMMAND
 2976  17852   468 /bin/bash
370412 491260 409272 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d

ghost commented Nov 19, 2014

Sorry, didn't realise that there were other versions between 0.9.1 and 1.0. Looks like something happened in 0.10.

lxc-docker-0.10.0
root@13e9c873631b:/# ps -eo rss,vsz,size,args
  RSS    VSZ  SIZE COMMAND
 2952  17852   468 /bin/bash
370456 491260 409272 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d

ghost commented Nov 20, 2014

Any updates? Please let me know if you need anything more. This issue is quite serious and needs to be fixed urgently.


aidanhs (Contributor) commented Nov 20, 2014

Try stracing the static binary in 0.9.1 and 0.10.0 and putting the logs in a gist and posting them here. Docker may be misreporting memory, or something similar.


ghost commented Nov 20, 2014

@aidanhs Here is the gist of strace output from slapd with Docker versions 0.10.0 and 0.9.1 respectively.

https://gist.github.com/grokavi/736870f58be71462ff8b

If I limit the memory for this container, say to about 200MB, then the OOM killer kills slapd. So probably it is not a case of Docker misreporting memory usage.

Also there is heavy swapping on the server when I run a few more instances of this image.


ghost commented Nov 20, 2014

I can't make much of the output but here is the call to mmap that is not present in 0.9.1.

write(9, "/usr/sbin/slapd -h ldapi:/// -g "..., 80) = 80
close(9)                                = 0
munmap(0x7fa5f6ff9000, 4096)            = 0
mmap(NULL, 369102848, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fa5da711000
brk(0x7fa5f76ac000)                     = 0x7fa5f76ac000
stat("/var/lib/ldap", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0

aidanhs (Contributor) commented Nov 20, 2014

The OOM killer is presumably because docker is correctly setting limits on the namespace and the kernel is behaving as expected (my knowledge on namespaces is limited).
That doesn't mean that everything is being reported correctly to the child process (though it would be odd...).


Regarding the straces, I suggest you use meld with sed -i 's/0x[0-9a-z]*/0x001/g' fname. Some commentary below:

getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=4*1024}) = 0
getrlimit(RLIMIT_NOFILE, {rlim_cur=512*1024, rlim_max=1024*1024}) = 0

i.e. Docker 0.10.0 lets the process know it can open a huge number more fds than in 0.9.1. A few lines after that, we see an mmap (not the one you've pointed out) in Docker 0.10.0 - mmap(NULL, 29364224, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x001

Then (in 0.10.0) there's an epoll_create with an absolutely huge number passed as an arg:

epoll_create(1024)                      = 5
epoll_create(524288)                    = 6

That arg is apparently ignored (according to the manpages), but still a bit odd...

Much later we see the mmap you've identified (~350MB!), which happens instead of a brk in 0.9.1.

I'm pretty suspicious of the getrlimit, though I don't see why max number of file descriptors would influence the amount of memory the application will consume.
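
Concretely, the comparison I have in mind is roughly this (the log file names below are placeholders for the two straces in the gist):

# normalise addresses so the two logs can be diffed line by line
sed -i 's/0x[0-9a-z]*/0x001/g' slapd-0.9.1.strace slapd-0.10.0.strace
# then compare with meld, or plain diff
diff -u slapd-0.9.1.strace slapd-0.10.0.strace | less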


ghost commented Nov 20, 2014

@aidanhs: Thanks for your reply. By setting ulimit -n 1024 I was able to get the same memory usage as v0.9.1.

root@d288e2f8f5b2:/# ps -eo rss,args
  RSS COMMAND
 2976 /bin/bash
 8356 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d
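
For anyone who wants to apply the same workaround without rebuilding the image, something like this should also work (sketch only; my-ldap-image and the slapd flags are placeholders for whatever your image uses):

docker run -d my-ldap-image \
    sh -c 'ulimit -n 1024 && exec /usr/sbin/slapd -d 1 -h "ldap:/// ldapi:///"'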

aidanhs (Contributor) commented Nov 20, 2014

Nice to know my suspicion was correct.

I looked into the slapd code. The mmap is happening immediately after epoll_create, which I believe I've narrowed down to this in servers/slapd/daemon.c:

# define SLAP_SOCK_INIT(t)              do { \
        int j; \
        slap_daemon[t].sd_epolls = ch_calloc(1, \
                ( sizeof(struct epoll_event) * 2 \
                        + sizeof(int) ) * dtblsize * 2); \
        slap_daemon[t].sd_index = (int *)&slap_daemon[t].sd_epolls[ 2 * dtblsize ]; \
        slap_daemon[t].sd_epfd = epoll_create( dtblsize / slapd_daemon_threads ); \
        for ( j = 0; j < dtblsize; j++ ) slap_daemon[t].sd_index[j] = -1; \
} while (0)

So dtblsize (which influences the arg to epoll_create) is massive:

#ifdef HAVE_SYSCONF
        dtblsize = sysconf( _SC_OPEN_MAX );
#elif defined(HAVE_GETDTABLESIZE)
        dtblsize = getdtablesize();
#else /* ! HAVE_SYSCONF && ! HAVE_GETDTABLESIZE */
        dtblsize = FD_SETSIZE;
#endif /* ! HAVE_SYSCONF && ! HAVE_GETDTABLESIZE */
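
A back-of-the-envelope check lines up with the numbers in this thread (assuming sizeof(struct epoll_event) == 12 and sizeof(int) == 4 on x86_64, and dtblsize == 524288 from the raised nofile limit):

# SLAP_SOCK_INIT allocation: (sizeof(struct epoll_event)*2 + sizeof(int)) * dtblsize * 2
echo $(( (12*2 + 4) * 524288 * 2 ))   # 29360128, i.e. the ~29MB mmap noted above
# the failed ch_calloc in the original report: 1048576 elems of 704 bytes,
# which looks like another per-fd table sized from an even higher limit
echo $(( 1048576 * 704 ))             # 738197504, i.e. roughly the 700MB everyone is seeing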

So was the uncapping of open files deliberately changed between 0.9.1 and 0.10.0? Anyway, I'm done with looking into this.


unclejack (Contributor) commented Nov 20, 2014

@aidanhs The ulimit values were changed in the init scripts to allow more containers to be run. This particular problem seems like it's a slapd bug.


aidanhs (Contributor) commented Nov 20, 2014

I'm not sure I agree.

root@laptop:~# ulimit -a | grep open
open files                      (-n) 1024
root@laptop:~# docker run -i -t --rm ubuntu:14.04 bash
root@b2439c12a4d6:/# ulimit -a | grep open
open files                      (-n) 524288

It's not unreasonable for people to assume that the environment inside a container bears some resemblance to the one outside the container. Can't you drop the value to a sane default (i.e. 1024) when spawning a process in a container?


taylormadeapps commented Dec 2, 2014

Glad to report that if I put ulimit -n 1024 in my container startup scripts just before I launch slapd then the problem is resolved.

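A minimal version of such a startup script might look like this (a sketch; paths and slapd flags are placeholders rather than the exact script I use):

#!/bin/sh
# lower the inherited fd limit before slapd sizes its per-fd tables from it
ulimit -n 1024
exec /usr/sbin/slapd -d 1 -h "ldap:/// ldapi:///" -F /etc/ldap/slapd.d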

jessfraz (Contributor) commented Feb 26, 2015

looks like this was resolved, ping me here if otherwise


jessfraz closed this Feb 26, 2015

aidanhs (Contributor) commented Feb 26, 2015

Not fixed, but is superseded by #4717 - open files (-n) 524288.


cema-sp referenced this issue in cema-sp/iredmail-docker Apr 14, 2015

Closed

slapd failing #3

nblumoe added a commit to nblumoe/iredmail-docker that referenced this issue Apr 14, 2015

nblumoe referenced this issue in cema-sp/iredmail-docker Apr 14, 2015

Closed

Fix slapd memory consumption #5

cknitt referenced this issue in osixia/docker-openldap May 23, 2015

Merged

Limit max open file descriptors to fix slapd memory usage #9

chrisguidry commented Jun 3, 2015

👍 Same resolution here.

I experienced the same issue with my own slapd container (memory usage in the 500+ MB range), and resolved it by changing my Dockerfile to CMD ulimit -n 1024 && slapd -d1. Now the slapd container is at around 25MB.

This is on the hypriot/wheezy Raspberry Pi image with Docker 1.6.0.
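
Since this is Docker 1.6.0, the limit can also be dropped per container at run time instead of in the Dockerfile (if I remember correctly the flag was added in 1.6; the image name below is a placeholder):

docker run -d --ulimit nofile=1024:1024 my-slapd-image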


romansaul added a commit to leanix/docker-openldap that referenced this issue Jun 16, 2015

lifei pushed a commit to lifei/docker-openldap that referenced this issue Oct 16, 2015

jamshid referenced this issue in nickstenning/docker-slapd Apr 5, 2017

Open

Reduce high memory usage by setting "ulimit -n" #8

gcavalcante8808 added a commit to gcavalcante8808/docker-openldap that referenced this issue Aug 2, 2017
