Allow setting of shm-size for docker service create #26714

Open
dskdas opened this Issue Sep 19, 2016 · 23 comments

Comments

dskdas commented Sep 19, 2016

Is there a way I can specify the --shm-size parameter for containers created via docker service create?

I am using Docker 1.12.

dskdas changed the title from Allow setting of shm-size for docker swarm service create to Allow setting of shm-size for docker service create Sep 19, 2016

justincormack (Contributor) commented Sep 19, 2016

Hi, there is a general issue, #25303, for adding the missing options to swarm mode, which links to the detailed issues. There does not seem to be a PR for this yet.

thaJeztah (Member) commented Sep 19, 2016

Added a link to this issue from #25303.

dskdas commented Sep 19, 2016

Thanks for the responses, that helps clarify. In the meantime, are any workarounds possible?

I guess I could create an overlay network using a KV store instead and use docker run on each node for now.

aluzzardi (Contributor) commented Sep 20, 2016

@dskdas Out of curiosity - what's the use case for this?

/cc @stevvooe

aluzzardi assigned stevvooe and unassigned stevvooe Sep 20, 2016

dskdas commented Sep 28, 2016

@stevvooe We have an application that relies on shared memory for communication between modules. The data volumes involved require more shared memory than the default a container gets.

shankarkc commented Nov 23, 2016

Any ETA for this?

My use case: I am creating a Selenium grid in swarm mode and use docker service to scale up and down, for example:

docker service create --replicas 20 --name SeleniumNode --limit-memory 2048M --restart-condition any --restart-delay 5s --stop-grace-period 10s --constraint engine.labels.seleniumNodeType==node --network overlayNet --env hub_name=SeleniumHub --env node_max_memory=1536 node:0.0.18

Things work initially with this, but after some time our containers start crashing. When we checked Chrome bug reports we found that a low shm size, such as the default 64MB, causes this. As I am using swarm mode and services I cannot pass --shm-size=1g; it always takes the default value. Running individual containers is not an option as I lose the advantages of a service. It would be nice if you could fix this soon.

zoltantarcsay commented Dec 1, 2016

@dskdas I've just noticed that there is a --shm-size option for the docker build command which seems to work. Could that be a workaround (i.e. you can inherit the original image in a Dockerfile and build your own with this flag)?

thaJeztah (Member) commented Dec 1, 2016

I commented on docker/swarmkit#1030 (comment), but let me copy it here: this may require Docker 1.13, but you can set a custom shm-size by adding a tmpfs mount:

--mount type=tmpfs,target=/dev/shm,.....
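
A fuller sketch of that workaround (service and image names are placeholders; the tmpfs-size option, given in bytes, is the one used later in this thread):

docker service create \
  --name myservice \
  --mount type=tmpfs,dst=/dev/shm,tmpfs-size=1000000000 \
  myimage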
shankarkc commented Dec 1, 2016

zoltantarcsay commented Dec 1, 2016

@shankarkc you're right :(

thaJeztah (Member) commented Dec 1, 2016

--shm-size is a runtime option and is not part of the image's configuration. When used during docker build it is applied to the containers started for each build step.

This is by design; runtime options should not be dictated by the image that is created, but should stay in the control of the person running the image.

@zoltantarcsay did you try the --mount option I mentioned? /dev/shm is just a tmpfs mount, so using --mount to add a tmpfs to the service should give you exactly the same result as setting --shm-size.

zoltantarcsay commented Dec 1, 2016

@thaJeztah I installed docker-engine-1.13.0-rc2 and ran:

docker service create \
  --name tmpfstest \
  --mount type=tmpfs,dst=/dev/shm,tmpfs-size=1000000000 \
  nginx

However, regardless of the size I set, /dev/shm always ends up being 497M. Am I doing it wrong? I was following these docs: https://github.com/docker/docker/blob/master/docs/reference/commandline/service_create.md#options-for-tmpfs

shankarkc commented Dec 2, 2016

thaJeztah (Member) commented Dec 2, 2016

@shankarkc @zoltantarcsay we found an issue in the current RC; it will be fixed in the next RC, which will be released Monday. See #29070.

zoltantarcsay commented Dec 3, 2016

Awesome, much appreciated.

davidthornton commented Jul 27, 2017

Is there any bandwidth to support this key "natively", i.e. as a top-level key in docker-compose.yml in Docker Cloud?

EliSnow commented Oct 20, 2017

I could be looking at it wrong, but it looks like the tmpfs mount workaround is no longer working.

I specified --mount type=tmpfs,dst=/dev/shm,tmpfs-size=768000000 for my service, yet df -k /dev/shm reports:

Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs              65536     4     65532   1% /dev/shm

This is on boot2docker, Docker 17.10.

EliSnow commented Oct 20, 2017

I just confirmed: it looks like there was a regression between 17.06.2 and 17.09.

I tested different versions of boot2docker with the following service:

docker service create \
  --name tmpfstest \
  --mount type=tmpfs,dst=/dev/shm,tmpfs-size=1000000000 \
  --tty \
  debian:stretch-slim cat

It works on 17.06.2, but no longer works after that.

Edit: narrowing things down a bit more, it's broken even with 17.09-rc1. I would check earlier versions, but it doesn't look like there were ce/boot2docker releases between 17.06 and 17.09.

thaJeztah (Member) commented Oct 21, 2017

@EliSnow can you open a separate issue for that, so that we can track it?

luochen1990 commented Nov 16, 2017

@thaJeztah the --mount option works well for me with 17.06.2, but I don't know how to specify the tmpfs-size option in a docker-compose file; the docs on the volumes long syntax don't mention it.

thaJeztah (Member) commented Nov 16, 2017

@luochen1990 good question. Looking at the docker-compose schema for the 3.4 version, it may not be supported yet (the long syntax for docker service is described in this section of the docs); looking at that, I think tmpfs-size and tmpfs-mode are not yet supported in the compose-file format.

Can you open a feature request for that in the https://github.com/docker/cli/issues issue tracker? That's where it needs to be added.
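
For reference, a hypothetical sketch of how a tmpfs size might be expressed in the compose long syntax if support were added (not accepted by the 3.4 schema discussed above; service and image names are placeholders):

services:
  myservice:
    image: myimage
    volumes:
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 1000000000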

stowns commented Nov 21, 2017

Also seeing the --mount workaround regression mentioned by @EliSnow:

docker service create --name my-service --mount type=tmpfs,dst=/dev/shm,tmpfs-size=4294967296 my-image

This worked with 17.06 but does not with 17.09.0-ce on OS X. Anyone have any suggestions? I'm trying to run the official Oracle XE database image, which requires an shm-size of at least 1g:

https://github.com/oracle/docker-images/tree/master/OracleDatabase

Edit: working around this for now by setting default-shm-size in the daemon config.
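
A minimal sketch of that daemon-level workaround, assuming the daemon config lives at /etc/docker/daemon.json (this raises the default for every container on the node, and the daemon needs a restart to pick it up):

{
  "default-shm-size": "1G"
}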

taylorludwig commented Nov 22, 2017

I got around it by setting a bind mount to the host's /dev/shm, which overrides the default 64MB mount:

--mount type=bind,src=/dev/shm,dst=/dev/shm

If you don't want to share with the host or between multiple containers, you could create a separate src location on the host first, as sketched below.
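
A minimal sketch of that variant, assuming a dedicated tmpfs on each node (the path, size, and service/image names are placeholders):

# on each node: back the service's /dev/shm with its own tmpfs
sudo mkdir -p /var/lib/myservice-shm
sudo mount -t tmpfs -o size=1g tmpfs /var/lib/myservice-shm

# bind-mount that directory instead of the host's own /dev/shm
docker service create \
  --name myservice \
  --mount type=bind,src=/var/lib/myservice-shm,dst=/dev/shm \
  myimage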
