
Support Posix Shared Memory across containers in a pod #28272

Open
CsatariGergely opened this issue Jun 30, 2016 · 90 comments
Labels: kind/feature sig/node triage/accepted

@CsatariGergely commented Jun 30, 2016

Docker implemented a modifiable shm size (see 1) in version 1.9. It should be possible to define the shm size of a pod in the API, and Kubernetes should pass this information to Docker.

@Random-Liu (Member) commented Jun 30, 2016

Also ref #24588 (comment), in which we also discussed whether we should expose shmsize in pod configuration.

@janosi (Contributor) commented Jun 30, 2016

I am not sure I see a discussion about exposing ShmSize on the Kubernetes API in that issue :( As I understand it, that discussion is about how to use the Docker API after it introduced the ShmSize attribute.

@Random-Liu (Member) commented Jun 30, 2016

> I would like kube to set an explicit default ShmSize using the option 1 proposed by @Random-Liu and I wonder if we should look to expose ShmSize as a per container option in the future.

I should say "in which we also mentioned whether we should expose shmsize in container configuration."

@janosi (Contributor) commented Jun 30, 2016

@Random-Liu All right, thank you! I missed that point.

@j3ffml j3ffml added sig/node team/ux labels Jun 30, 2016
@dims (Member) commented Jul 8, 2016

@janosi @CsatariGergely - is the 64m default not enough? What would be the best way to make it configurable for your use? (Pass a parameter on the kubelet command line?)

@janosi (Contributor) commented Jul 11, 2016

@dims Or maybe it is too much to waste? ;)
But yes, sometimes 64m is not enough.
We would prefer a new optional attribute for the pod in PodSpec in the API, e.g. "shmSize".
As shm is shared among containers in the pod, PodSpec would be the appropriate place, I think.

@pwittrock pwittrock removed the team/ux label Jul 18, 2016
@janosi (Contributor) commented Sep 2, 2016

We have a chance to work on this issue now. I would like to align on the design before applying it to the code. Your comments are welcome!

The change to the versioned API would be a new field in type PodSpec:

    // Optional: Docker "--shm-size" support. Defines the size of /dev/shm in a Docker-managed container.
    // If not defined here, Docker uses a default value.
    // Cannot be updated.
    ShmSize *resource.Quantity `json:"shmSize,omitempty"`
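
For illustration, a pod manifest using this proposed field could look like the sketch below. Note this is only a proposal; the shmSize field does not exist in the API today, and the pod/image names are just placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shm-demo            # placeholder name
    spec:
      shmSize: 1Gi              # proposed field: size of /dev/shm shared by all containers in the pod
      containers:
      - name: app
        image: example/app:latest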

@ddysher (Contributor) commented Sep 21, 2016

@janosi Do you have a patch for this? We currently hit this issue running a database on k8s and would like to have the shm size configurable.

@janosi (Contributor) commented Sep 22, 2016

@ddysher We are working on it. We will send the PRs in the next few weeks.

@wstrange (Contributor) commented Oct 5, 2016

Just want to chime in that we are hitting this problem as well.

@gjcarneiro commented Nov 9, 2016

Hi, is there any known workaround for this problem? I need to increase the shmem size to at least 2GB, and I have no idea how.

@janosi (Contributor) commented Nov 11, 2016

@ddysher @wstrange @gjcarneiro Please share your use cases with @vishh and @derekwaynecarr on pull request #34928. They have concerns about extending the API with this shmSize option and have different solution proposals. They would like to understand whether users really require this in the API, or whether the shm size could be adjusted by k8s automatically to some calculated value.

@gjcarneiro commented Nov 11, 2016

My use case is a big shared memory database, typically on the order of 1 GiB, but we usually reserve 3 GiB of shared memory space just in case it grows. This data is constantly being updated by a writer (a process) and must be made available to readers (other processes). Previously we tried a Redis server for this, but the performance of that solution was not great, so shared memory it is.

My current workaround is to (1) mount a tmpfs volume at /dev/shm, as in this OpenShift article, and (2) make the writer and reader processes all run in the same container.
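
A minimal sketch of step (1), using a memory-backed emptyDir volume; the volume and pod names are just examples, and sizeLimit enforcement for memory-backed volumes can depend on the cluster version:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shm-workaround      # placeholder name
    spec:
      containers:
      - name: app
        image: example/app:latest
        volumeMounts:
        - name: dshm
          mountPath: /dev/shm
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory        # backed by tmpfs
          sizeLimit: 3Gi        # matches the 3 GiB reservation mentioned above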

@wstrange (Contributor) commented Nov 11, 2016

My use case is an Apache policy agent plugin that allocates a very large (2GB) cache. I worked around it by setting a very low shm value. This is OK for development, but I need a solution for production.

Adjusting shm size dynamically seems tricky. From my perspective, declaring it as a container resource would be fine.

@ddysher (Contributor) commented Nov 11, 2016

My use case is running a database application on top of Kubernetes that needs at least 2GB of shared memory. Right now, we just set a large default; it would be nice to have a configurable option.

@vishh (Member) commented Nov 11, 2016

@ddysher @wstrange @gjcarneiro Do your applications dynamically adjust their behavior based on the shm size? Will they be able to function if the default size is >= the pod's memory limit?

@wstrange (Contributor) commented Nov 11, 2016

The shm size is configurable only when the application starts (i.e., you can say "only use this much shm").

It cannot be adjusted dynamically.

@vishh (Member) commented Nov 11, 2016

@wstrange Thanks for clarifying.

@ddysher (Contributor) commented Nov 13, 2016

@vishh We have the same case as @wstrange. shm size doesn't need to be adjusted dynamically.

@gjcarneiro commented Nov 13, 2016

Same for me, shm size is a constant in a configuration file.

@vishh (Member) commented Nov 14, 2016

Great. In that case, the kubelet can set the default size of /dev/shm to the pod's memory limit. Apps will have to be configured to use a value for shm that is less than the pod's memory limit.
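
For example, under this approach a pod with a memory limit, like the sketch below (names are placeholders), would get a /dev/shm sized to that 2Gi limit:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shm-from-limit      # placeholder name
    spec:
      containers:
      - name: app
        image: example/app:latest
        resources:
          requests:
            memory: 2Gi
          limits:
            memory: 2Gi         # under this proposal, /dev/shm would default to 2Gi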

@vishh vishh self-assigned this Nov 14, 2016
@vishh vishh added this to the v1.6 milestone Nov 14, 2016
@elyscape (Contributor) commented Nov 16, 2016

@vishh What if there is no memory limit imposed on the application? For reference, it looks like Linux defaults to half of the total RAM.

@janosi (Contributor) commented Nov 21, 2016

@vishh you can close the PR if you think so.

@fejta-bot commented Feb 28, 2021

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Feb 28, 2021
@arunmk commented Mar 3, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Mar 3, 2021
@fejta-bot commented Jun 1, 2021

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jun 1, 2021
@matti commented Jun 1, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jun 1, 2021
@Kaka1127 commented Jun 18, 2021

Any update on this?

@ehashman (Member) commented Jun 24, 2021

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature label Jun 24, 2021
@kebyn commented Jul 7, 2021

I tried this method:

    volumeMounts:
    - mountPath: /dev/shm
      name: shm
  volumes:
    - name: shm
      emptyDir:
        medium: Memory
        sizeLimit: 5120Mi

@Kaka1127 commented Jul 7, 2021

    volumeMounts:
    - mountPath: /dev/shm
      name: shm
  volumes:
    - name: shm
      emptyDir:
        medium: Memory
        sizeLimit: 5120Mi

Unfortunately, this did not work for me.

@n4j (Member) commented Jul 9, 2021

/triage accepted

@k8s-ci-robot k8s-ci-robot added the triage/accepted label Jul 9, 2021
@k8s-triage-robot commented Oct 7, 2021

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Oct 7, 2021
@remram44 commented Oct 7, 2021

/remove-lifecycle stale

I didn't think accepted issues would go stale 🤔

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Oct 7, 2021
@swapkh91 commented Dec 7, 2021

Any update on this?
I'm trying to deploy Triton Inference Server using KServe and need to change shm-size.

@counter2015 commented Dec 27, 2021

I tried this method:

    volumeMounts:
    - mountPath: /dev/shm
      name: shm
  volumes:
    - name: shm
      emptyDir:
        medium: Memory
        sizeLimit: 5120Mi

@kebyn thanks, it worked for me.

@remram44 commented Dec 27, 2021

Is this fixed now that #94444 is merged?

@k8s-triage-robot commented Mar 27, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Mar 27, 2022
@mitar (Contributor) commented Apr 6, 2022

/remove-lifecycle stale

It would be nice to get a confirmation that this is fixed.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Apr 6, 2022
@k8s-triage-robot commented Jul 5, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jul 5, 2022
@mitar (Contributor) commented Jul 5, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jul 5, 2022