
kernel.shm* settings to work in lxc containers #989

Closed
matt4d617474 opened this issue Apr 28, 2016 · 17 comments

Comments

@matt4d617474

Per https://github.com/lxc/lxd/issues/1351, stgraber mentioned that LXC should allow a read-write bind mount of these /proc/sys/kernel/shm* tunables. At present, writing to them yields a read-only filesystem error.

Alternatively, if the physical host's setting were passed down (for shmmax; it currently is not), that would also ease a lot of my troubles with older PostgreSQL installations.

Thanks much!

Matt

@brauner
Member

brauner commented Feb 9, 2018

LXC master has support for all /proc/sys kernel tunables through:

lxc.sysctl.[kernel parameter name]
              Specify the kernel parameters to be set. The parameters available are those listed under /proc/sys/. Note that not all
              sysctls are namespaced; changing non-namespaced sysctls will modify the system-wide setting. See sysctl(8). If used with
              no value, lxc will clear the parameters specified up to this point.
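For example, the keys go straight into the container's config file. A minimal sketch (the values are illustrative; kernel.shmmax is in bytes, kernel.shmall in pages):

```
# Raise SysV shared-memory limits for this container
lxc.sysctl.kernel.shmmax = 17179869184
lxc.sysctl.kernel.shmall = 4194304
```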

So this can be closed.

@brauner brauner closed this as completed Feb 9, 2018
@johaven

johaven commented Dec 18, 2018

Is this implemented as of lxc 3.0?

vm 100 - unable to parse config: lxc.sysctl.kernel.shmmax: 17179869184

@brauner
Member

brauner commented Dec 18, 2018

Are you using LXD or LXC? If LXD, please show:

lxc info

and the container's config.

@johaven

johaven commented Dec 18, 2018

LXC from Proxmox 5.3-1 (no lxc command)

root@pvx:/etc/pve/lxc# dpkg -l | grep lxc
ii  lxc-pve                              3.0.2+pve1-5                   amd64        Linux containers userspace tools
ii  lxcfs                                3.0.2-2                        amd64        LXC userspace filesystem

Container config:

vm 100 - unable to parse config: lxc.sysctl.kernel.shmmax: 17179869184
arch: amd64
cores: 2
hostname: gitlab
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.x,hwaddr=00:00:00:00:aa:fc,ip=x.x.x.x/32,type=veth
onboot: 1
ostype: centos
rootfs: local-data:vm-100-disk-0,size=8G
swap: 2048

@brauner
Member

brauner commented Dec 18, 2018

@Blub?

@Blub
Member

Blub commented Jan 4, 2019

See https://bugzilla.proxmox.com/show_bug.cgi?id=1785. Sorry for the delay; preparing a patch for this...

@okuryan

okuryan commented May 29, 2019

@Blub, we do not see any patch or related commits. The bug is marked as resolved and this issue is also closed. Where is the patch?

@okuryan

okuryan commented May 29, 2019

@Blub, just to clarify: I'm using Proxmox 5.4-4 with LXC. I have a VM, and I manually add the following parameter to the file /var/lib/lxc/109/config:

lxc.sysctl.kernel.shmmax = 17179869184

But after the VM restarts, this parameter disappears!

@bugz8unny69

Hello there,

That can be explained easily: don't place the config there! That location is volatile. For persistence, it should be written to /etc/pve/lxc/109.conf!
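For illustration, the tail of /etc/pve/lxc/109.conf might then look like this (the shmmax value is the one quoted above; Proxmox .conf files use colon-separated keys, and whether the raw lxc.* key is accepted depends on the pve-common version carrying the patch):

```
# /etc/pve/lxc/109.conf (tail) -- raw lxc keys go after the Proxmox options
lxc.sysctl.kernel.shmmax: 17179869184
```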

Regards

@johaven

johaven commented May 29, 2019

If I put lxc.sysctl.kernel.shmmax = 17179869184 in my 105.conf file, the container fails to start.

This is not working, and I have the latest Proxmox update (Linux pvx-center 4.15.18-14-pve - SMP PVE 4.15.18-39).

@bugz8unny69

Report the issue on the PVE forums? I have Proxmox 5.3, but IIRC the latest one is 5.5-6 or 5.6; not too sure. Looking at the bug report, the patch is somewhere on the dev mailing list; I would have to search for it. And if I am reading the report correctly, the affected component is pve-common.

Regards

@bugz8unny69

bugz8unny69 commented May 29, 2019

Also:

https://bugzilla.proxmox.com/show_bug.cgi?id=1785#c5

There's a patch on the list for this now. Note that the lxc.sysctl.* keys require write access to /proc/sys so they only work with lxc.mount.auto: proc:rw.

From my understanding, for lxc.sysctl.kernel.shmmax to work, you need to include the above.
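As a hedged sketch of that combination for a plain LXC config (the shmmax value is the example from earlier in the thread):

```
# lxc.sysctl.* needs write access to /proc/sys, so /proc must be mounted rw
lxc.mount.auto = proc:rw
lxc.sysctl.kernel.shmmax = 17179869184
```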

Cheers

@okuryan

okuryan commented May 30, 2019

@LHorace, but doing lxc.mount.auto: proc:rw will make the global kernel config editable from an individual VM.
That means kernel config modifications that anybody makes in one individual VM could break the whole virtualization system, and all of our 50+ servers would just stop working.

So that recommendation doesn't make sense at all.

We just need these configs to be applied to an individual VM without affecting other VMs.
Does LXC virtualization even support this?

@bugz8unny69

@okuryan

individual VM

You mean containers?

We just need this configs to be applied on individual VM without affecting other VMs
Does LXC virtualization even supports this?

Again, you mean containers, right? LXC is not a virtualization technology but a containment technology. IDK, isn't /proc namespaced? Besides, this issue and the Proxmox bug report are about enabling support for lxc.sysctl.*. If that's not what you want, perhaps you should open a new issue? You can also raise the issue on the forums or on the Proxmox bug report. Lastly, what happens if you don't put lxc.mount.auto: proc:rw but still include your lxc.sysctl.* settings? If you are up to date and still having problems starting the container, then raise a bug report at http://bugzilla.proxmox.com/

@okuryan

okuryan commented May 30, 2019

@LHorace, thanks for the response.
Yes, when I say VM, I mean container. Sorry for my terminology; I'm not so experienced in these things.

Lastly, what happens if you don't put lxc.mount.auto: proc:rw but still include you lxc.sysctl.* settings, what happens then?

And to answer your question: if I leave out the proc:rw option and add a lxc.sysctl.* setting, the container just fails to start, because it cannot set the option while the sysctl configs are read-only. And that doesn't make sense to me: why would I want to make it rw just to apply predefined configs? Anyway, I do not want users to manually edit kernel options inside the container themselves.
So you're saying this is a Proxmox bug? If so, I will immediately submit it to the Proxmox team.

@okuryan

okuryan commented May 30, 2019

@LHorace, one more point. As I understand it, LXC is a shared-kernel solution. So maybe what I'm saying doesn't make sense at all, because all those sysctl options are global and always apply to the host machine? Just because LXC works like this? Maybe I just need to switch to KVM and that's it?

@bugz8unny69

bugz8unny69 commented May 30, 2019

And to answer your question, if not putting proc:rw option and put lxc.sysctl.* setting

I would like to clarify two things here:

  1. With proc:rw, only the container itself is affected
  2. Isn't root required inside the container to make those changes?

So maybe what I'm saying doesn't make sense at all, because all those sysctl options are global and applied to host machine always?

My understanding is that /proc is namespaced; developers could chime in and clarify if my assumption is incorrect. So only the container itself should be affected by changes in /proc. http://man7.org/linux/man-pages/man7/pid_namespaces.7.html didn't shed any light on this.
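One unprivileged way to see what a given context observes (assuming a Linux host or container) is simply to read the tunables from /proc/sys; inside a container these reads go through the container's IPC namespace, so they need not match the host:

```shell
# Read the current SysV shared-memory limits. kernel.shm* is governed by
# the IPC namespace, so a container with its own IPC namespace sees its
# own values rather than the host's.
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmall
```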

container will just fail to start, because it cannot set this option as sysctl configs are readonly. And that doesn't make sense to me, as why I want to make it rw, just to put predefined configs?
So you re saying this is proxmox bug? If yes than I will immediately submit it to proxmox team.

Reviewing what you posted, this seems like a limitation of either the kernel or LXC. In that case it is unrelated to this issue; perhaps you should open a new issue describing the problem to the LXC developers? A Proxmox bug report with a link to the GitHub issue could help sort all this out for you.
