`/sys/devices/system/cpu/online` not container aware #301
Yeah, there is a bug in the generation of the virtualized `/sys/devices/system/cpu/online` contents.
@brauner can you take a look? For some reason we appear to be hitting some fallback case where we get the host value rather than something which matches our task's cpuset configuration.
@stgraber Thanks for looking into this. I just noticed the following:
I don't see
I also ran into this problem; the following software reads `/sys/devices/system/cpu/online`.
Note: nginx uses it as well. So, is there a plan to support virtualizing this file?
There is support in lxcfs to virtualize `/sys/devices/system/cpu/online`, as can be seen in my listing above on a suitably recent LXCFS; there is, however, an issue with the way it's rendered.
@yinhongbo thanks for submitting the PR. Not sure how I can test this in a snap-installed version of LXD. I'll try to figure it out, but I would appreciate it if you could give me some pointers on how to do this. Until then, I'll try poking around the LXD 3.17 snap.
Still not working. I start lxcfs (latest master branch):
All mounts are OK:
On host:
I do:
Then I start the container (name:
And do on host:
SSH into
OK, but:
Not OK :( It's something in the cgfs_get_value function in bindings.c, I think - is this function reading the wrong file?
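A quick way to sanity-check the virtualized file from inside a container is to parse its cpulist contents (strings like `0` or `0-3,5`) and compare them against the affinity mask the kernel actually enforces. A minimal sketch; the parsing helper is my own illustration, not part of lxcfs:

```python
import os

def parse_cpulist(text):
    """Parse a kernel cpulist string such as '0-3,5' into a set of CPU indices."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

# Example: '0-2,4' covers CPUs 0, 1, 2 and 4.
print(sorted(parse_cpulist("0-2,4")))  # [0, 1, 2, 4]

# Inside a container you would compare the virtualized file against
# the task's actual affinity mask:
# online = parse_cpulist(open("/sys/devices/system/cpu/online").read())
# assert online == os.sched_getaffinity(0)
```

If the two sets disagree, the `online` file is showing the host view rather than the container's cpuset, which is the bug described above.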
I think I know the reason for this problem. I am submitting a PR and will notify you after it's merged.
It works very well! Sorry for the gif, but it's easier to show it this way ;) Thanks for this patch!
Sounds like this got resolved with #307.
A deterministic number of CPUs is important for the percpu arena to work correctly, since it uses the CPU index from sched_getcpu(); if that index is greater than the number of CPUs, bad things will happen, or an assertion will fail in a debug build:

<jemalloc>: ../contrib/jemalloc/src/jemalloc.c:321: Failed assertion: "ind <= narenas_total_get()"
Aborted (core dumped)

The number of CPUs can be obtained from the following places:
- sched_getaffinity()
- sysconf(_SC_NPROCESSORS_ONLN)
- sysconf(_SC_NPROCESSORS_CONF)

For sched_getaffinity() you may simply use taskset(1) to run a program on a different CPU, and if it is not the first one, percpu arenas will work incorrectly, i.e.:

$ taskset --cpu-list $(( $(getconf _NPROCESSORS_ONLN)-1 )) <your_program>

_SC_NPROCESSORS_ONLN uses /sys/devices/system/cpu/online. LXD/LXC virtualize the /sys/devices/system/cpu/online file [1], so when you run a container with limited limits.cpus it will bind a randomly selected CPU to it.

[1]: lxc/lxcfs#301

_SC_NPROCESSORS_CONF uses /sys/devices/system/cpu/cpu*, and AFAIK nobody plays with dentries there.

So if all three of these are equal, percpu arenas should work correctly. A small note regarding _SC_NPROCESSORS_ONLN/_SC_NPROCESSORS_CONF: musl uses sched_getaffinity() for both, so this also increases the entropy. Also note that you can check whether the percpu arena is really applied using abort_conf:true.

Refs: jemalloc/jemalloc#1939
Refs: ClickHouse/ClickHouse#32806

v2: move malloc_cpu_count_is_deterministic() into malloc_init_hard_recursible(), since _SC_NPROCESSORS_CONF does allocations for readdir()
v3:
- mark cpu_count_is_deterministic static
- check only if percpu arena is enabled
- check narenas
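The three sources listed in the commit message can be queried directly from Python's `os` module, which wraps the same glibc interfaces. A sketch for checking whether the CPU count is deterministic in a given environment (note that which file glibc consults for each sysconf value can vary by version, so treat this as a diagnostic, not a guarantee):

```python
import os

# Number of CPUs the scheduler will actually run this task on
# (equivalent to sched_getaffinity(2)).
affinity = len(os.sched_getaffinity(0))

# Online CPUs: glibc derives this from /sys/devices/system/cpu/online,
# which lxcfs may virtualize inside a container.
onln = os.sysconf("SC_NPROCESSORS_ONLN")

# Configured CPUs: counted from the /sys/devices/system/cpu/cpu* entries,
# which (per the commit message) are typically not virtualized.
conf = os.sysconf("SC_NPROCESSORS_CONF")

print("affinity:", affinity, "online:", onln, "configured:", conf)

# jemalloc's percpu arena assumes all three agree; inside a container
# with a virtualized sysfs, or under taskset, they may diverge.
deterministic = (affinity == onln == conf)
print("deterministic:", deterministic)
```

Running this under `taskset --cpu-list 1 python3 check.py` on a multi-core host should show `affinity` dropping to 1 while the other two values stay at the host count, reproducing the nondeterminism the commit message describes.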
I have noticed that containers see all cores when I inspect `/sys/devices/system/cpu/online`, even though they have been restricted to a single core. `/proc/cpuinfo` seems to be reporting correctly.

For example, I have a container `test`:

Which is the expected output. However:
I tested this using the following snap-installed versions of LXD:
- (3.0/stable channel)
- (3.16/stable channel)

More info from my server:
Seems like there was a merge in May 2019 to address this. The comments on this merge make it seem like this should be enabled by default.
I discovered this while setting up a Kubernetes cluster with Rancher, where I used LXD containers as nodes, and posted an issue in the Rancher repo. Rancher seems to be using `/sys/` to enumerate CPU and memory resources on nodes, which is why I need LXCFS to fix my problem.

I also posted about this on the LXD forum.
Please let me know if you need me to provide additional information.
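The mismatch the report describes (`/proc/cpuinfo` virtualized correctly, `online` still showing the host range) can be checked with a few lines reading the standard Linux paths. A sketch for reproducing the comparison inside a container:

```python
def cpuinfo_count(path="/proc/cpuinfo"):
    """Count 'processor' entries, i.e. the CPUs /proc/cpuinfo reports."""
    with open(path) as f:
        return sum(1 for line in f if line.startswith("processor"))

def online_range(path="/sys/devices/system/cpu/online"):
    """Return the raw cpulist string from the sysfs online file."""
    with open(path) as f:
        return f.read().strip()

print("cpuinfo processors:", cpuinfo_count())
print("online:", online_range())

# In an affected container: cpuinfo shows the restricted count (e.g. 1)
# while online still shows the host range (e.g. '0-31').
```

When lxcfs is working correctly, a container limited to one core should print `1` and `0` respectively; the bug in this issue makes the second line show the host's full range instead.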