
Unable to create pods on k3s-agent running on Gentoo (openrc, arm64) #3815

Closed

samip5 opened this issue Aug 10, 2021 · 4 comments

Comments

samip5 commented Aug 10, 2021

Environmental Info:
k3s version v1.21.3+k3s1 (1d1f220)
go version go1.16.6

Node(s) CPU architecture, OS, and Version:
Linux k8s-worker6 5.10.11-v8-p4 #1 SMP PREEMPT Tue Apr 27 18:58:07 -00 2021 aarch64 GNU/Linux
Linux k8s-worker5 5.4.0-1028-raspi #31-Ubuntu SMP PREEMPT Wed Jan 20 11:30:45 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
Linux k8s-worker4 5.4.0-80-generic #90-Ubuntu SMP Fri Jul 9 22:49:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Linux k8s-worker3 5.4.0-1041-raspi #45-Ubuntu SMP PREEMPT Thu Jul 15 01:17:56 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
Linux k8s-worker2 5.4.0-1041-raspi #45-Ubuntu SMP PREEMPT Thu Jul 15 01:17:56 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
Linux k8s-worker1 5.4.0-1041-raspi #45-Ubuntu SMP PREEMPT Thu Jul 15 01:17:56 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
Linux KubeMaster 5.4.0-1041-raspi #45-Ubuntu SMP PREEMPT Thu Jul 15 01:17:56 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:

1 server, 6 agents

Describe the bug:

The k3s agent installed on an OpenRC-based system fails to create pods with the error: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount.

Steps To Reproduce:

  • Installed K3s as an agent on Gentoo Linux (ARM64):

    curl -sfL https://get.k3s.io | K3S_URL=https://192.168.2.9:6443 K3S_TOKEN=<omitted> sh -
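
A quick post-install check (not from the original report; "k3s-agent" is the service name the install script registers when K3S_URL is set, but verify on your system):

    # On the agent: check the OpenRC service the installer registered
    rc-service k3s-agent status
    # On the server: the new node should show up once the agent joins
    kubectl get nodes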

Expected behavior:

I expected it to provision the Calico pods properly.

Actual behavior:

Pod containers fail to be created:

Error: failed to generate container "cc6039d26055b68bd0f329d70a25470dbb932fe4b73b9665af91e562240935ac" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
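
This error comes from the container runtime checking mount propagation on the host. As a diagnostic sketch (not from the original thread), util-linux's findmnt can show the propagation flags for the mounts named in the errors; on systemd hosts these are typically "shared", while a default OpenRC boot can leave them "private":

    # Show propagation flags for the mounts named in the errors
    findmnt -o TARGET,PROPAGATION /
    findmnt -o TARGET,PROPAGATION /sys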

Additional context / logs:

E0810 13:48:47.612207    2105 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to generate container \"de9df36a5fd9248cc1e8d1838ed8c9620cbd5f7dce70e37638726874a86d33a8\" spec: failed to generate spec: path \"/sys/fs/\" is mounted on \"/sys\" but it is not a shared mount" podSandboxID="4ed3f7ae9368eec9224d499d28dd1e14ba93838cdc0075b6dbb3bcfb7455736f"
E0810 13:48:47.613127    2105 kuberuntime_manager.go:864] container &Container{Name:calico-node,Image:docker.io/calico/node:v3.19.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:calico_backend,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:CLUSTER_TYPE,Value:k8s,bgp,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP6,Value:autodetect,ValueFrom:nil,},EnvVar{Name:CALICO_IPV4POOL_IPIP,Value:Never,ValueFrom:nil,},EnvVar{Name:CALICO_IPV4POOL_VXLAN,Value:Never,ValueFrom:nil,},EnvVar{Name:FELIX_IPINIPMTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:FELIX_VXLANMTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:FELIX_WIREGUARDMTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:sysfs,ReadOnly:false,MountPath:/sys/fs/,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:cni-log-dir,ReadOnly:true,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ngwqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/calico-node -felix-live -bird-live],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/calico-node -felix-ready 
-bird-ready],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod calico-node-6q2sp_kube-system(598c51f8-d76a-4372-a024-98972d9080c1): CreateContainerError: failed to generate container "de9df36a5fd9248cc1e8d1838ed8c9620cbd5f7dce70e37638726874a86d33a8" spec: failed to generate spec: path "/sys/fs/" is mounted on "/sys" but it is not a shared mount
E0810 13:48:47.613499    2105 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CreateContainerError: \"failed to generate container \\\"de9df36a5fd9248cc1e8d1838ed8c9620cbd5f7dce70e37638726874a86d33a8\\\" spec: failed to generate spec: path \\\"/sys/fs/\\\" is mounted on \\\"/sys\\\" but it is not a shared mount\"" pod="kube-system/calico-node-6q2sp" podUID=598c51f8-d76a-4372-a024-98972d9080c1
E0810 13:48:59.597003    2105 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.10.53 192.168.10.54 2001:67c:1104:fc00::5353"
E0810 13:48:59.597210    2105 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.10.53 192.168.10.54 2001:67c:1104:fc00::5353"
E0810 13:48:59.607846    2105 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to generate container \"4adab800ca62f568deef1e5ca43cf84bb9c1531c2184203bee507394df8aa2de\" spec: failed to generate spec: path \"/sys/fs/\" is mounted on \"/sys\" but it is not a shared mount" podSandboxID="4ed3f7ae9368eec9224d499d28dd1e14ba93838cdc0075b6dbb3bcfb7455736f"
E0810 13:48:59.608470    2105 kuberuntime_manager.go:864] container &Container{Name:calico-node,Image:docker.io/calico/node:v3.19.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:calico_backend,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:CLUSTER_TYPE,Value:k8s,bgp,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP6,Value:autodetect,ValueFrom:nil,},EnvVar{Name:CALICO_IPV4POOL_IPIP,Value:Never,ValueFrom:nil,},EnvVar{Name:CALICO_IPV4POOL_VXLAN,Value:Never,ValueFrom:nil,},EnvVar{Name:FELIX_IPINIPMTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:FELIX_VXLANMTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:FELIX_WIREGUARDMTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:sysfs,ReadOnly:false,MountPath:/sys/fs/,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:cni-log-dir,ReadOnly:true,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ngwqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/calico-node -felix-live -bird-live],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/calico-node -felix-ready 
-bird-ready],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod calico-node-6q2sp_kube-system(598c51f8-d76a-4372-a024-98972d9080c1): CreateContainerError: failed to generate container "4adab800ca62f568deef1e5ca43cf84bb9c1531c2184203bee507394df8aa2de" spec: failed to generate spec: path "/sys/fs/" is mounted on "/sys" but it is not a shared mount
E0810 13:48:59.608819    2105 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CreateContainerError: \"failed to generate container \\\"4adab800ca62f568deef1e5ca43cf84bb9c1531c2184203bee507394df8aa2de\\\" spec: failed to generate spec: path \\\"/sys/fs/\\\" is mounted on \\\"/sys\\\" but it is not a shared mount\"" pod="kube-system/calico-node-6q2sp" podUID=598c51f8-d76a-4372-a024-98972d9080c1
E0810 13:48:59.612109    2105 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to generate container \"94437b70d1d5ca4b8648e56657584822fc8d697015aa4b634af373df48541b1a\" spec: failed to generate spec: path \"/\" is mounted on \"/\" but it is not a shared or slave mount" podSandboxID="d2351d317fbc26ac20d5b76a28d929cf94170255cc5fbf04936bcf3eb31b058c"
E0810 13:48:59.612706    2105 kuberuntime_manager.go:864] container &Container{Name:node-exporter,Image:quay.io/prometheus/node-exporter:v1.2.0,Command:[],Args:[--path.procfs=/host/proc --path.sysfs=/host/sys --path.rootfs=/host/root --web.listen-address=$(HOST_IP):9100 --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/) --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9100,ContainerPort:9100,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:0.0.0.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proc,ReadOnly:true,MountPath:/host/proc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:sys,ReadOnly:true,MountPath:/host/sys,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:root,ReadOnly:true,MountPath:/host/root,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9100 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9100 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod node-exporter-j8xh6_monitoring(9e7a705a-3cd8-4c3d-a163-2cf361fde865): CreateContainerError: failed to generate container "94437b70d1d5ca4b8648e56657584822fc8d697015aa4b634af373df48541b1a" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
E0810 13:48:59.612941    2105 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-exporter\" with CreateContainerError: \"failed to generate container \\\"94437b70d1d5ca4b8648e56657584822fc8d697015aa4b634af373df48541b1a\\\" spec: failed to generate spec: path \\\"/\\\" is mounted on \\\"/\\\" but it is not a shared or slave mount\"" pod="monitoring/node-exporter-j8xh6" podUID=9e7a705a-3cd8-4c3d-a163-2cf361fde865
E0810 13:49:10.598380    2105 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.10.53 192.168.10.54 2001:67c:1104:fc00::5353"
E0810 13:49:10.598413    2105 dns.go:136] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.10.53 192.168.10.54 2001:67c:1104:fc00::5353"
E0810 13:49:10.614966    2105 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to generate container \"6e156c59ea38f53169a29ea7452f4c2412a2390314688d793cf057c62d102497\" spec: failed to generate spec: path \"/\" is mounted on \"/\" but it is not a shared or slave mount" podSandboxID="d2351d317fbc26ac20d5b76a28d929cf94170255cc5fbf04936bcf3eb31b058c"
E0810 13:49:10.615511    2105 kuberuntime_manager.go:864] container &Container{Name:node-exporter,Image:quay.io/prometheus/node-exporter:v1.2.0,Command:[],Args:[--path.procfs=/host/proc --path.sysfs=/host/sys --path.rootfs=/host/root --web.listen-address=$(HOST_IP):9100 --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/) --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9100,ContainerPort:9100,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:0.0.0.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proc,ReadOnly:true,MountPath:/host/proc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:sys,ReadOnly:true,MountPath:/host/sys,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:root,ReadOnly:true,MountPath:/host/root,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9100 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9100 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod node-exporter-j8xh6_monitoring(9e7a705a-3cd8-4c3d-a163-2cf361fde865): CreateContainerError: failed to generate container "6e156c59ea38f53169a29ea7452f4c2412a2390314688d793cf057c62d102497" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
samip5 changed the title from "Unable to create pods on k3s-agent running on Gentoo (openrc)" to "Unable to create pods on k3s-agent running on Gentoo (openrc, arm64)" on Aug 10, 2021
brandond (Contributor) commented Aug 10, 2021

openshift/origin#11314 (comment)

samip5 commented Aug 10, 2021

openshift/origin#11314 (comment)

How is that fix supposed to persist across reboots? Is this an upstream issue somewhere on Gentoo's side?
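
For context, the workaround usually given in threads like the linked one is to mark the root mount shared before the agent starts; a minimal sketch (the k3s-agent service name is an assumption based on the install script's default):

    # One-shot workaround; it does not survive a reboot
    mount --make-rshared /
    rc-service k3s-agent restart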

brandond (Contributor) commented

It appears to be an openrc issue?
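
Unlike systemd, OpenRC does not remount / as shared at boot, which matches the errors above. One way to make the workaround persist on OpenRC, sketched here with an assumed script path, is a local.d boot script (local.d scripts are run by the "local" service):

    #!/bin/sh
    # /etc/local.d/shared-mounts.start -- hypothetical boot script;
    # make it executable (chmod +x) and enable the runner once with:
    #   rc-update add local default
    # Recursively mark / shared so kubelet mount propagation works.
    mount --make-rshared /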

stale bot commented Feb 6, 2022

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

stale bot added the status/stale label Feb 6, 2022
samip5 closed this as completed Feb 6, 2022