
Labels are not synced when using --target-namespace, --fake-nodes=false and --sync-all-nodes=false #214

Closed
olljanat opened this issue Nov 30, 2021 · 4 comments


@olljanat
Contributor

It looks like the combination of --target-namespace, --fake-nodes=false and --sync-all-nodes=true works fine, but when I change to --sync-all-nodes=false, those nodes get created without any labels at all.
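
For reference, this is roughly the flag combination in question as it would appear in a helm values file (a hypothetical sketch; syncer.extraArgs is assumed to be how these flags are passed in this setup, and the namespace name is a placeholder):

syncer:
  extraArgs:
    # --target-namespace tells the syncer which host namespace to sync into
    - --target-namespace=<host-namespace>
    - --fake-nodes=false
    - --sync-all-nodes=false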

@FabianKramm
Member

@olljanat thanks for creating this issue! It seems like there is a bug where vcluster cannot find the current pod because of the --target-namespace option, which results in the node controller failing over and over and never updating the labels. We'll fix this in the next version.
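
(One way to watch the failing reconcile loop described above is in the syncer container logs; the workload kind, names, and namespace below are assumptions for a typical install:)

$ kubectl -n <host-namespace> logs sts/<vcluster-name> -c syncer | grep -i node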

@olljanat
Contributor Author

olljanat commented Dec 1, 2021

Labels look to be there now on v0.5.0-alpha.4, but I just noticed that a lot of other details are missing too.

Here are some comparisons with and without --target-namespace:

Wide view (first with --target-namespace, then without):

$ kubectl get nodes -o wide
NAME            STATUS    ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE    KERNEL-VERSION   CONTAINER-RUNTIME
k8s-test        Unknown   <none>   95s             <none>        <none>        <unknown>   <unknown>        <unknown>

$ kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE   VERSION          INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-test        Ready    <none>   3s    v1.22.3+rke2r1   10.3.154.243   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.5.7-k3s2
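
(The YAML files used in the diff below were captured along these lines, one per vcluster; the file names match the diff and the node name is taken from it:)

$ kubectl get node k8s-test-1 -o yaml > node-ok.yaml       # vcluster without --target-namespace
$ kubectl get node k8s-test-1 -o yaml > node_not_ok.yaml   # vcluster with --target-namespace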

YAML diff (the image list was longer, but otherwise this is the full diff):

$ diff -u node-ok.yaml node_not_ok.yaml | more
--- node-ok.yaml        2021-12-01 15:10:51.733590463 +0000
+++ node_not_ok.yaml    2021-12-01 15:09:47.473682924 +0000
@@ -12,7 +12,7 @@
     rke2.io/node-config-hash: S5ZCQYHBQFTCWE257O3XJG2KSNASLRR2YUPTB2B5JWIIKSQYH5TA====
     rke2.io/node-env: '{}'
     volumes.kubernetes.io/controller-managed-attach-detach: "true"
-  creationTimestamp: "2021-12-01T15:08:43Z"
+  creationTimestamp: "2021-12-01T15:05:31Z"
   labels:
     beta.kubernetes.io/arch: amd64
     beta.kubernetes.io/instance-type: rke2
@@ -26,185 +26,24 @@
   name: k8s-test-1
-  resourceVersion: "354"
-  uid: 8a13a066-39cc-4308-aa46-c742cb1efa27
+  resourceVersion: "345"
+  uid: 0f869e7d-4584-4a4d-8300-d6a6926901e9
 spec:
-  podCIDR: 10.6.2.0/24
-  podCIDRs:
-  - 10.6.2.0/24
-  providerID: rke2://k8s-test-1
+  taints:
+  - effect: NoSchedule
+    key: node.kubernetes.io/not-ready
 status:
-  addresses:
-  - address: 10.3.154.243
-    type: InternalIP
-  allocatable:
-    cpu: 4120m
-    ephemeral-storage: "49255941901"
-    hugepages-1Gi: "0"
-    hugepages-2Mi: "0"
-    memory: 16159560Ki
-    pods: "110"
-  capacity:
-    cpu: "6"
-    ephemeral-storage: 50633164Ki
-    hugepages-1Gi: "0"
-    hugepages-2Mi: "0"
-    memory: 16343880Ki
-    pods: "110"
-  conditions:
-  - lastHeartbeatTime: "2021-11-30T14:17:24Z"
-    lastTransitionTime: "2021-11-30T14:17:24Z"
-    message: Calico is running on this node
-    reason: CalicoIsUp
-    status: "False"
-    type: NetworkUnavailable
-  - lastHeartbeatTime: "2021-12-01T15:08:12Z"
-    lastTransitionTime: "2021-11-30T14:17:08Z"
-    message: kubelet has sufficient memory available
-    reason: KubeletHasSufficientMemory
-    status: "False"
-    type: MemoryPressure
-  - lastHeartbeatTime: "2021-12-01T15:08:12Z"
-    lastTransitionTime: "2021-11-30T14:17:08Z"
-    message: kubelet has no disk pressure
-    reason: KubeletHasNoDiskPressure
-    status: "False"
-    type: DiskPressure
-  - lastHeartbeatTime: "2021-12-01T15:08:12Z"
-    lastTransitionTime: "2021-11-30T14:17:08Z"
-    message: kubelet has sufficient PID available
-    reason: KubeletHasSufficientPID
-    status: "False"
-    type: PIDPressure
-  - lastHeartbeatTime: "2021-12-01T15:08:12Z"
-    lastTransitionTime: "2021-11-30T14:17:29Z"
-    message: kubelet is posting ready status. AppArmor enabled
-    reason: KubeletReady
-    status: "True"
-    type: Ready
   daemonEndpoints:
     kubeletEndpoint:
-      Port: 10250
-  images:
-  - names:
-    - docker.io/rancher/nginx-ingress-controller@sha256:a2b197240b16b72ad784f5ec98b10d9d0b9d94ea6b60f207bae556fa475a9f03
-    - docker.io/rancher/nginx-ingress-controller:nginx-1.0.2-hardened1
-    sizeBytes: 228330259
+      Port: 0
   nodeInfo:
-    architecture: amd64
-    bootID: e5270bf2-3738-4e8c-9321-f4239469f7e7
-    containerRuntimeVersion: containerd://1.5.7-k3s2
-    kernelVersion: 5.4.0-88-generic
-    kubeProxyVersion: v1.22.3+rke2r1
-    kubeletVersion: v1.22.3+rke2r1
-    machineID: 99a0dff27e7b4ce182acef3e441a7aa7
-    operatingSystem: linux
-    osImage: Ubuntu 20.04.3 LTS
-    systemUUID: 99a0dff2-7e7b-4ce1-82ac-ef3e441a7aa7
-  volumesAttached:
-  - devicePath: ""
-    name: kubernetes.io/csi/driver.longhorn.io^pvc-193b53c4-58cb-438d-b8ad-3e833d410798
-  - devicePath: ""
-    name: kubernetes.io/csi/driver.longhorn.io^pvc-9b877180-7fb5-4fa1-a71a-5d3bc251036a
-  volumesInUse:
-  - kubernetes.io/csi/driver.longhorn.io^pvc-193b53c4-58cb-438d-b8ad-3e833d410798
-  - kubernetes.io/csi/driver.longhorn.io^pvc-9b877180-7fb5-4fa1-a71a-5d3bc251036a
+    architecture: ""
+    bootID: ""
+    containerRuntimeVersion: ""
+    kernelVersion: ""
+    kubeProxyVersion: ""
+    kubeletVersion: ""
+    machineID: ""
+    operatingSystem: ""
+    osImage: ""
+    systemUUID: ""
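
A quicker way to spot the empty nodeInfo than a full diff is a jsonpath query (node name as in the diff above):

$ kubectl get node k8s-test-1 -o jsonpath='{.status.nodeInfo.kubeletVersion}'

On the broken node this prints an empty string, matching the diff.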

@FabianKramm
Member

@olljanat yes, you are correct, but this should be fixed with v0.5.0-alpha.5.

@olljanat
Contributor Author

olljanat commented Dec 3, 2021

Confirmed that it works correctly with v0.5.0-alpha.5. Thanks 👍

@olljanat olljanat closed this as completed Dec 3, 2021