kubernetes v1.14.9: watch of *v1.ConfigMap ended with: too old resource version #85723

Closed
weathery opened this issue Nov 28, 2019 · 5 comments
Labels
kind/support Categorizes issue or PR as a support question. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments


weathery commented Nov 28, 2019

What happened:
After upgrading Kubernetes from v1.14.8 to v1.14.9, many log entries like the following appeared in /var/log/messages:

Nov 28 18:11:01 vm01-k8s-01 kubelet: W1128 18:11:01.940609     621 reflector.go:289] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 12891296 (12892348)
Nov 28 18:14:20 vm01-k8s-01 kubelet: W1128 18:14:20.969003     621 reflector.go:289] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: too old resource version: 12892331 (12893721)
Nov 28 18:14:30 vm01-k8s-01 kubelet: W1128 18:14:30.950505     621 reflector.go:289] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: too old resource version: 12892151 (12893753)

Nov 28 19:27:37 vm01-k8s-02 kubelet: W1128 19:27:37.230992     650 reflector.go:289] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: too old resource version: 12901939 (12902627)
Nov 28 19:35:56 vm01-k8s-02 kubelet: W1128 19:35:56.250097     650 reflector.go:289] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: too old resource version: 12903317 (12903610)
Nov 28 19:41:17 vm01-k8s-02 kubelet: W1128 19:41:17.239832     650 reflector.go:289] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: too old resource version: 12904253 (12904268)

Nov 28 19:09:49 vm01-k8s-03 kubelet: W1128 19:09:49.508000     646 reflector.go:289] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: too old resource version: 12900037 (12900460)
Nov 28 19:13:02 vm01-k8s-03 kubelet: W1128 19:13:02.649380     646 reflector.go:289] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: too old resource version: 12900692 (12900858)
Nov 28 19:24:34 vm01-k8s-03 kubelet: W1128 19:24:34.517046     646 reflector.go:289] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: too old resource version: 12902119 (12902249)

What you expected to happen:
This issue did not occur in the same cluster on Kubernetes v1.14.8, with all other components at the same versions.

How to reproduce it (as minimally and precisely as possible):
Kubernetes v1.14.9 cluster with 3 masters and 2 nodes + Docker v18.09.9 + Calico v3.10.1 + IPVS + Keepalived v1.3.5

Anything else we need to know?:

# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.9
k8s.gcr.io/kube-controller-manager:v1.14.9
k8s.gcr.io/kube-scheduler:v1.14.9
k8s.gcr.io/kube-proxy:v1.14.9
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# kubectl get nodes -A -o wide
NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
vm01-k8s-01       Ready    master   81d   v1.14.9   192.168.1.105   <none>        CentOS Linux 7 (Core)   3.10.0-1062.4.3.el7.x86_64   docker://18.9.9
vm01-k8s-02       Ready    master   80d   v1.14.9   192.168.1.106   <none>        CentOS Linux 7 (Core)   3.10.0-1062.4.3.el7.x86_64   docker://18.9.9
vm01-k8s-03       Ready    master   80d   v1.14.9   192.168.1.107   <none>        CentOS Linux 7 (Core)   3.10.0-1062.4.3.el7.x86_64   docker://18.9.9
vm02-k8s-node-1   Ready    <none>   68d   v1.14.9   192.168.1.120   <none>        CentOS Linux 7 (Core)   3.10.0-1062.4.3.el7.x86_64   docker://18.9.9
vm02-k8s-node-2   Ready    <none>   68d   v1.14.9   192.168.1.121   <none>        CentOS Linux 7 (Core)   3.10.0-1062.4.3.el7.x86_64   docker://18.9.9
# kubectl get pods -A 
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                prometheus-operator-548c6dc45c-mnpb7         1/1     Running   2          2d7h
ingress-nginx          nginx-ingress-controller-ffc5cfd7-4prm2      1/1     Running   1          2d7h
kube-system            calico-kube-controllers-58c76c8fbb-k95gx     1/1     Running   0          117m
kube-system            calico-node-98l8h                            1/1     Running   0          117m
kube-system            calico-node-bd7gp                            1/1     Running   1          117m
kube-system            calico-node-c2mng                            1/1     Running   0          117m
kube-system            calico-node-d6mpw                            1/1     Running   1          117m
kube-system            calico-node-tmb54                            1/1     Running   1          117m
kube-system            calicoctl                                    1/1     Running   0          114m
kube-system            coredns-7b7df549dd-cqtff                     1/1     Running   16         62d
kube-system            coredns-7b7df549dd-n48kd                     1/1     Running   20         69d
kube-system            etcd-vm01-k8s-01                             1/1     Running   3          69d
kube-system            etcd-vm01-k8s-02                             1/1     Running   3          69d
kube-system            etcd-vm01-k8s-03                             1/1     Running   20         69d
kube-system            kube-apiserver-vm01-k8s-01                   1/1     Running   1          5d23h
kube-system            kube-apiserver-vm01-k8s-02                   1/1     Running   1          3d23h
kube-system            kube-apiserver-vm01-k8s-03                   1/1     Running   1          3d23h
kube-system            kube-controller-manager-vm01-k8s-01          1/1     Running   4          5d23h
kube-system            kube-controller-manager-vm01-k8s-02          1/1     Running   1          3d23h
kube-system            kube-controller-manager-vm01-k8s-03          1/1     Running   3          3d23h
kube-system            kube-proxy-49ktf                             1/1     Running   1          5d23h
kube-system            kube-proxy-5f2rx                             1/1     Running   2          5d23h
kube-system            kube-proxy-qnrgj                             1/1     Running   2          5d23h
kube-system            kube-proxy-thfxk                             1/1     Running   2          5d23h
kube-system            kube-proxy-vnm69                             1/1     Running   1          5d23h
kube-system            kube-scheduler-vm01-k8s-01                   1/1     Running   4          5d23h
kube-system            kube-scheduler-vm01-k8s-02                   1/1     Running   3          3d23h
kube-system            kube-scheduler-vm01-k8s-03                   1/1     Running   2          3d23h
kubernetes-dashboard   dashboard-metrics-scraper-5dd8ccf5f8-qv7sq   1/1     Running   1          2d7h
kubernetes-dashboard   kubernetes-dashboard-74867ffb65-srwc2        1/1     Running   1          2d7h
# kubectl get configmaps -A
NAMESPACE              NAME                                 DATA   AGE
ingress-nginx          ingress-controller-leader-nginx      0      2d17h
ingress-nginx          nginx-configuration                  0      3d1h
ingress-nginx          tcp-services                         0      3d1h
ingress-nginx          udp-services                         0      3d1h
kube-public            cluster-info                         1      81d
kube-system            calico-config                        4      118m
kube-system            coredns                              1      81d
kube-system            extension-apiserver-authentication   6      81d
kube-system            kube-proxy                           2      81d
kube-system            kubeadm-config                       2      81d
kube-system            kubelet-config-1.14                  1      81d
kubernetes-dashboard   kubernetes-dashboard-settings        0      2d23h

Environment:

  • Kubernetes version (use kubectl version): v1.14.9
  • Cloud provider or hardware configuration: Intel(R) Xeon(R) CPU E5-2609 v4 @ 1.70GHz * 2vCPU, 4GB Memory, SAS HD 200GB
  • OS (e.g: cat /etc/os-release): CentOS Linux release 7.7.1908 (Core)
  • Kernel (e.g. uname -a): 3.10.0-1062.4.3.el7.x86_64
  • Install tools: kubeadm v1.14.9
  • Network plugin and version (if this is a network-related bug): Calico v3.10.1
  • Others:
# docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
# rpm -qa|grep containerd
containerd.io-1.2.10-3.2.el7.x86_64
# ipvsadm -v
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
# rpm -qa|grep keepalived
keepalived-1.3.5-16.el7.x86_64
# ipvsadm -Ln 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.1.105:6443           Masq    1      3          0         
  -> 192.168.1.106:6443           Masq    1      0          0         
  -> 192.168.1.107:6443           Masq    1      1          0         
TCP  10.96.0.10:53 rr
  -> 10.244.171.1:53              Masq    1      0          0         
  -> 10.244.171.2:53              Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.171.1:9153            Masq    1      0          0         
  -> 10.244.171.2:9153            Masq    1      0          0         
TCP  10.107.171.114:443 rr
  -> 192.168.100.65:8443          Masq    1      0          0         
TCP  10.111.79.235:8000 rr
  -> 192.168.3.65:8000            Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.171.1:53              Masq    1      0          0         
  -> 10.244.171.2:53              Masq    1      0          0         
weathery added the kind/bug Categorizes issue or PR as related to a bug. label Nov 28, 2019
k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 28, 2019
@chenk008 (Contributor)

This is a normal log message. It may appear when the informer restarts a watch.

liggitt (Member) commented Nov 28, 2019

Is this persistent/frequent or intermittent? This is a normal message when watches expire and need to be restarted.

liggitt added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed kind/bug Categorizes issue or PR as related to a bug. labels Nov 28, 2019
k8s-ci-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 28, 2019
liggitt (Member) commented Nov 28, 2019

Based on the time stamps, this looks intermittent, which is an expected message.
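
For reference, the warning comes from the client-go reflector the kubelet uses to track ConfigMaps referenced by its pods: it lists the resource, watches from the returned resourceVersion, and when the apiserver/etcd no longer retains that version the watch ends with "too old resource version" and the reflector simply relists and resumes. Below is a minimal sketch of that list/watch/relist loop, not the kubelet's actual code; it assumes a client-go release contemporary with v1.14 (List/Watch take no context argument) and uses an illustrative kubeconfig path.

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; any kubeconfig with read access works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cms := kubernetes.NewForConfigOrDie(cfg).CoreV1().ConfigMaps("kube-system")

	// LIST establishes the resourceVersion the WATCH resumes from.
	list, err := cms.List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	rv := list.ResourceVersion

	for {
		w, err := cms.Watch(metav1.ListOptions{ResourceVersion: rv})
		if err != nil {
			if apierrors.IsGone(err) {
				// The server has compacted past rv: this is the "too old
				// resource version" condition; recover by relisting and
				// resuming from the fresh resourceVersion.
				fmt.Println("watch expired; relisting")
				if list, err = cms.List(metav1.ListOptions{}); err != nil {
					panic(err)
				}
				rv = list.ResourceVersion
				continue
			}
			panic(err)
		}
		for ev := range w.ResultChan() {
			if ev.Type == watch.Error {
				// Expiry can also arrive as an Error event on an open watch;
				// the reflector handles it the same way (relist).
				break
			}
			// Track the latest resourceVersion so a clean reconnect resumes
			// where the previous watch left off.
			if m, err := meta.Accessor(ev.Object); err == nil {
				rv = m.GetResourceVersion()
			}
			fmt.Printf("%s %T\n", ev.Type, ev.Object)
		}
		w.Stop()
		// Channel closed (server-side watch timeouts are normal); loop and
		// re-watch from rv. Occasional relists after compaction are expected,
		// which is why the kubelet logs this only as a warning.
	}
}
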

liggitt closed this as completed Nov 28, 2019
weathery (Author) commented Nov 29, 2019

@liggitt @chenk008
Thanks! The issue was resolved after I rebooted the operating system on all of the master nodes.

@leon0306

@weathery Did you solve this problem?
