frequent leader changes for components: kube-controller-manager, kube-scheduler, etcd #2295

Closed
kombiwagon opened this issue Oct 23, 2020 · 12 comments

Comments

@kombiwagon

kombiwagon commented Oct 23, 2020

RKE version: v1.1.3

Docker version: (docker version, docker info preferred) 19.03.11

Operating system and kernel: (cat /etc/os-release, uname -r preferred) CentOS Linux 7, 5.7.4-1.el7.elrepo.x86_64

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) Bare-metal

cluster.yml file:

nodes:
- address: master1
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: myuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: master2
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: myuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: master3
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: myuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: worker1
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: myuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: worker2
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: myuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: worker3
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: myuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args:
      default-not-ready-toleration-seconds: 30
      default-unreachable-toleration-seconds: 30
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: calico
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
authentication:
  strategy: x509
  sans:
    - "IP"
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.58
  nginx_proxy: rancher/rke-tools:v0.1.58
  cert_downloader: rancher/rke-tools:v0.1.58
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.58
  kubedns: rancher/dns-kube-dns:1.15.2
  dnsmasq: rancher/dns-dnsmasq-nanny:1.15.2
  kubedns_sidecar: rancher/dns-sidecar:1.15.2
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.9
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/dns-node-cache:1.15.7
  kubernetes: rancher/hyperkube:v1.18.3-rancher2
  flannel: rancher/coreos-flannel:v0.12.0
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/calico-node:v3.13.4
  calico_cni: rancher/calico-cni:v3.13.4
  calico_controllers: rancher/calico-kube-controllers:v3.13.4
  calico_ctl: rancher/calico-ctl:v3.13.4
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  canal_node: rancher/calico-node:v3.13.4
  canal_cni: rancher/calico-cni:v3.13.4
  canal_flannel: rancher/coreos-flannel:v0.12.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  weave_node: weaveworks/weave-kube:2.6.4
  weave_cni: weaveworks/weave-npc:2.6.4
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.32.0-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.4
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
restore:
  restore: false
  snapshot_name: ""
dns: null

Steps to Reproduce:

Results:
I have noticed very frequent 'leader election' changes for the components kube-controller-manager, kube-scheduler, and etcd.
It happens as often as every few minutes, sometimes every few hours. When a component loses leadership, it is restarted afterwards. Is this normal?
In most cases this doesn't impact the cluster status, but sometimes it does, because switching to a new leader occasionally takes too long. After 100 days I have over 1000 leaderTransitions ...

Name:         kube-controller-manager
Namespace:    kube-system
Labels:       <none>
Annotations:  control-plane.alpha.kubernetes.io/leader:
{"holderIdentity":"master1_XXX","leaseDurationSeconds":15,"acquireTime":"XXX","renewTime":"XXX","**leaderTransitions":1148**}

Name:         kube-scheduler
Namespace:    kube-system
Labels:       <none>
Annotations:  control-plane.alpha.kubernetes.io/leader:
{"holderIdentity":"master1_XXX","leaseDurationSeconds":15,"acquireTime":"XXX","renewTime":"XXX",**"leaderTransitions":1114**}

I have also noticed a repeated error inside the calico-kube-controllers pod.
Perhaps it has something to do with the problem:

[ERROR][1]main.go 234: Failed to reach apiserver error=<nil>
[ERROR][1]client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
[ERROR][1] main.go 203: Failed to verify datastore error=Get https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded

What could be causing this?
Thanks in advance.

@kombiwagon kombiwagon changed the title very frequent 'leader election' changes for components: kube-controller-manager, kube-scheduler, etcd frequent leader changes for components: kube-controller-manager, kube-scheduler, etcd Oct 23, 2020
@superseb
Contributor

Please share logs from the following containers:

etcd
kube-apiserver
kube-controller-manager
kube-scheduler

And some more info on the setup:

  • How many nodes with what roles
  • What are the specs per node role (CPU/memory/disk type and/or iops/network)
  • How many resources are in the cluster

@kombiwagon
Author

kombiwagon commented Oct 26, 2020

NAME      STATUS   ROLES               AGE    VERSION
master1   Ready    controlplane,etcd   124d   v1.18.3
master2   Ready    controlplane,etcd   124d   v1.18.3
master3   Ready    controlplane,etcd   124d   v1.18.3
worker1   Ready    worker              124d   v1.18.3
worker2   Ready    worker              124d   v1.18.3
worker3   Ready    worker              124d   v1.18.3

masters: CPU 4xCORES , 8GB RAM, HDD
workers: CPU 8xCORES , 8GB RAM, HDD

In the output of top, wa (I/O wait) is sometimes high across all cores (wa > 1).
Maybe I should send more info; let me know.
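
To put a number on the I/O wait while it is happening, something like this could help (a sketch; requires the sysstat package):

# extended per-device statistics every 5 seconds; watch await, w_await and %util for the etcd disk
iostat -x 5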

kube-scheduler:

I1026 05:25:23.053830       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I1026 06:56:44.056955       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E1026 08:34:37.404893       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1026 08:34:37.405205       1 leaderelection.go:277] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition
F1026 08:34:37.405307       1 server.go:244] leaderelection lost

kube-controller-manager:

I1026 08:34:19.396577       1 pv_controller.go:1285] isVolumeReleased[pvc-942c3781-4d64-4400-bef6-17245b952501]: volume is released
I1026 08:34:19.396578       1 pv_controller.go:1285] isVolumeReleased[pvc-ebbf754f-b75d-4db0-a36a-c9a841880978]: volume is released
I1026 08:34:34.395715       1 pv_controller.go:1285] isVolumeReleased[pvc-539338cb-c9d8-4a99-be6a-cd833c0d56ff]: volume is released
I1026 08:34:34.590594       1 pv_controller.go:1285] isVolumeReleased[pvc-942c3781-4d64-4400-bef6-17245b952501]: volume is released
E1026 08:34:38.631183       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded
I1026 08:34:38.631339       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
F1026 08:34:38.631871       1 controllermanager.go:279] leaderelection lost

etcd:

2020-10-26 08:32:29.944142 W | etcdserver: request "header:<ID:3269741670948984653 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/default/cluster.local-nfs-client-XXX-nfs-client-provisioner\" mod_revision:48872890 > success:<request_put:<key:\"/registry/services/endpoints/default/cluster.local-nfs-client-XXX-nfs-client-provisioner\" value_size:593 >> failure:<request_range:<key:\"/registry/services/endpoints/default/cluster.local-nfs-client-XXX-nfs-client-provisioner\" > >>" with result "size:20" took too long (118.060302ms) to execute
2020-10-26 08:32:29.944688 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:7" took too long (196.841216ms) to execute
2020-10-26 08:32:36.222225 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns-autoscaler\" " with result "range_response_count:1 size:399" took too long (176.572399ms) to execute
2020-10-26 08:32:52.424400 W | etcdserver: request "header:<ID:3269741670948984653 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/XXX.XX.XX.XX\" mod_revision:48872950 > success:<request_put:<key:\"/registry/masterleases/XXX.XX.XX.XX\" value_size:67 lease:8376260882345452251 >> failure:<>>" with result "size:20" took too long (200.741636ms) to execute
2020-10-26 08:32:52.424966 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:3 size:402" took too long (188.183026ms) to execute
2020-10-26 08:33:03.648248 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:615" took too long (153.260316ms) to execute
2020-10-26 08:33:13.931031 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/clusterinformations/default\" " with result "range_response_count:1 size:922" took too long (216.21219ms) to execute
2020-10-26 08:33:13.932139 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:10" took too long (193.240518ms) to execute
2020-10-26 08:33:18.132007 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:556" took too long (403.130734ms) to execute
2020-10-26 08:33:18.132592 W | etcdserver: read-only range request "key:\"/registry/namespaces\" range_end:\"/registry/namespacet\" count_only:true " with result "range_response_count:0 size:9" took too long (316.85277ms) to execute
2020-10-26 08:33:18.132804 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:588" took too long (215.630134ms) to execute
2020-10-26 08:33:23.870670 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/templates\" range_end:\"/registry/management.cattle.io/templatet\" count_only:true " with result "range_response_count:0 size:7" took too long (352.133478ms) to execute
2020-10-26 08:33:23.871518 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:556" took too long (314.69516ms) to execute
2020-10-26 08:33:24.694763 I | mvcc: store.index: compact 48871809
2020-10-26 08:33:25.142610 I | mvcc: finished scheduled compaction at 48871809 (took 428.087748ms)
2020-10-26 08:33:38.751330 W | etcdserver: request "header:<ID:17940217457181847986 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-system/cattle-controllers\" mod_revision:48873192 > success:<request_put:<key:\"/registry/configmaps/kube-system/cattle-controllers\" value_size:475 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/cattle-controllers\" > >>" with result "size:20" took too long (239.550314ms) to execute
2020-10-26 08:33:39.099631 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/groups\" range_end:\"/registry/management.cattle.io/groupt\" count_only:true " with result "range_response_count:0 size:7" took too long (130.491598ms) to execute
2020-10-26 08:33:39.661801 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:556" took too long (220.133551ms) to execute
2020-10-26 08:34:18.288440 W | etcdserver: request "header:<ID:8376260882345452816 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:48873347 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:436 >> failure:<>>" with result "size:20" took too long (217.200619ms) to execute
2020-10-26 08:34:23.748768 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:615" took too long (142.957319ms) to execute
2020-10-26 08:34:23.749828 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/nodes\" range_end:\"/registry/management.cattle.io/nodet\" count_only:true " with result "range_response_count:0 size:9" took too long (125.358979ms) to execute
2020-10-26 08:34:26.652855 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:7432" took too long (246.778793ms) to execute
2020-10-26 08:34:26.974139 W | etcdserver: read-only range request "key:\"/registry/configmaps/ingress-nginx/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:607" took too long (188.1939ms) to execute
2020-10-26 08:34:34.588108 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/projectroletemplatebindings\" range_end:\"/registry/management.cattle.io/projectroletemplatebindingt\" count_only:true " with result "range_response_count:0 size:9" took too long (187.49541ms) to execute
2020-10-26 08:34:34.588499 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes/pvc-942c3781-4d64-4400-bef6-17245b952501\" " with result "range_response_count:1 size:1205" took too long (196.979894ms) to execute
2020-10-26 08:34:45.594364 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:522" took too long (234.56523ms) to execute
2020-10-26 08:34:45.595338 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (222.205934ms) to execute
2020-10-26 08:35:11.205156 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogtemplates/cattle-global-data/helm-sonarqube\" " with result "range_response_count:1 size:26282" took too long (170.477234ms) to execute
2020-10-26 08:35:11.205684 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (159.128929ms) to execute
2020-10-26 08:35:13.482776 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogtemplates/cattle-global-data/helm-spinnaker\" " with result "range_response_count:1 size:29373" took too long (198.051166ms) to execute
2020-10-26 08:35:13.485024 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/clusterscans\" range_end:\"/registry/management.cattle.io/clusterscant\" count_only:true " with result "range_response_count:0 size:7" took too long (188.424376ms) to execute
2020-10-26 08:35:23.435212 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogtemplates/cattle-global-data/ibm-charts-ibm-calico-bgp-peer\" " with result "range_response_count:1 size:1911" took too long (100.244792ms) to execute
2020-10-26 08:35:23.888928 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:9" took too long (210.280973ms) to execute
2020-10-26 08:35:23.889847 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogtemplates/cattle-global-data/helm-sumologic-fluentd\" " with result "range_response_count:1 size:10133" took too long (104.984845ms) to execute
2020-10-26 08:35:30.589252 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogtemplates/cattle-global-data/helm-uchiwa\" " with result "range_response_count:1 size:3949" took too long (204.322854ms) to execute
2020-10-26 08:35:34.338552 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogtemplates/cattle-global-data/helm-hubot\" " with result "range_response_count:1 size:2248" took too long (199.69475ms) to execute
2020-10-26 08:35:54.770382 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/cluster.local-nfs-client-XXX-nfs-client-provisioner\" " with result "range_response_count:1 size:708" took too long (420.791566ms) to execute
2020-10-26 08:35:54.770618 W | etcdserver: request "header:<ID:3269741670948986186 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:48873754 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:521 >> failure:<>>" with result "size:20" took too long (307.265437ms) to execute
2020-10-26 08:35:54.771922 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:518" took too long (301.111473ms) to execute
2020-10-26 08:35:57.490809 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:10" took too long (211.326334ms) to execute
2020-10-26 08:36:12.312611 W | etcdserver: request "header:<ID:3269741670948986314 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/master2\" mod_revision:48873796 > success:<request_put:<key:\"/registry/leases/kube-node-lease/master2\" value_size:544 >> failure:<>>" with result "size:20" took too long (207.94316ms) to execute
2020-10-26 08:36:13.427184 W | etcdserver: request "header:<ID:17940217457181849363 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:48873834 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:521 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" > >>" with result "size:20" took too long (126.11913ms) to execute
2020-10-26 08:36:15.033784 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:615" took too long (213.672491ms) to execute
2020-10-26 08:36:19.687739 W | etcdserver: read-only range request "key:\"/registry/configmaps/ingress-nginx/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:607" took too long (172.14346ms) to execute
2020-10-26 08:36:41.667840 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:588" took too long (398.918724ms) to execute

kube-apiserver (output filtered with '| grep error'):

W1026 08:28:49.219395       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W1026 08:31:03.911361       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W1026 08:31:58.675621       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
E1026 08:34:39.673783       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
W1026 08:45:29.780252       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W1026 08:46:51.233510       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W1026 08:48:25.266265       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted

@superseb
Contributor

If etcd is continuously logging 'took too long', it might be disk performance. What kind and type of disk are you using? You can find more information at https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#disks. You can fetch the metrics using:

curl -s -L --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/metrics | grep ^etcd
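
To narrow that output down to the two disk-latency histograms that matter most here, the same command can be filtered further (a sketch; as a rough guide, the 99th percentile of the WAL fsync duration should stay in the single-digit milliseconds on healthy storage):

curl -s -L --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/metrics | grep -E '^etcd_disk_(wal_fsync|backend_commit)_duration_seconds'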

@kombiwagon
Author

kombiwagon commented Oct 27, 2020

What exactly should I pay attention to here?
It's hard for me to say more about the disks (HDD) because these are virtual machines (KVM).

Output from fio test:
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/root/fio_test --size=22m --bs=2300 --name=mytest

mytest: (g=0): rw=write, bs=(R) 2300B-2300B, (W) 2300B-2300B, (T) 2300B-2300B, ioengine=sync, iodepth=1
fio-3.7
Starting 1 process
mytest: Laying out IO file (1 file / 22MiB)
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=85KiB/s][r=0,w=38 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=1): err= 0: pid=12949: Tue Oct 27 10:43:25 2020
write: IOPS=25, BW=58.4KiB/s (59.8kB/s)(21.0MiB/385997msec)
clat (usec): min=11, max=6200, avg=34.99, stdev=90.80
lat (usec): min=12, max=6201, avg=35.82, stdev=90.82
clat percentiles (usec):
| 1.00th=[ 21], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 25],
| 30.00th=[ 27], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 33],
| 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 49], 95.00th=[ 58],
| 99.00th=[ 80], 99.50th=[ 96], 99.90th=[ 190], 99.95th=[ 255],
| 99.99th=[ 5276]
bw ( KiB/s): min= 4, max= 305, per=100.00%, avg=60.47, stdev=41.95, samples=736
iops : min= 1, max= 136, avg=27.11, stdev=18.66, samples=736
lat (usec) : 20=0.92%, 50=90.17%, 100=8.52%, 250=0.34%, 500=0.02%
lat (usec) : 750=0.01%
lat (msec) : 4=0.01%, 10=0.02%
fsync/fdatasync/sync_file_range:
sync (usec): min=1558, max=1095.1k, avg=38443.05, stdev=83013.09
sync percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 11],
| 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 24],
| 70.00th=[ 30], 80.00th=[ 36], 90.00th=[ 49], 95.00th=[ 120],
| 99.00th=[ 485], 99.50th=[ 617], 99.90th=[ 877], 99.95th=[ 969],
| 99.99th=[ 1053]
cpu : usr=0.05%, sys=0.30%, ctx=30909, majf=0, minf=14
IO depths : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10029,0,0 short=10029,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=58.4KiB/s (59.8kB/s), 58.4KiB/s-58.4KiB/s (59.8kB/s-59.8kB/s), io=21.0MiB (23.1MB), run=385997-385997msec

Disk stats (read/write):
sda: ios=976/32560, merge=1/4320, ticks=146144/730703, in_queue=1141934, util=87.02%

ETCD metrics
etcd_cluster_version{cluster_version="3.4"} 1
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.001"} 6.716772e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.002"} 6.716935e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.004"} 6.716968e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.008"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.016"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.032"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.064"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.128"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.256"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="0.512"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="1.024"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="2.048"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="4.096"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="8.192"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket{le="+Inf"} 6.716983e+06
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_sum 8.825191708999952
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_count 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.001"} 6.70612e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.002"} 6.711612e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.004"} 6.714747e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.008"} 6.716707e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.016"} 6.716927e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.032"} 6.716977e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.064"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.128"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.256"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="0.512"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="1.024"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="2.048"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="4.096"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="8.192"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket{le="+Inf"} 6.716983e+06
etcd_debugging_disk_backend_commit_spill_duration_seconds_sum 547.0933000719555
etcd_debugging_disk_backend_commit_spill_duration_seconds_count 6.716983e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.001"} 0
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.002"} 0
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.004"} 16118
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.008"} 4.730628e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.016"} 5.432267e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.032"} 6.127068e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.064"} 6.508673e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.128"} 6.555724e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.256"} 6.636224e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="0.512"} 6.689043e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="1.024"} 6.712747e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="2.048"} 6.716085e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="4.096"} 6.716702e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="8.192"} 6.716885e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket{le="+Inf"} 6.716983e+06
etcd_debugging_disk_backend_commit_write_duration_seconds_sum 128938.07532263851
etcd_debugging_disk_backend_commit_write_duration_seconds_count 6.716983e+06
etcd_debugging_lease_granted_total 668351
etcd_debugging_lease_renewed_total 0
etcd_debugging_lease_revoked_total 668435
etcd_debugging_lease_ttl_total_bucket{le="1"} 0
etcd_debugging_lease_ttl_total_bucket{le="2"} 0
etcd_debugging_lease_ttl_total_bucket{le="4"} 0
etcd_debugging_lease_ttl_total_bucket{le="8"} 0
etcd_debugging_lease_ttl_total_bucket{le="16"} 666623
etcd_debugging_lease_ttl_total_bucket{le="32"} 666623
etcd_debugging_lease_ttl_total_bucket{le="64"} 666623
etcd_debugging_lease_ttl_total_bucket{le="128"} 666623
etcd_debugging_lease_ttl_total_bucket{le="256"} 666623
etcd_debugging_lease_ttl_total_bucket{le="512"} 666623
etcd_debugging_lease_ttl_total_bucket{le="1024"} 666623
etcd_debugging_lease_ttl_total_bucket{le="2048"} 666623
etcd_debugging_lease_ttl_total_bucket{le="4096"} 668351
etcd_debugging_lease_ttl_total_bucket{le="8192"} 668351
etcd_debugging_lease_ttl_total_bucket{le="16384"} 668351
etcd_debugging_lease_ttl_total_bucket{le="32768"} 668351
etcd_debugging_lease_ttl_total_bucket{le="65536"} 668351
etcd_debugging_lease_ttl_total_bucket{le="131072"} 668351
etcd_debugging_lease_ttl_total_bucket{le="262144"} 668351
etcd_debugging_lease_ttl_total_bucket{le="524288"} 668351
etcd_debugging_lease_ttl_total_bucket{le="1.048576e+06"} 668351
etcd_debugging_lease_ttl_total_bucket{le="2.097152e+06"} 668351
etcd_debugging_lease_ttl_total_bucket{le="4.194304e+06"} 668351
etcd_debugging_lease_ttl_total_bucket{le="8.388608e+06"} 668351
etcd_debugging_lease_ttl_total_bucket{le="+Inf"} 668351
etcd_debugging_lease_ttl_total_sum 1.6323825e+07
etcd_debugging_lease_ttl_total_count 668351
etcd_debugging_mvcc_compact_revision 4.9211581e+07
etcd_debugging_mvcc_current_revision 4.9213095e+07
etcd_debugging_mvcc_db_compaction_keys_total 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="1"} 118599
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="2"} 158768
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="4"} 169302
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="8"} 177131
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="16"} 182535
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="32"} 186968
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="64"} 190780
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="128"} 191534
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="256"} 192008
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="512"} 192361
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="1024"} 192560
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="2048"} 192593
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="4096"} 192609
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="+Inf"} 192613
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_sum 1.123765e+06
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_count 192613
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="100"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="200"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="400"} 5599
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="800"} 7058
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="1600"} 7344
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="3200"} 7396
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="6400"} 7404
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="12800"} 7407
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="25600"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="51200"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="102400"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="204800"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="409600"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="819200"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="+Inf"} 7408
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_sum 3.213845e+06
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_count 7408
etcd_debugging_mvcc_db_total_size_in_bytes 9.2463104e+07
etcd_debugging_mvcc_delete_total 27461
etcd_debugging_mvcc_events_total 1.967434e+06
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="0.5"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="1"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="2"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="4"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="8"} 53
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="16"} 5272
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="32"} 7347
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="64"} 7406
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="128"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="256"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="512"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="1024"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="2048"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="4096"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="+Inf"} 7408
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_sum 117353
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_count 7408
etcd_debugging_mvcc_keys_total 25444
etcd_debugging_mvcc_pending_events_total 0
etcd_debugging_mvcc_put_total 9.914822e+06
etcd_debugging_mvcc_range_total 2.2713633e+07
etcd_debugging_mvcc_slow_watcher_total 0
etcd_debugging_mvcc_txn_total 1178
etcd_debugging_mvcc_watch_stream_total 51
etcd_debugging_mvcc_watcher_total 51
etcd_debugging_server_lease_expired_total 161504
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.001"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.002"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.004"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.008"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.016"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.032"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.064"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.128"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.256"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.512"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="1.024"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="2.048"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="4.096"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="8.192"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="+Inf"} 120
etcd_debugging_snap_save_marshalling_duration_seconds_sum 0.005191642
etcd_debugging_snap_save_marshalling_duration_seconds_count 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.001"} 0
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.002"} 0
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.004"} 0
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.008"} 93
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.016"} 105
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.032"} 111
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.064"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.128"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.256"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.512"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="1.024"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="2.048"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="4.096"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="8.192"} 120
etcd_debugging_snap_save_total_duration_seconds_bucket{le="+Inf"} 120
etcd_debugging_snap_save_total_duration_seconds_sum 1.2058936850000002
etcd_debugging_snap_save_total_duration_seconds_count 120
etcd_debugging_store_expires_total 0
etcd_debugging_store_reads_total{action="get"} 671549
etcd_debugging_store_reads_total{action="getRecursive"} 2
etcd_debugging_store_watch_requests_total 0
etcd_debugging_store_watchers 0
etcd_debugging_store_writes_total{action="set"} 1
etcd_disk_backend_commit_duration_seconds_bucket{le="0.001"} 0
etcd_disk_backend_commit_duration_seconds_bucket{le="0.002"} 0
etcd_disk_backend_commit_duration_seconds_bucket{le="0.004"} 11514
etcd_disk_backend_commit_duration_seconds_bucket{le="0.008"} 4.7121e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="0.016"} 5.42683e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="0.032"} 6.123127e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="0.064"} 6.5084e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="0.128"} 6.555657e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="0.256"} 6.636167e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="0.512"} 6.689024e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="1.024"} 6.712746e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="2.048"} 6.716085e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="4.096"} 6.716701e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="8.192"} 6.716885e+06
etcd_disk_backend_commit_duration_seconds_bucket{le="+Inf"} 6.716983e+06
etcd_disk_backend_commit_duration_seconds_sum 129577.6749580871
etcd_disk_backend_commit_duration_seconds_count 6.716983e+06
etcd_disk_backend_defrag_duration_seconds_bucket{le="0.1"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="0.2"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="0.4"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="0.8"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="1.6"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="3.2"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="6.4"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="12.8"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="25.6"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="51.2"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="102.4"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="204.8"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="409.6"} 0
etcd_disk_backend_defrag_duration_seconds_bucket{le="+Inf"} 0
etcd_disk_backend_defrag_duration_seconds_sum 0
etcd_disk_backend_defrag_duration_seconds_count 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.01"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.02"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.04"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.08"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.16"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.32"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.64"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="1.28"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="2.56"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="5.12"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="10.24"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="20.48"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="40.96"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="81.92"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="163.84"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="327.68"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="655.36"} 51
etcd_disk_backend_snapshot_duration_seconds_bucket{le="+Inf"} 51
etcd_disk_backend_snapshot_duration_seconds_sum 77.75367145099999
etcd_disk_backend_snapshot_duration_seconds_count 51
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 0
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.002"} 208222
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 7.031156e+06
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.008"} 9.340418e+06
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.016"} 1.0630151e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.032"} 1.1344916e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.064"} 1.1505489e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.128"} 1.1577606e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.256"} 1.1680821e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.512"} 1.1747596e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="1.024"} 1.1771849e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="2.048"} 1.1774509e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="4.096"} 1.1775099e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="8.192"} 1.1775282e+07
etcd_disk_wal_fsync_duration_seconds_bucket{le="+Inf"} 1.1775397e+07
etcd_disk_wal_fsync_duration_seconds_sum 140588.59175589486
etcd_disk_wal_fsync_duration_seconds_count 1.1775397e+07
etcd_grpc_proxy_cache_hits_total 0
etcd_grpc_proxy_cache_keys_total 0
etcd_grpc_proxy_cache_misses_total 0
etcd_grpc_proxy_events_coalescing_total 0
etcd_grpc_proxy_watchers_coalescing_total 0
etcd_mvcc_db_open_read_transactions 1
etcd_mvcc_db_total_size_in_bytes 9.2463104e+07
etcd_mvcc_db_total_size_in_use_in_bytes 7.5935744e+07
etcd_mvcc_delete_total 27461
etcd_mvcc_hash_duration_seconds_bucket{le="0.01"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="0.02"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="0.04"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="0.08"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="0.16"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="0.32"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="0.64"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="1.28"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="2.56"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="5.12"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="10.24"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="20.48"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="40.96"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="81.92"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="163.84"} 0
etcd_mvcc_hash_duration_seconds_bucket{le="+Inf"} 0
etcd_mvcc_hash_duration_seconds_sum 0
etcd_mvcc_hash_duration_seconds_count 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.01"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.02"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.04"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.08"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.16"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.32"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="0.64"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="1.28"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="2.56"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="5.12"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="10.24"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="20.48"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="40.96"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="81.92"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="163.84"} 0
etcd_mvcc_hash_rev_duration_seconds_bucket{le="+Inf"} 0
etcd_mvcc_hash_rev_duration_seconds_sum 0
etcd_mvcc_hash_rev_duration_seconds_count 0
etcd_mvcc_put_total 9.914822e+06
etcd_mvcc_range_total 2.2713633e+07
etcd_mvcc_txn_total 1178
etcd_network_active_peers{Local="aa31e874a0e4743e",Remote="247126a4d20e2d60"} 1
etcd_network_active_peers{Local="aa31e874a0e4743e",Remote="812ea53d2f20f8f8"} 1
etcd_network_client_grpc_received_bytes_total 4.262317765e+09
etcd_network_client_grpc_sent_bytes_total 3.7246420588e+10
etcd_network_peer_received_bytes_total{From="0"} 1.6001142e+08
etcd_network_peer_received_bytes_total{From="247126a4d20e2d60"} 9.185694077e+09
etcd_network_peer_received_bytes_total{From="812ea53d2f20f8f8"} 6.744852652e+09
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0001"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0002"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0004"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0008"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0016"} 72016
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0032"} 74069
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0064"} 74080
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0128"} 74080
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0256"} 148142
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.0512"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.1024"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.2048"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.4096"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="0.8192"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="1.6384"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="3.2768"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="247126a4d20e2d60",le="+Inf"} 148158
etcd_network_peer_round_trip_time_seconds_sum{To="247126a4d20e2d60"} 1658.7367360080052
etcd_network_peer_round_trip_time_seconds_count{To="247126a4d20e2d60"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0001"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0002"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0004"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0008"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0016"} 71870
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0032"} 74074
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0064"} 74080
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0128"} 74080
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0256"} 147948
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.0512"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.1024"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.2048"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.4096"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="0.8192"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="1.6384"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="3.2768"} 148158
etcd_network_peer_round_trip_time_seconds_bucket{To="812ea53d2f20f8f8",le="+Inf"} 148158
etcd_network_peer_round_trip_time_seconds_sum{To="812ea53d2f20f8f8"} 1782.569949561006
etcd_network_peer_round_trip_time_seconds_count{To="812ea53d2f20f8f8"} 148158
etcd_network_peer_sent_bytes_total{To="247126a4d20e2d60"} 7.601589685e+09
etcd_network_peer_sent_bytes_total{To="812ea53d2f20f8f8"} 6.366772085e+09
etcd_server_go_version{server_go_version="go1.12.12"} 1
etcd_server_has_leader 1
etcd_server_health_failures 939
etcd_server_health_success 221955
etcd_server_heartbeat_send_failures_total 984
etcd_server_id{server_id="aa31e874a0e4743e"} 1
etcd_server_is_leader 0
etcd_server_is_learner 0
etcd_server_leader_changes_seen_total 31
etcd_server_learner_promote_successes 0
etcd_server_proposals_applied_total 5.8995461e+07
etcd_server_proposals_committed_total 5.8995461e+07
etcd_server_proposals_failed_total 1385
etcd_server_proposals_pending 0
etcd_server_quota_backend_bytes 2.147483648e+09
etcd_server_read_indexes_failed_total 29
etcd_server_slow_apply_total 453291
etcd_server_slow_read_indexes_total 94
etcd_server_snapshot_apply_in_progress_total 0
etcd_server_version{server_version="3.4.3"} 1
etcd_snap_db_fsync_duration_seconds_bucket{le="0.001"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.002"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.004"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.008"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.016"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.032"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.064"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.128"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.256"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="0.512"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="1.024"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="2.048"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="4.096"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="8.192"} 0
etcd_snap_db_fsync_duration_seconds_bucket{le="+Inf"} 0
etcd_snap_db_fsync_duration_seconds_sum 0
etcd_snap_db_fsync_duration_seconds_count 0
etcd_snap_db_save_total_duration_seconds_bucket{le="0.1"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="0.2"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="0.4"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="0.8"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="1.6"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="3.2"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="6.4"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="12.8"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="25.6"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="51.2"} 0
etcd_snap_db_save_total_duration_seconds_bucket{le="+Inf"} 0
etcd_snap_db_save_total_duration_seconds_sum 0
etcd_snap_db_save_total_duration_seconds_count 0
etcd_snap_fsync_duration_seconds_bucket{le="0.001"} 0
etcd_snap_fsync_duration_seconds_bucket{le="0.002"} 0
etcd_snap_fsync_duration_seconds_bucket{le="0.004"} 0
etcd_snap_fsync_duration_seconds_bucket{le="0.008"} 93
etcd_snap_fsync_duration_seconds_bucket{le="0.016"} 105
etcd_snap_fsync_duration_seconds_bucket{le="0.032"} 111
etcd_snap_fsync_duration_seconds_bucket{le="0.064"} 120
etcd_snap_fsync_duration_seconds_bucket{le="0.128"} 120
etcd_snap_fsync_duration_seconds_bucket{le="0.256"} 120
etcd_snap_fsync_duration_seconds_bucket{le="0.512"} 120
etcd_snap_fsync_duration_seconds_bucket{le="1.024"} 120
etcd_snap_fsync_duration_seconds_bucket{le="2.048"} 120
etcd_snap_fsync_duration_seconds_bucket{le="4.096"} 120
etcd_snap_fsync_duration_seconds_bucket{le="8.192"} 120
etcd_snap_fsync_duration_seconds_bucket{le="+Inf"} 120
etcd_snap_fsync_duration_seconds_sum 1.1969665500000004
etcd_snap_fsync_duration_seconds_count 120

@superseb
Contributor

I don't think this is fast enough. I guess you used the command from https://www.ibm.com/cloud/blog/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd, where it states: "All you have to do then is look at the output and check if the 99th percentile of fdatasync durations is less than 10ms. If that is the case, then your storage is fast enough."

Your example shows the 99th percentile at 485ms. The IOPS shown are also below the minimum that etcd describes at https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#disks.

Before you do anything to the current cluster, I would start with doing some tests on other machines on the VM storage and check if you can get better storage attached to them to be used.
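
For comparison, the same fio test can also be pointed at a test directory on the disk that actually backs the etcd data (a sketch; /var/lib/etcd is the usual RKE host path for etcd data, but verify the bind mount with docker inspect etcd first):

# run the write/fdatasync test on the filesystem that holds the etcd data, then clean up
mkdir -p /var/lib/fio-test
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/fio-test --size=22m --bs=2300 --name=etcdtest
rm -rf /var/lib/fio-test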

@kombiwagon
Author

kombiwagon commented Oct 27, 2020

So, if I understand correctly, these problems can have a direct impact on other components like kube-scheduler and kube-controller-manager (frequent leader elections)?

Moreover, after I repeated the fio test, the results were better, for example for write:
IOPS=130,
99.00th=[42730] (usec).
So maybe the reason is elsewhere.

@superseb
Contributor

There is more information here: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#prerequisites

Fluctuating results are also bad; 42630 usec is still not good enough, although the IOPS seem a bit better. I would advise improving storage performance to see if that clears up your issues.

@kombiwagon
Author

I understand, but I am not sure whether that is possible in my situation; I will try.
But is it possible to set some parameters that would make etcd less sensitive to high disk latency? For example, increasing
'--heartbeat-interval' and '--election-timeout'?
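
For reference, a sketch of how those flags would be passed through RKE (the flag names are real etcd options; the values below are only illustrative, not a recommendation):

# cluster.yml fragment (illustrative values only):
#   services:
#     etcd:
#       extra_args:
#         heartbeat-interval: "500"
#         election-timeout: "5000"
# After 'rke up', check which flags the etcd container is actually running with:
docker inspect etcd | grep -E 'heartbeat-interval|election-timeout'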

@superseb
Contributor

To be honest, even with the highest values possible (which are only meant to support high-latency setups, not really to lower the IO requirements), this will keep hurting you. As linked before (https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#disks):

Fast disks are the most critical factor for etcd deployment performance and stability.

A slow disk will increase etcd request latency and potentially hurt cluster stability.

and the example configurations: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations

@kombiwagon
Author

kombiwagon commented Oct 29, 2020

OK, I get it. But I'd like to come back to kube-scheduler and kube-controller-manager: can very frequent leader changes also be caused by etcd? Thank you very much for the answers.

I have errors in 'calico-kube-controllers' pod:
[ERROR][1] main.go 234: Failed to reach apiserver error=nil
[ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
[ERROR][1] main.go 203: Failed to verify datastore error=Get https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded

I noticed that when these errors show up, my kube-scheduler and kube-controller-manager change leadership.
I did not notice any errors in kube-proxy, but I see errors in kube-apiserver, for example:
apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}

I have no idea why this is happening. Generally the cluster works, but this is a concern. I would be very grateful for your help.

@superseb
Contributor

Pretty sure this is also caused by etcd, both the leader changes (not being able to hold a lock) and the context canceled/context deadline exceeded. As etcd is the most important part, I would optimize this and I'm pretty sure your issues will go away. Let me know if you think there is a bug that we can fix.

@mrolmedo

mrolmedo commented May 11, 2022

Hi there,
I had the same situation on my cluster: frequent leader changes and restarts of those components. When I checked the etcd logs, they were full of "failed to send out heartbeat on time". In our particular case, there was network latency caused by a scheduled job. Once the job was disabled, there were no more leader changes and restarts.
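
A quick way to check for that same symptom on an RKE node (a sketch; assumes the etcd container is named etcd, as in the commands earlier in this thread):

docker logs etcd 2>&1 | grep -c 'failed to send out heartbeat on time'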
