
kube-controller-manager 1.20.5 ttl_controller panic when calling patchNodeWithAnnotation #101045

Closed
s1113950 opened this issue Apr 13, 2021 · 9 comments
Labels: kind/bug, needs-triage, sig/apps

Comments


s1113950 commented Apr 13, 2021

What happened:

We (Palantir) are in the process of upgrading our k8s stack from 1.19.5 to 1.20.5 and are noticing occasional panics in the ttl_controller of the kube-controller-manager. We run 3 controller-managers; alerting is fine and they are able to start, but eventually they fail at a rate of ~100 panics per 24-hour period.

Sample logs from a test stack after ~20 minutes of running, after the kube-controller-manager pod successfully acquired the lease kube-system/kube-controller-manager and the ttl_controller tried to patch a node with an annotation:

{"ts":1618271404235.5227,"msg":"Version: v1.20.501\n","v":0}
{"ts":1618271404236.2263,"msg":"Starting serving-cert::/etc/vault-cert-fetch-output/cert.pem::/etc/vault-cert-fetch-output/key.pem\n","v":0}
{"ts":1618271404236.3635,"msg":"loaded serving cert [\"serving-cert::/etc/vault-cert-fetch-output/cert.pem::/etc/vault-cert-fetch-output/key.pem\"]: \"
system:kube-controller-manager\" [serving,client] validServingFor=[10.0.2.112,10.0.177.222] issuer=\"etcdca\" (2021-04-12 23:49:32 +0000 UTC to 2021-04
-16 00:50:02 +0000 UTC (now=2021-04-12 23:50:04.236339277 +0000 UTC))\n","v":2}
{"ts":1618271404236.5293,"msg":"loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1618271404\" [serving] validServingFor=[apise
rver-loopback-client] issuer=\"apiserver-loopback-client-ca@1618271404\" (2021-04-12 22:50:03 +0000 UTC to 2022-04-12 22:50:03 +0000 UTC (now=2021-04-1
2 23:50:04.236518161 +0000 UTC))\n","v":2}
{"ts":1618271404236.5532,"msg":"Serving securely on 127.0.0.1:10257\n","v":0}
{"ts":1618271404236.6653,"msg":"Starting DynamicServingCertificateController\n","v":0}
{"ts":1618271404236.8499,"msg":"Serving insecurely on [::]:10252\n","v":0}
{"ts":1618271404236.894,"msg":"attempting to acquire leader lease kube-system/kube-controller-manager...\n","v":0}
{"ts":1618272518571.8079,"msg":"successfully acquired lease kube-system/kube-controller-manager\n","v":0}
{"ts":1618272518571.972,"msg":"Event occurred","v":0,"object":"kube-system/kube-controller-manager","kind":"Lease","apiVersion":"coordination.k8s.io/v1
","type":"Normal","reason":"LeaderElection","message":"ip-10-0-2-112.ec2.internal_4a226bb7-be79-41b1-99a2-a6c314dca8c1 became leader"}
{"ts":1618272518576.171,"msg":"using dynamic client builder\n","v":1}
{"ts":1618272518635.326,"msg":"WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future r
elease\n","v":0}
{"ts":1618272518635.4878,"msg":"Building AWS cloudprovider\n","v":0}
{"ts":1618272518635.5405,"msg":"Zone not specified in configuration file; querying AWS metadata service\n","v":0}
{"ts":1618272518845.8572,"msg":"AWS cloud filtering on ClusterID: test-cluster\n","v":0}
{"ts":1618272518845.8953,"msg":"Setting up informers for Cloud\n","v":0}
{"ts":1618272518847.5183,"msg":"Waiting for caches to sync for tokens\n","v":0}
{"ts":1618272518847.6484,"msg":"Starting reflector *v1.Node (18h10m1.451747007s) from k8s.io/client-go/informers/factory.go:134\n","v":2}
{"ts":1618272518847.716,"msg":"Starting reflector *v1.ServiceAccount (18h10m1.451747007s) from k8s.io/client-go/informers/factory.go:134\n","v":2}
{"ts":1618272518847.7546,"msg":"Starting reflector *v1.Secret (18h10m1.451747007s) from k8s.io/client-go/informers/factory.go:134\n","v":2}
{"ts":1618272518857.548,"msg":"Starting \"namespace\"\n","v":1}
{"ts":1618272518891.2852,"msg":"Started \"namespace\"\n","v":0}
{"ts":1618272518891.3125,"msg":"Starting \"serviceaccount\"\n","v":1}
{"ts":1618272518891.3555,"msg":"Starting namespace controller\n","v":0}
{"ts":1618272518891.3782,"msg":"Waiting for caches to sync for namespace\n","v":0}
{"ts":1618272518894.8984,"msg":"Started \"serviceaccount\"\n","v":0}
{"ts":1618272518894.9207,"msg":"Starting \"csrsigning\"\n","v":1}
{"ts":1618272518894.932,"msg":"skipping CSR signer controller because no csr cert/key was specified\n","v":2}
{"ts":1618272518894.937,"msg":"Skipping \"csrsigning\"\n","v":0}
{"ts":1618272518894.9412,"msg":"\"bootstrapsigner\" is disabled\n","v":0}
{"ts":1618272518894.945,"msg":"\"tokencleaner\" is disabled\n","v":0}
{"ts":1618272518894.9492,"msg":"Starting \"nodeipam\"\n","v":1}
{"ts":1618272518894.954,"msg":"Skipping \"nodeipam\"\n","v":0}
{"ts":1618272518894.9583,"msg":"Starting \"ephemeral-volume\"\n","v":1}
{"ts":1618272518894.9653,"msg":"Skipping \"ephemeral-volume\"\n","v":0}
{"ts":1618272518894.979,"msg":"Starting \"endpointslice\"\n","v":1}
{"ts":1618272518895.004,"msg":"Starting service account controller\n","v":0}
{"ts":1618272518895.0242,"msg":"Waiting for caches to sync for service account\n","v":0}
{"ts":1618272518899.6924,"msg":"Started \"endpointslice\"\n","v":0}
{"ts":1618272518899.7122,"msg":"Starting \"resourcequota\"\n","v":1}
{"ts":1618272518899.8171,"msg":"Starting endpoint slice controller\n","v":0}
{"ts":1618272518899.831,"msg":"Waiting for caches to sync for endpoint_slice\n","v":0}
{"ts":1618272518929.125,"msg":"QuotaMonitor created object count evaluator for serviceaccounts\n","v":0}
...
{"ts":1618272518930.9832,"msg":"Started \"resourcequota\"\n","v":0}
{"ts":1618272518930.9937,"msg":"Starting \"deployment\"\n","v":1}
{"ts":1618272518931.0142,"msg":"Starting resource quota controller\n","v":0}
{"ts":1618272518931.0454,"msg":"Waiting for caches to sync for resource quota\n","v":0}
{"ts":1618272518931.0708,"msg":"QuotaMonitor running\n","v":0}
{"ts":1618272518935.4028,"msg":"Started \"deployment\"\n","v":0}
{"ts":1618272518935.424,"msg":"Starting \"ttl-after-finished\"\n","v":1}
{"ts":1618272518935.4307,"msg":"Skipping \"ttl-after-finished\"\n","v":0}
{"ts":1618272518935.4353,"msg":"Starting \"root-ca-cert-publisher\"\n","v":1}
{"ts":1618272518935.467,"msg":"Starting deployment controller\n","v":0}
{"ts":1618272518935.4978,"msg":"Waiting for caches to sync for deployment\n","v":0}
{"ts":1618272518938.9702,"msg":"Started \"root-ca-cert-publisher\"\n","v":0}
{"ts":1618272518938.9875,"msg":"Starting \"pvc-protection\"\n","v":1}
{"ts":1618272518939.0833,"msg":"Starting root CA certificate configmap publisher\n","v":0}
{"ts":1618272518939.0967,"msg":"Waiting for caches to sync for crt configmap\n","v":0}
{"ts":1618272518943.5696,"msg":"Started \"pvc-protection\"\n","v":0}
{"ts":1618272518943.5938,"msg":"Starting \"replicationcontroller\"\n","v":1}
{"ts":1618272518943.6116,"msg":"Starting PVC protection controller\n","v":0}
{"ts":1618272518943.6257,"msg":"Waiting for caches to sync for PVC protection\n","v":0}
{"ts":1618272518947.6274,"msg":"Caches are synced for tokens \n","v":0}
{"ts":1618272518947.7432,"msg":"Started \"replicationcontroller\"\n","v":0}
{"ts":1618272518947.765,"msg":"Starting \"replicaset\"\n","v":1}
{"ts":1618272518947.8618,"msg":"Starting replicationcontroller controller\n","v":0}
{"ts":1618272518947.8818,"msg":"Waiting for caches to sync for ReplicationController\n","v":0}
{"ts":1618272518951.6248,"msg":"Started \"replicaset\"\n","v":0}
{"ts":1618272518951.6506,"msg":"Starting \"horizontalpodautoscaling\"\n","v":1}
{"ts":1618272518951.759,"msg":"Starting replicaset controller\n","v":0}
{"ts":1618272518952.0635,"msg":"Waiting for caches to sync for ReplicaSet\n","v":0}
{"ts":1618272518966.9988,"msg":"Started \"horizontalpodautoscaling\"\n","v":0}
{"ts":1618272518967.0261,"msg":"Starting \"csrcleaner\"\n","v":1}
{"ts":1618272518967.0332,"msg":"Starting HPA controller\n","v":0}
{"ts":1618272518967.061,"msg":"Waiting for caches to sync for HPA\n","v":0}
{"ts":1618272518970.8284,"msg":"Started \"csrcleaner\"\n","v":0}
{"ts":1618272518970.847,"msg":"Starting \"ttl\"\n","v":1}
{"ts":1618272518970.9106,"msg":"Starting CSR cleaner controller\n","v":0}
{"ts":1618272518974.0059,"msg":"Started \"ttl\"\n","v":0}
{"ts":1618272518974.026,"msg":"Starting \"nodelifecycle\"\n","v":1}
{"ts":1618272518974.119,"msg":"Starting TTL controller\n","v":0}
{"ts":1618272518974.1313,"msg":"Waiting for caches to sync for TTL\n","v":0}
{"ts":1618272518974.139,"msg":"Caches are synced for TTL \n","v":0}
{"ts":1618272518978.5352,"msg":"Sending events to api server.\n","v":0}
{"ts":1618272518978.7031,"msg":"Sending events to api server.\n","v":0}
{"ts":1618272518978.8193,"msg":"Controller will reconcile labels.\n","v":0}
{"ts":1618272518978.8745,"msg":"Started \"nodelifecycle\"\n","v":0}
{"ts":1618272518978.8848,"msg":"Starting \"persistentvolume-binder\"\n","v":1}
{"ts":1618272518978.9602,"msg":"Starting node controller\n","v":0}
{"ts":1618272518978.9873,"msg":"Waiting for caches to sync for taint\n","v":0}
{"ts":1618272518982.2708,"msg":"Loaded volume plugin \"kubernetes.io/host-path\"\n","v":1}
{"ts":1618272518982.2866,"msg":"Loaded volume plugin \"kubernetes.io/nfs\"\n","v":1}
{"ts":1618272518982.2986,"msg":"Loaded volume plugin \"kubernetes.io/glusterfs\"\n","v":1}
{"ts":1618272518982.3076,"msg":"Loaded volume plugin \"kubernetes.io/rbd\"\n","v":1}
{"ts":1618272518982.3164,"msg":"Loaded volume plugin \"kubernetes.io/quobyte\"\n","v":1}
{"ts":1618272518982.3228,"msg":"Loaded volume plugin \"kubernetes.io/aws-ebs\"\n","v":1}
{"ts":1618272518982.3296,"msg":"Loaded volume plugin \"kubernetes.io/gce-pd\"\n","v":1}
{"ts":1618272518982.336,"msg":"Loaded volume plugin \"kubernetes.io/cinder\"\n","v":1}
{"ts":1618272518982.3435,"msg":"Loaded volume plugin \"kubernetes.io/azure-disk\"\n","v":1}
{"ts":1618272518982.352,"msg":"Loaded volume plugin \"kubernetes.io/azure-file\"\n","v":1}
{"ts":1618272518982.3586,"msg":"Loaded volume plugin \"kubernetes.io/vsphere-volume\"\n","v":1}
{"ts":1618272518982.367,"msg":"Loaded volume plugin \"kubernetes.io/flocker\"\n","v":1}
{"ts":1618272518982.38,"msg":"Loaded volume plugin \"kubernetes.io/portworx-volume\"\n","v":1}
{"ts":1618272518982.3933,"msg":"Loaded volume plugin \"kubernetes.io/scaleio\"\n","v":1}
{"ts":1618272518982.4026,"msg":"Loaded volume plugin \"kubernetes.io/local-volume\"\n","v":1}
{"ts":1618272518982.4136,"msg":"Loaded volume plugin \"kubernetes.io/storageos\"\n","v":1}
{"ts":1618272518982.436,"msg":"Loaded volume plugin \"kubernetes.io/csi\"\n","v":1}
{"ts":1618272518982.485,"msg":"Started \"persistentvolume-binder\"\n","v":0}
{"ts":1618272518982.495,"msg":"Starting \"endpoint\"\n","v":1}
{"ts":1618272518982.5056,"msg":"Starting persistent volume controller\n","v":0}
{"ts":1618272518982.5256,"msg":"Waiting for caches to sync for persistent volume\n","v":0}
{"ts":1618272518986.4138,"msg":"Started \"endpoint\"\n","v":0}
{"ts":1618272518986.4355,"msg":"Starting \"statefulset\"\n","v":1}
{"ts":1618272518986.4956,"msg":"Starting endpoint controller\n","v":0}
{"ts":1618272518986.5232,"msg":"Waiting for caches to sync for endpoint\n","v":0}
{"ts":1618272518989.6577,"msg":"Started \"statefulset\"\n","v":0}
{"ts":1618272518989.6736,"msg":"Starting \"cronjob\"\n","v":1}
{"ts":1618272518989.7573,"msg":"Starting stateful set controller\n","v":0}
{"ts":1618272518989.7761,"msg":"Waiting for caches to sync for stateful set\n","v":0}
{"ts":1618272518994.16,"msg":"Started \"cronjob\"\n","v":0}
{"ts":1618272518994.1763,"msg":"Starting \"route\"\n","v":1}
{"ts":1618272518994.1848,"msg":"Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: false.\n","v":0}
{"ts":1618272518994.19,"msg":"Skipping \"route\"\n","v":0}
{"ts":1618272518994.1943,"msg":"Starting \"pv-protection\"\n","v":1}
{"ts":1618272518994.2363,"msg":"Starting CronJob Manager\n","v":0}
{"ts":1618272518996.7854,"msg":"Observed a panic: \"invalid memory address or nil pointer dereference\" (runtime error: invalid memory address or nil p
ointer dereference)\ngoroutine 2210 [running]:\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x3f2eec0, 0x6f5d020)
	/go/src
/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89\npanic(0x3f2eec0, 0x6f5d020)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*jsonEncoder).AppendDuration(0xc001832c90, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/json_encoder.go:225 +0x53\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*jsonEncoder).AddDuration(0xc001832c90, 0x47abe72, 0x7, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/json_encoder.go:123 +0x57\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.Field.AddTo(0x47abe72, 0x7, 0x8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4ec5a00, 0xc001832c90)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/field.go:126 +0x4d6\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.addFields(0x4ec5a00, 0xc001832c90, 0xc0002ac480, 0x3, 0x3)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/field.go:199 +0xcf\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*jsonEncoder).EncodeEntry(0xc00011cd80, 0x0, 0xc0155621bb67e89d, 0x103a2253034, 0x6f983c0, 0x0, 0x0, 0x47db5e3, 0x16, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/json_encoder.go:364 +0x1f2\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*ioCore).Write(0xc00011cdb0, 0x0, 0xc0155621bb67e89d, 0x103a2253034, 0x6f983c0, 0x0, 0x0, 0x47db5e3, 0x16, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/core.go:86 +0xa9\nk8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000b653f0, 0xc0002ac480, 0x3, 0x3)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/entry.go:215 +0x12d\nk8s.io/kubernetes/vendor/k8s.io/component-base/logs/json.(*zapLogger).Info(0xc000e529f0, 0x47db5e3, 0x16, 0xc000ea7480, 0x4, 0x4)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/json/json.go:61 +0x194\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).infoS(0x6f98860, 0x4e9b540, 0xc000e529f0, 0x0, 0x0, 0x47db5e3, 0x16, 0xc000ea7480, 0x4, 0x4)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:792 +0x8d\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.Verbose.InfoS(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1362\nk8s.io/kubernetes/pkg/controller/ttl.(*Controller).patchNodeWithAnnotation(0xc000251ce0, 0xc0008d0300, 0x47fd924, 0x1c, 0x0, 0x0, 0xc000f99da0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/ttl/ttl_controller.go:276 +0x732\nk8s.io/kubernetes/pkg/controller/ttl.(*Controller).updateNodeIfNeeded(0xc000251ce0, 0xc00111ff00, 0x1a, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/ttl/ttl_controller.go:295 +0x16d\nk8s.io/kubernetes/pkg/controller/ttl.(*Controller).processItem(0xc000251ce0, 0x203000)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/ttl/ttl_controller.go:216 +0xcd\nk8s.io/kubernetes/pkg/controller/ttl.(*Controller).worker(0xc000251ce0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/ttl/ttl_controller.go:205 +0x2b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc001171900)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001171900, 0x4df84c0, 0xc000a9c300, 0x1, 0xc000b5f080)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001171900, 0x3b9aca00, 0x0, 0x1, 0xc000b5f080)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc001171900, 0x3b9aca00, 0xc000b5f080)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d\ncreated by k8s.io/kubernetes/pkg/controller/ttl.(*Controller).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/ttl/ttl_controller.go:129 +0x1fd\n","v":0}
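
For reference, the trace bottoms out in zapcore's jsonEncoder.AppendDuration after klog's InfoS hands the fields to the JSON logger in k8s.io/component-base/logs/json, which points at a time.Duration field being encoded without a duration encoder configured. Below is a minimal, standalone sketch of that failure mode, assuming the zap version vendored in 1.20 (newer zap releases guard against a nil EncodeDuration, so the same program may simply fall back to printing nanoseconds there); the log message and the "new_ttl" field name are illustrative, not taken from the ttl_controller source.

package main

import (
	"os"
	"time"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// Encoder config that leaves EncodeDuration unset (nil), mirroring what the
	// stack trace suggests about the JSON logging format's encoder configuration.
	cfg := zapcore.EncoderConfig{
		MessageKey: "msg",
		TimeKey:    "ts",
		EncodeTime: zapcore.EpochMillisTimeEncoder,
	}
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(cfg),
		zapcore.AddSync(os.Stdout),
		zapcore.InfoLevel,
	))

	// Logging a time.Duration field reaches jsonEncoder.AppendDuration; with a
	// nil EncodeDuration the encoder calls a nil function value and panics with
	// "invalid memory address or nil pointer dereference", as in the trace above.
	logger.Info("Deleting expired object", zap.Duration("new_ttl", 10*time.Second))
}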

What you expected to happen:

kube-controller-manager to run with no panics

How to reproduce it (as minimally and precisely as possible):

We run hyperkube-based containers as static pods on AWS with the following flags on the controller-manager:

    - --bind-address=127.0.0.1
    - --port=10252
    - --tls-cert-file=/etc/vault-cert-fetch-output/cert.pem
    - --tls-private-key-file=/etc/vault-cert-fetch-output/key.pem
    - --feature-gates=ExpandPersistentVolumes=true
    - --cloud-provider=aws
    - --cluster-name=k8s
    - --configure-cloud-routes=false
    - --kubeconfig=/etc/kube/conf/kubeconfig
    - --use-service-account-credentials
    - --master=https://127.0.0.1:6443
    - --root-ca-file=/etc/vault-cert-fetch-output/ca.pem
    - --service-cluster-ip-range=10.100.0.0/16
    - --service-account-private-key-file=/etc/vault-cert-fetch-output/service-account.pem
    - --concurrent-serviceaccount-token-syncs=15
    - --kube-api-burst=60
    - --kube-api-qps=30
    - --profiling=false
    - --terminated-pod-gc-threshold=12500
    - --logging-format=json
    - --v=2

Anything else we need to know?:

We have our own fork of (mostly) vanilla k8s and didn't experience this prior to upgrading to k8s 1.20.5. The only configuration difference between this version and 1.19 is that we added the "--logging-format=json" flag.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.501", GitCommit:"c279c2a41b0a5127d3fc5099287e18f9a3646125", GitTreeState:"clean", BuildDate:"2021-03-25T20:48:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.501", GitCommit:"c279c2a41b0a5127d3fc5099287e18f9a3646125", GitTreeState:"clean", BuildDate:"2021-03-25T20:45:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: AWS
  • OS (e.g: cat /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel (e.g. uname -a): Linux ip-10-0-2-51.ec2.internal 4.15.0-1097-aws #104-Ubuntu SMP Fri Mar 19 18:19:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: running kube-apiserver, kube-controller-manager, and kube-scheduler as static pods we configure ourselves
  • Network plugin and version (if this is a network-related bug):
  • Others:
s1113950 added the kind/bug label on Apr 13, 2021
k8s-ci-robot added the needs-sig and needs-triage labels on Apr 13, 2021
k8s-ci-robot (Contributor)

@s1113950: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

s1113950 (Author)

/sig api-machinery

k8s-ci-robot added the sig/api-machinery label and removed the needs-sig label on Apr 13, 2021
pacoxu (Member) commented Apr 13, 2021

The same error is fixed by #100013 in 1.21.
@rphillips @serathius it seems we should backport the fix to 1.20?

s1113950 (Author)

Can confirm that after I applied the patch from #100013, the panics went away.
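
For anyone stuck on a build without the backport, here is a minimal sketch of the kind of encoder configuration that avoids this class of panic: give the JSON encoder an explicit EncodeDuration so zapcore never calls a nil function when a duration field is logged. This is an illustration only, not the actual diff from #100013 (which presumably lands in k8s.io/component-base/logs/json, where the zapLogger in the stack trace lives); the message and field name are again illustrative.

package main

import (
	"os"
	"time"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	cfg := zapcore.EncoderConfig{
		MessageKey: "msg",
		TimeKey:    "ts",
		EncodeTime: zapcore.EpochMillisTimeEncoder,
		// StringDurationEncoder renders durations as strings like "10s" instead
		// of leaving the encoder function nil.
		EncodeDuration: zapcore.StringDurationEncoder,
	}
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(cfg),
		zapcore.AddSync(os.Stdout),
		zapcore.InfoLevel,
	))

	// With EncodeDuration set, this logs normally instead of panicking, e.g.:
	// {"ts":...,"msg":"Changed ttl annotation","new_ttl":"10s"}
	logger.Info("Changed ttl annotation", zap.Duration("new_ttl", 10*time.Second))
}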

fedebongio (Contributor)

ttl_controller belongs to SIG Apps
/remove-sig api-machinery
/sig apps
/cc @janetkuo

k8s-ci-robot added the sig/apps label and removed the sig/api-machinery label on Apr 15, 2021
serathius (Contributor)

/assign

serathius (Contributor) commented May 7, 2021

The fix was backported to both 1.20 and 1.19. It should be available in the next patch releases, 1.20.7 and 1.19.11, which are planned for next week (2021-05-12): https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md#detailed-release-history-for-active-branches

serathius (Contributor)

/close

k8s-ci-robot (Contributor)

@serathius: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
