
panic: flag alsologtostderr set at github.com/k3s-io/k3s/pkg/cli/cmds/log.go:78 before being defined #8293

Closed
nekinie opened this issue Sep 4, 2023 · 14 comments


@nekinie

nekinie commented Sep 4, 2023

Environmental Info:
K3s Version:

k3s version v1.27.4+k3s1 (AlpineLinux)
go version go1.21.0

Node(s) CPU architecture, OS, and Version:

Linux ams-k3s1.0az.uk 6.1.51-0-lts #1-Alpine SMP PREEMPT_DYNAMIC Mon, 04 Sep 2023 08:04:05 +0000 x86_64 GNU/Linux

Cluster Configuration:

1 server, 0 agents, default configuration.

Describe the bug:

The logging setup code at pkg/cli/cmds/log.go:78 uses the standard flag package to set a couple of values. These flags are never defined on that flagset, causing the standard flag package to panic.
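
For illustration, a minimal standalone program (not taken from the k3s source) that reproduces the go1.21 behavior with the same flag name:

    package main

    import "flag"

    func main() {
            // Setting an undefined flag has always returned a "no such flag"
            // error; under go1.21+ it additionally records this call site.
            _ = flag.Set("alsologtostderr", "true")

            // Defining the same flag afterwards now panics with:
            //   flag alsologtostderr set at .../main.go:<line> before being defined
            flag.Bool("alsologtostderr", false, "log to standard error as well as files")
    }

Built with go1.20 or earlier, the same program runs to completion; the ignored error from flag.Set was the only symptom.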

Steps To Reproduce:

  • install k3s:
    apk add k3s --repository https://dl-cdn.alpinelinux.org/alpine/edge/community
  • start k3s:
    PATH="/usr/libexec/cni/:$PATH" k3s server

Expected behavior:

Server starts without error.

Actual behavior:

Server panics at startup.

Additional context / logs:

time="2023-09-04T17:58:33Z" level=info msg="Starting k3s v1.27.4+k3s1 (AlpineLinux)"
time="2023-09-04T17:58:33Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2023-09-04T17:58:33Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2023-09-04T17:58:33Z" level=info msg="Database tables and indexes are up to date"
time="2023-09-04T17:58:33Z" level=info msg="Kine available at unix://kine.sock"
time="2023-09-04T17:58:33Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2023-09-04T17:58:34Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
time="2023-09-04T17:58:34Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259"
time="2023-09-04T17:58:34Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
time="2023-09-04T17:58:34Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false"
time="2023-09-04T17:58:34Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
time="2023-09-04T17:58:34Z" level=info msg="To join server node to cluster: k3s server -s https://45.32.239.244:6443 -t ${SERVER_NODE_TOKEN}"
time="2023-09-04T17:58:34Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
time="2023-09-04T17:58:34Z" level=info msg="To join agent node to cluster: k3s agent -s https://45.32.239.244:6443 -t ${AGENT_NODE_TOKEN}"
time="2023-09-04T17:58:34Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2023-09-04T17:58:34Z" level=info msg="Run: k3s kubectl"
W0904 17:58:34.018592    9409 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
I0904 17:58:34.018927    9409 server.go:567] external host was not specified, using 45.32.239.244
I0904 17:58:34.019491    9409 server.go:174] Version: v1.27.4-k3s1
I0904 17:58:34.019558    9409 server.go:176] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0904 17:58:34.030725    9409 shared_informer.go:311] Waiting for caches to sync for node_authorizer
time="2023-09-04T17:58:34Z" level=info msg="Waiting for API server to become available"
I0904 17:58:34.035683    9409 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0904 17:58:34.035749    9409 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0904 17:58:34.046134    9409 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0904 17:58:34.046217    9409 genericapiserver.go:752] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0904 17:58:34.046876    9409 instance.go:282] Using reconciler: lease
I0904 17:58:34.392593    9409 handler.go:232] Adding GroupVersion  v1 to ResourceManager
I0904 17:58:34.392975    9409 instance.go:651] API group "internal.apiserver.k8s.io" is not enabled, skipping.
time="2023-09-04T17:58:34Z" level=info msg="Password verified locally for node ams-k3s1.0az.uk"
time="2023-09-04T17:58:34Z" level=info msg="certificate CN=ams-k3s1.0az.uk signed by CN=k3s-server-ca@1693848727: notBefore=2023-09-04 17:32:07 +0000 UTC notAfter=2024-09-03 17:58:34 +0000 UTC"
time="2023-09-04T17:58:35Z" level=info msg="certificate CN=system:node:ams-k3s1.0az.uk,O=system:nodes signed by CN=k3s-client-ca@1693848727: notBefore=2023-09-04 17:32:07 +0000 UTC notAfter=2024-09-03 17:58:35 +0000 UTC"
time="2023-09-04T17:58:35Z" level=info msg="Module overlay was already loaded"
time="2023-09-04T17:58:35Z" level=info msg="Module nf_conntrack was already loaded"
time="2023-09-04T17:58:35Z" level=info msg="Module br_netfilter was already loaded"
time="2023-09-04T17:58:35Z" level=info msg="Module iptable_nat was already loaded"
time="2023-09-04T17:58:35Z" level=info msg="Module iptable_filter was already loaded"
time="2023-09-04T17:58:35Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
panic: flag alsologtostderr set at github.com/k3s-io/k3s/pkg/cli/cmds/log.go:78 before being defined

goroutine 2334 [running]:
flag.(*FlagSet).Var(0xc000122150, {0x5ccebe0, 0x8964529}, {0x43de2d7, 0xf}, {0x4549454, 0x49})
        flag/flag.go:1031 +0x33a
k8s.io/klog/v2.InitFlags.func1(0xc0002943f0?)
        k8s.io/klog/v2@v2.90.1/klog.go:437 +0x31
flag.(*FlagSet).VisitAll(0x89618e0?, 0xc001bddf30)
        flag/flag.go:458 +0x42
k8s.io/klog/v2.InitFlags(0x5d03618?)
        k8s.io/klog/v2@v2.90.1/klog.go:436 +0x45
github.com/k3s-io/k3s/pkg/daemons/executor.(*Embedded).Bootstrap.func1()
        github.com/k3s-io/k3s/pkg/daemons/executor/embed.go:61 +0x55
created by github.com/k3s-io/k3s/pkg/daemons/executor.(*Embedded).Bootstrap in goroutine 1
        github.com/k3s-io/k3s/pkg/daemons/executor/embed.go:54 +0x8f
@brandond
Contributor

brandond commented Sep 5, 2023

We don't maintain or support the Alpine package. I can't reproduce this on our builds available here on GitHub - we've been producing builds with that commit that have been working fine for the last 3 years.

I suspect the Alpine package is built differently somehow, and would suggest opening an issue with whoever is maintaining the Alpine package.

@brandond brandond closed this as completed Sep 5, 2023
@willat8

willat8 commented Sep 10, 2023

@ddevault

The error is fairly straightforward and apparent in the upstream code as well, despite not being reproducible from the GitHub workers (maybe your Go version is out of date -- this check in the Go stdlib was only committed on 2023-03-29, see golang/go 00401835c1d).

Commit 30e050a refactored the code found in this file and, among other changes, switched from the standard library's "flag" package to github.com/urfave/cli. The setupLogging function now uses flag.Set against flags which the stdlib is no longer aware of, causing it to panic.

I think that this code is obsolete and can safely be removed, but I'm no expert on this codebase so I could not say for sure. In any case, you have a bug and you're going to hit it sooner or later once you upgrade your Go toolchain.
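
If simply deleting the calls turns out to be unsafe, a defensive variant (hypothetical; not the actual k3s patch) would be to set only flags that are already defined on the flagset:

    // Hypothetical guard, not the k3s fix: only touch flags the stdlib
    // flagset actually knows about, so a later definition cannot panic.
    if flag.CommandLine.Lookup("alsologtostderr") != nil {
            _ = flag.Set("alsologtostderr", "true")
    }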

@brandond
Contributor

I'm not sure what you mean by our go version being out of date; we build K3s with the exact same golang version as upstream Kubernetes uses for any given release. If the Alpine build system is not doing the same, then the build isn't guaranteed to work properly.

@nekopsykose

Once you update to go 1.21 you will see the same issue.

Using an older version doesn't mean the issue doesn't exist; you'll just have to fix it later instead of sooner.

@brandond
Contributor

brandond commented Sep 11, 2023

If you're building any release of Kubernetes or K3s with golang 1.21 then you're not producing valid builds. You can find the correct version to be used in the upstream build dependencies yaml:
https://github.com/kubernetes/kubernetes/blob/release-1.28/build/dependencies.yaml#L116-L117

This sort of attention to detail is why we generally discourage folks from rebuilding k3s in their own build pipelines.

@ddevault

You've got a bug, brother. A simple one that should be easy to fix, and a bug all the same. The code is wrong. It doesn't matter what version of Go you use; one version over the other just makes the bug more apparent.

@brandond
Contributor

brandond commented Sep 12, 2023

Yes, but it's a harmless bug - I'm not particularly concerned that this specific component references a CLI flag before it's been added to the flagset.

I'm more concerned about Alpine distributing things claiming to be K3s that crash or otherwise misbehave because they're built with the wrong version of golang.

@ddevault

It's crashing and misbehaving because there is a bug in your code.

@brandond
Contributor

... that is a non-issue and only causes a crash when built with the wrong version of golang. We'll fix that when or before upstream starts using golang 1.21.

@ddevault

Great.

@willat8

willat8 commented Dec 13, 2023

Hopefully this is now on the table with #9052 coming in?

@brandond
Contributor

Yes, @galal-hussein is aware of this issue.

@brandond
Contributor

brandond commented Dec 14, 2023

Ref: this was broken by golang/go@0040183 ("flag: panic if a flag is defined after being set") in go1.21.

Prior to that, setting a value before the flag was defined WOULD NOT panic. The behavior may be unexpected if you're not aware of the side effects, but we were relying on it.
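
For reference, a paraphrase of the check that commit introduced (not the verbatim stdlib source; the helper name below is invented for this sketch):

    // In (*FlagSet).Set, when name is not a defined flag, go1.21 records
    // the call site before returning the usual error:
    f.undef[name] = callerFileAndLine() // hypothetical helper; the real code uses runtime.Caller
    return fmt.Errorf("no such flag -%v", name)

    // In (*FlagSet).Var, defining a name that was previously Set panics:
    if pos := f.undef[name]; pos != "" {
            panic(fmt.Sprintf("flag %s set at %s before being defined", name, pos))
    }

k3s ignored the error from the first half, so nothing surfaced until klog's InitFlags hit the second half during startup.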

916980name added a commit to 916980name/k3s that referenced this issue Dec 15, 2023
archlinux-github pushed a commit to archlinux/aur that referenced this issue Feb 14, 2024