
[QUESTION] Fresh install of k3s on Fedora 33 with latest docker fails to create clusters. #417

Closed
birdiesanders opened this issue Dec 2, 2020 · 9 comments
Assignees
Labels
not a bug Luckily this is not a bug with k3d after all ¯\_(ツ)_/¯

Comments

@birdiesanders

What did you do

  • How was the cluster created?

    • k3d cluster create
  • What did you do afterwards?

    • Tried to reinstall

What did you expect to happen

A cluster to be created and presented to me.

Which OS & Architecture

  • Linux

Which version of k3d

  • k3s version v1.19.3-k3s3 (default)

Which version of docker

  • output of docker version and docker info
Client: Docker Engine - Community
 Version:           20.10.0-rc1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        5cc2396
 Built:             Tue Nov 17 22:52:47 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.0-rc1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       131bf7e
  Built:            Tue Nov 17 22:50:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.1
  GitCommit:        c623d1b36f09f8ef6536a057bd658b3aa8632828
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.4.2-docker)

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 3
 Server Version: 20.10.0-rc1
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: c623d1b36f09f8ef6536a057bd658b3aa8632828
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.9.11-200.fc33.x86_64
 Operating System: Fedora 33 (KDE Plasma)
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 31.33GiB
 Name: cray
 ID: IHCN:T6MJ:D2DO:I23H:EAG7:TPRB:AZIM:UCN3:OSSW:6UBW:BGQ5:43YZ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: Support for cgroup v2 is experimental
@birdiesanders birdiesanders added the bug Something isn't working label Dec 2, 2020
@birdiesanders
Author

[dfogle@cray ~]$ k3d cluster create
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created volume 'k3d-k3s-default-images'      
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
ERRO[0003] Failed waiting for log message 'k3s is up and running' from node 'k3d-k3s-default-server-0' 
ERRO[0003] Server node 'k3d-k3s-default-server-0' failed to get ready: Node 'k3d-k3s-default-server-0' (container 'a650392171e033b695a52f5f7f9be5085ceb556c49092988e41925d6369a0aa0') not running 
ERRO[0003] Failed to create cluster >>> Rolling Back    
INFO[0003] Deleting cluster 'k3s-default'               
INFO[0003] Deleted k3d-k3s-default-server-0             
INFO[0003] Deleting cluster network 'fd80dc2cdb7367c319c84b30e05d78425ecdfec48e8a3e649c3e4bceb79e2d73' 
FATA[0003] Cluster creation FAILED, all changes have been rolled back!

@birdiesanders
Author

birdiesanders commented Dec 2, 2020

 [dfogle@cray ~]$ k3d cluster create  dev --trace
DEBU[0000] Selected runtime is 'docker.Docker'          
DEBU[0000] API-Port Mapping didn't specify hostPort, choosing one randomly... 
DEBU[0000] Got free port for API: '36901'               
TRAC[0000] PortFilterMap: map[]                         
TRAC[0000] LabelFilterMap: map[]                        
TRAC[0000] EnvFilterMap: map[]                          
DEBU[0000] '--update-default-kubeconfig set: enabling wait-for-server 
INFO[0000] Created network 'k3d-dev'                    
INFO[0000] Created volume 'k3d-dev-images'              
INFO[0001] Creating node 'k3d-dev-server-0'             
TRAC[0001] Creating node from spec
&{Name:k3d-dev-server-0 Role:server Image:docker.io/rancher/k3s:v1.19.3-k3s3 Volumes:[k3d-dev-images:/k3d/images] Env:[K3S_TOKEN=WkGVbqtHATmhItTkILCM] Cmd:[] Args:[] Ports:[] Restart:false Labels:map[k3d.cluster:dev k3d.cluster.imageVolume:k3d-dev-images k3d.cluster.network:ec8f2e6b999273cb4c03a2e3ae8a8b2705caa3cb94d2971c9043e1d524c6440c k3d.cluster.network.external:false k3d.cluster.token:WkGVbqtHATmhItTkILCM k3d.cluster.url:https://k3d-dev-server-0:6443] Network:ec8f2e6b999273cb4c03a2e3ae8a8b2705caa3cb94d2971c9043e1d524c6440c ExtraHosts:[] ServerOpts:{IsInit:false ExposeAPI:{Host:0.0.0.0 HostIP:0.0.0.0 Port:36901}} AgentOpts:{} GPURequest: State:{Running:false Status:}} 
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-dev-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=WkGVbqtHATmhItTkILCM K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3s:v1.19.3-k3s3 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:dev k3d.cluster.imageVolume:k3d-dev-images k3d.cluster.network:ec8f2e6b999273cb4c03a2e3ae8a8b2705caa3cb94d2971c9043e1d524c6440c k3d.cluster.network.external:false k3d.cluster.token:WkGVbqtHATmhItTkILCM k3d.cluster.url:https://k3d-dev-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:36901] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-dev-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] Capabilities:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00027ada0} NetworkingConfig:{EndpointsConfig:map[ec8f2e6b999273cb4c03a2e3ae8a8b2705caa3cb94d2971c9043e1d524c6440c:0xc0001b69c0]}} 
DEBU[0001] Created container k3d-dev-server-0 (ID: 94aff9c728b2f192bfc3c6374309b905cbcb872d7076889c1beb1d872871307e) 
DEBU[0001] Created node 'k3d-dev-server-0'              
DEBU[0001] Waiting for server node 'k3d-dev-server-0' to get ready 
ERRO[0003] Failed waiting for log message 'k3s is up and running' from node 'k3d-dev-server-0' 
ERRO[0003] Server node 'k3d-dev-server-0' failed to get ready: Node 'k3d-dev-server-0' (container '94aff9c728b2f192bfc3c6374309b905cbcb872d7076889c1beb1d872871307e') not running 
ERRO[0003] Failed to create cluster >>> Rolling Back    
INFO[0003] Deleting cluster 'dev'                       
DEBU[0003] Cluster Details: &{Name:dev Network:{Name:ec8f2e6b999273cb4c03a2e3ae8a8b2705caa3cb94d2971c9043e1d524c6440c External:false} Token:WkGVbqtHATmhItTkILCM Nodes:[0xc0000e4640] InitNode:<nil> ExternalDatastore:{Endpoint: CAFile: CertFile: KeyFile: Network:} CreateClusterOpts:0xc00007f8c0 ExposeAPI:{Host:0.0.0.0 HostIP:0.0.0.0 Port:36901} ServerLoadBalancer:0xc0000e4500 ImageVolume:} 
INFO[0003] Deleted k3d-dev-server-0                     
INFO[0003] Deleting cluster network 'ec8f2e6b999273cb4c03a2e3ae8a8b2705caa3cb94d2971c9043e1d524c6440c' 
FATA[0003] Cluster creation FAILED, all changes have been rolled back!

@iwilltry42 iwilltry42 self-assigned this Dec 3, 2020
@iwilltry42
Member

Hi @birdiesanders, thanks for opening this issue!
Nothing looks suspicious in the k3d logs, so let's check the k3s logs 👍
Can you please create a cluster while preventing the rollback via k3d cluster create dev --no-rollback and then check the logs of the server container via docker logs k3d-dev-server-0?
There are probably some error messages and a fatal message near the end of the log stream.
It would be great if you could post that output here for further debugging.
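For reference, the suggested debugging steps amount to roughly the following; the cluster name dev is assumed (it matches the one used elsewhere in this thread), and the tail pipe is only a convenience to focus on the end of the log stream:

# create the cluster but keep the failed server container around for inspection
k3d cluster create dev --no-rollback

# dump the k3s logs of the server node container; errors usually show up near the end
docker logs k3d-dev-server-0 2>&1 | tail -n 50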

@iwilltry42 iwilltry42 added this to the v3.3.1 milestone Dec 3, 2020
@birdiesanders
Author

[birdie@dawanda]: ~/civo-work/operators/civo-stack-operator>$ docker logs k3d-dev-server-0
time="2020-12-03T16:07:44.366499784Z" level=info msg="Starting k3s v1.18.9+k3s1 (630bebf9)"
time="2020-12-03T16:07:45.057953757Z" level=info msg="Kine listening on unix://kine.sock"
time="2020-12-03T16:07:45.245207943Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.19.0.2:172.19.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:df2ece2e551e24fb8e81154f0d8023b5da59d8358726e78ba0a117279ccdcada]"
time="2020-12-03T16:07:45.254891513Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I1203 16:07:45.255550       6 server.go:645] external host was not specified, using 172.19.0.2
I1203 16:07:45.256375       6 server.go:162] Version: v1.18.9+k3s1
I1203 16:07:45.946490       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 16:07:45.946519       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 16:07:45.948115       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 16:07:45.948135       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 16:07:45.991842       6 master.go:270] Using reconciler: lease
I1203 16:07:46.033364       6 rest.go:113] the default service ipfamily for this cluster is: IPv4
W1203 16:07:46.416905       6 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W1203 16:07:46.426747       6 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W1203 16:07:46.439741       6 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 16:07:46.459312       6 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 16:07:46.463591       6 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 16:07:46.481965       6 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 16:07:46.515192       6 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W1203 16:07:46.515229       6 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I1203 16:07:46.525940       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 16:07:46.525963       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 16:07:48.898630       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I1203 16:07:48.898641       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I1203 16:07:48.899008       6 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I1203 16:07:48.899657       6 secure_serving.go:178] Serving securely on 127.0.0.1:6444
I1203 16:07:48.899740       6 tlsconfig.go:240] Starting DynamicServingCertificateController
I1203 16:07:48.899776       6 crd_finalizer.go:266] Starting CRDFinalizer
I1203 16:07:48.899780       6 available_controller.go:387] Starting AvailableConditionController
I1203 16:07:48.899791       6 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1203 16:07:48.899816       6 controller.go:86] Starting OpenAPI controller
I1203 16:07:48.899837       6 customresource_discovery_controller.go:209] Starting DiscoveryController
I1203 16:07:48.899852       6 naming_controller.go:291] Starting NamingConditionController
I1203 16:07:48.899869       6 establishing_controller.go:76] Starting EstablishingController
I1203 16:07:48.899858       6 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1203 16:07:48.899891       6 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1203 16:07:48.899894       6 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1203 16:07:48.899917       6 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1203 16:07:48.899996       6 autoregister_controller.go:141] Starting autoregister controller
I1203 16:07:48.900017       6 cache.go:32] Waiting for caches to sync for autoregister controller
I1203 16:07:48.900053       6 crdregistration_controller.go:111] Starting crd-autoregister controller
I1203 16:07:48.900072       6 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I1203 16:07:48.900130       6 controller.go:81] Starting OpenAPI AggregationController
I1203 16:07:48.900387       6 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1203 16:07:48.900401       6 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I1203 16:07:48.900447       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I1203 16:07:48.900486       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I1203 16:07:49.000029       6 cache.go:39] Caches are synced for AvailableConditionController controller
I1203 16:07:49.000050       6 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1203 16:07:49.000154       6 cache.go:39] Caches are synced for autoregister controller
I1203 16:07:49.000221       6 shared_informer.go:230] Caches are synced for crd-autoregister 
I1203 16:07:49.000550       6 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
E1203 16:07:49.038383       6 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
E1203 16:07:49.042789       6 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.19.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I1203 16:07:49.898479       6 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1203 16:07:49.898646       6 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1203 16:07:49.904376       6 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I1203 16:07:49.908524       6 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I1203 16:07:49.908544       6 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1203 16:07:50.253918       6 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1203 16:07:50.291315       6 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1203 16:07:50.375684       6 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.19.0.2]
I1203 16:07:50.376711       6 controller.go:606] quota admission added evaluator for: endpoints
I1203 16:07:50.380074       6 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1203 16:07:50.915859       6 registry.go:150] Registering EvenPodsSpread predicate and priority function
I1203 16:07:50.915902       6 registry.go:150] Registering EvenPodsSpread predicate and priority function
time="2020-12-03T16:07:50.917559187Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
time="2020-12-03T16:07:50.918727861Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2020-12-03T16:07:50.930512167Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
I1203 16:07:50.934106       6 controllermanager.go:161] Version: v1.18.9+k3s1
I1203 16:07:50.935758       6 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
time="2020-12-03T16:07:50.941636570Z" level=info msg="Creating CRD addons.k3s.cattle.io"
I1203 16:07:50.945623       6 registry.go:150] Registering EvenPodsSpread predicate and priority function
I1203 16:07:50.945665       6 registry.go:150] Registering EvenPodsSpread predicate and priority function
W1203 16:07:50.950384       6 authorization.go:47] Authorization is disabled
W1203 16:07:50.950439       6 authentication.go:40] Authentication is disabled
I1203 16:07:50.950470       6 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
time="2020-12-03T16:07:50.953923630Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2020-12-03T16:07:50.964374325Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2020-12-03T16:07:51.470920158Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2020-12-03T16:07:51.471007997Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
time="2020-12-03T16:07:51.940847883Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
time="2020-12-03T16:07:51.977304529Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2020-12-03T16:07:51.988272695Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
time="2020-12-03T16:07:51.988780730Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2020-12-03T16:07:51.988970920Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2020-12-03T16:07:51.989122528Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
time="2020-12-03T16:07:51.989270641Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2020-12-03T16:07:51.989392851Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2020-12-03T16:07:51.989562638Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2020-12-03T16:07:51.989704953Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2020-12-03T16:07:51.989925557Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2020-12-03T16:07:51.990071348Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2020-12-03T16:07:51.990197074Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2020-12-03T16:07:51.990319454Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2020-12-03T16:07:51.990520135Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2020-12-03T16:07:52.091109100Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2020-12-03T16:07:52.091229283Z" level=info msg="Waiting for master node  startup: resource name may not be empty"
time="2020-12-03T16:07:52.091710358Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2020-12-03T16:07:52.091821879Z" level=info msg="To join node to cluster: k3s agent -s https://172.19.0.2:6443 -t ${NODE_TOKEN}"
I1203 16:07:52.121373       6 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
I1203 16:07:52.133953       6 controller.go:606] quota admission added evaluator for: serviceaccounts
2020-12-03 16:07:52.146079 I | http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate
time="2020-12-03T16:07:52.154186729Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2020-12-03T16:07:52.154216969Z" level=info msg="Run: k3s kubectl"
time="2020-12-03T16:07:52.154224715Z" level=info msg="k3s is up and running"
time="2020-12-03T16:07:52.154378124Z" level=info msg="module overlay was already loaded"
time="2020-12-03T16:07:52.154418250Z" level=info msg="module nf_conntrack was already loaded"
time="2020-12-03T16:07:52.154443766Z" level=info msg="module br_netfilter was already loaded"
2020-12-03 16:07:52.156024 I | http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate
2020-12-03 16:07:52.161532 I | http: TLS handshake error from 127.0.0.1:49150: remote error: tls: bad certificate
I1203 16:07:52.177894       6 controller.go:606] quota admission added evaluator for: deployments.apps
time="2020-12-03T16:07:52.192087774Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2020-12-03T16:07:52.192127700Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2020-12-03T16:07:52.192103963Z" level=info msg="Starting /v1, Kind=Node controller"
time="2020-12-03T16:07:52.192106764Z" level=info msg="Starting /v1, Kind=Service controller"
time="2020-12-03T16:07:52.192114509Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2020-12-03T16:07:52.192116800Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2020-12-03T16:07:52.255603971Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2020-12-03T16:07:52.255729246Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2020-12-03T16:07:52.255842860Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
time="2020-12-03T16:07:52.350834576Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2020-12-03T16:07:52.358136908Z" level=info msg="Active TLS secret k3s-serving (ver=182) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.19.0.2:172.19.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:df2ece2e551e24fb8e81154f0d8023b5da59d8358726e78ba0a117279ccdcada]"
I1203 16:07:52.410742       6 request.go:621] Throttling request took 1.044629112s, request: GET:https://127.0.0.1:6444/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I1203 16:07:52.415369       6 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
I1203 16:07:53.114081       6 plugins.go:100] No cloud provider specified.
I1203 16:07:53.118575       6 shared_informer.go:223] Waiting for caches to sync for tokens
I1203 16:07:53.218767       6 shared_informer.go:230] Caches are synced for tokens 
time="2020-12-03T16:07:53.256540686Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
I1203 16:07:53.297790       6 trace.go:116] Trace[325877377]: "Create" url:/apis/k3s.cattle.io/v1/namespaces/kube-system/addons,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-12-03 16:07:52.592287061 +0000 UTC m=+10.980395300) (total time: 705.407252ms):
Trace[325877377]: [705.16385ms] [704.921366ms] Object stored in database
time="2020-12-03T16:07:53.306017141Z" level=info msg="Waiting for master node k3d-dev-server-0 startup: nodes \"k3d-dev-server-0\" not found"
time="2020-12-03T16:07:53.306393101Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0"
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I1203 16:07:53.314150       6 controllermanager.go:120] Version: v1.18.9+k3s1
W1203 16:07:53.314173       6 controllermanager.go:132] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
I1203 16:07:53.319026       6 node_controller.go:110] Sending events to api server.
I1203 16:07:53.319114       6 controllermanager.go:247] Started "cloud-node"
I1203 16:07:53.321688       6 node_lifecycle_controller.go:78] Sending events to api server
I1203 16:07:53.321854       6 controllermanager.go:247] Started "cloud-node-lifecycle"
E1203 16:07:53.327980       6 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers
W1203 16:07:53.328000       6 controllermanager.go:244] Skipping "service"
W1203 16:07:53.328009       6 core.go:108] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
W1203 16:07:53.328015       6 controllermanager.go:244] Skipping "route"
E1203 16:07:53.342815       6 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1203 16:07:53.342941       6 controllermanager.go:533] Started "namespace"
I1203 16:07:53.342969       6 namespace_controller.go:200] Starting namespace controller
I1203 16:07:53.342983       6 shared_informer.go:223] Waiting for caches to sync for namespace
I1203 16:07:53.357010       6 controllermanager.go:533] Started "daemonset"
I1203 16:07:53.357215       6 daemon_controller.go:286] Starting daemon sets controller
I1203 16:07:53.357253       6 shared_informer.go:223] Waiting for caches to sync for daemon sets
I1203 16:07:53.372809       6 controllermanager.go:533] Started "csrcleaner"
I1203 16:07:53.372902       6 cleaner.go:82] Starting CSR cleaner controller
I1203 16:07:53.388564       6 controllermanager.go:533] Started "endpoint"
I1203 16:07:53.388711       6 endpoints_controller.go:181] Starting endpoint controller
I1203 16:07:53.388744       6 shared_informer.go:223] Waiting for caches to sync for endpoint
I1203 16:07:53.403379       6 controllermanager.go:533] Started "replicationcontroller"
I1203 16:07:53.403462       6 replica_set.go:182] Starting replicationcontroller controller
I1203 16:07:53.403486       6 shared_informer.go:223] Waiting for caches to sync for ReplicationController
I1203 16:07:53.423987       6 trace.go:116] Trace[1349030457]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-12-03 16:07:52.420582014 +0000 UTC m=+10.808690254) (total time: 1.003361285s):
Trace[1349030457]: [1.003253236s] [1.003139643s] Object stored in database
E1203 16:07:53.445716       6 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1203 16:07:53.578492       6 controller.go:606] quota admission added evaluator for: jobs.batch
E1203 16:07:53.873367       6 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1203 16:07:53.876164       6 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1203 16:07:53.892595       6 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2020-12-03T16:07:54.257402453Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
W1203 16:07:54.285598       6 handler_proxy.go:102] no RequestInfo found in the context
E1203 16:07:54.285771       6 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1203 16:07:54.285804       6 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
time="2020-12-03T16:07:54.311564393Z" level=info msg="Waiting for master node k3d-dev-server-0 startup: nodes \"k3d-dev-server-0\" not found"
E1203 16:07:54.328826       6 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1203 16:07:54.334571       6 garbagecollector.go:133] Starting garbage collector controller
I1203 16:07:54.334615       6 shared_informer.go:223] Waiting for caches to sync for garbage collector
I1203 16:07:54.334725       6 graph_builder.go:282] GraphBuilder running
I1203 16:07:54.334919       6 controllermanager.go:533] Started "garbagecollector"
I1203 16:07:54.341589       6 controllermanager.go:533] Started "csrapproving"
W1203 16:07:54.341625       6 controllermanager.go:512] "bootstrapsigner" is disabled
I1203 16:07:54.341653       6 certificate_controller.go:119] Starting certificate controller "csrapproving"
I1203 16:07:54.341689       6 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
I1203 16:07:54.361259       6 controllermanager.go:533] Started "pv-protection"
W1203 16:07:54.361326       6 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I1203 16:07:54.361373       6 pv_protection_controller.go:83] Starting PV protection controller
I1203 16:07:54.361399       6 shared_informer.go:223] Waiting for caches to sync for PV protection
I1203 16:07:54.379489       6 controllermanager.go:533] Started "serviceaccount"
I1203 16:07:54.379591       6 serviceaccounts_controller.go:117] Starting service account controller
I1203 16:07:54.379616       6 shared_informer.go:223] Waiting for caches to sync for service account
I1203 16:07:54.412479       6 controllermanager.go:533] Started "horizontalpodautoscaling"
I1203 16:07:54.412636       6 horizontal.go:169] Starting HPA controller
I1203 16:07:54.412672       6 shared_informer.go:223] Waiting for caches to sync for HPA
I1203 16:07:54.429827       6 node_lifecycle_controller.go:78] Sending events to api server
E1203 16:07:54.429939       6 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W1203 16:07:54.429970       6 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I1203 16:07:54.443217       6 controllermanager.go:533] Started "persistentvolume-binder"
I1203 16:07:54.443312       6 pv_controller_base.go:295] Starting persistent volume controller
I1203 16:07:54.443341       6 shared_informer.go:223] Waiting for caches to sync for persistent volume
I1203 16:07:54.459438       6 controllermanager.go:533] Started "clusterrole-aggregation"
I1203 16:07:54.459570       6 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I1203 16:07:54.459609       6 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator
I1203 16:07:54.477334       6 controllermanager.go:533] Started "replicaset"
I1203 16:07:54.477507       6 replica_set.go:182] Starting replicaset controller
I1203 16:07:54.477546       6 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
I1203 16:07:54.491987       6 controllermanager.go:533] Started "ttl"
I1203 16:07:54.492017       6 ttl_controller.go:118] Starting TTL controller
I1203 16:07:54.492026       6 shared_informer.go:223] Waiting for caches to sync for TTL
I1203 16:07:54.499233       6 node_lifecycle_controller.go:384] Sending events to api server.
I1203 16:07:54.499658       6 taint_manager.go:163] Sending events to api server.
I1203 16:07:54.499826       6 node_lifecycle_controller.go:512] Controller will reconcile labels.
I1203 16:07:54.499885       6 controllermanager.go:533] Started "nodelifecycle"
W1203 16:07:54.499912       6 controllermanager.go:525] Skipping "ttl-after-finished"
I1203 16:07:54.499955       6 node_lifecycle_controller.go:546] Starting node controller
I1203 16:07:54.499991       6 shared_informer.go:223] Waiting for caches to sync for taint
I1203 16:07:54.561990       6 controllermanager.go:533] Started "disruption"
I1203 16:07:54.562045       6 disruption.go:331] Starting disruption controller
I1203 16:07:54.562078       6 shared_informer.go:223] Waiting for caches to sync for disruption
I1203 16:07:54.615502       6 controllermanager.go:533] Started "csrsigning"
W1203 16:07:54.615547       6 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W1203 16:07:54.615561       6 controllermanager.go:525] Skipping "route"
I1203 16:07:54.615582       6 certificate_controller.go:119] Starting certificate controller "csrsigning"
I1203 16:07:54.615622       6 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
I1203 16:07:54.615664       6 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key
I1203 16:07:55.208954       6 trace.go:116] Trace[360670374]: "Create" url:/apis/batch/v1/namespaces/kube-system/jobs,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-12-03 16:07:53.564301081 +0000 UTC m=+11.952409384) (total time: 1.644551429s):
Trace[360670374]: [1.643191471s] [1.631424149s] Object stored in database
E1203 16:07:55.226819       6 resource_quota_controller.go:161] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1203 16:07:55.227003       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I1203 16:07:55.227093       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I1203 16:07:55.227171       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I1203 16:07:55.227217       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I1203 16:07:55.227270       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I1203 16:07:55.227327       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I1203 16:07:55.227453       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I1203 16:07:55.228049       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I1203 16:07:55.228154       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I1203 16:07:55.228236       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I1203 16:07:55.228306       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I1203 16:07:55.228381       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I1203 16:07:55.228496       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I1203 16:07:55.228569       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I1203 16:07:55.228998       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I1203 16:07:55.229108       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I1203 16:07:55.229161       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I1203 16:07:55.229213       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I1203 16:07:55.229277       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I1203 16:07:55.229346       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I1203 16:07:55.229444       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I1203 16:07:55.229514       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I1203 16:07:55.229578       6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I1203 16:07:55.229613       6 controllermanager.go:533] Started "resourcequota"
I1203 16:07:55.229670       6 resource_quota_controller.go:272] Starting resource quota controller
I1203 16:07:55.229708       6 shared_informer.go:223] Waiting for caches to sync for resource quota
I1203 16:07:55.229744       6 resource_quota_monitor.go:303] QuotaMonitor running
I1203 16:07:55.269716       6 controllermanager.go:533] Started "statefulset"
I1203 16:07:55.269764       6 stateful_set.go:146] Starting stateful set controller
I1203 16:07:55.269786       6 shared_informer.go:223] Waiting for caches to sync for stateful set
I1203 16:07:55.278023       6 node_ipam_controller.go:94] Sending events to api server.
time="2020-12-03T16:07:55.281248575Z" level=info msg="Connecting to proxy" url="wss://172.19.0.2:6443/v1-k3s/connect"
time="2020-12-03T16:07:55.288128885Z" level=info msg="Handling backend connection request [k3d-dev-server-0]"
time="2020-12-03T16:07:55.289288493Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2020-12-03T16:07:55.290758723Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-dev-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2020-12-03T16:07:55.290990849Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-dev-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 16:07:55.291155       6 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 16:07:55.292194       6 server.go:413] Version: v1.18.9+k3s1
I1203 16:07:55.300038       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
W1203 16:07:55.302595       6 fs.go:206] stat failed on /dev/mapper/root with error: no such file or directory
time="2020-12-03T16:07:55.306158736Z" level=info msg="waiting for node k3d-dev-server-0: nodes \"k3d-dev-server-0\" not found"
W1203 16:07:55.311202       6 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1203 16:07:55.312119       6 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I1203 16:07:55.313140       6 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: []
I1203 16:07:55.313171       6 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd SystemCgroupsName: KubeletCgroupsName:/systemd ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1203 16:07:55.313347       6 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
I1203 16:07:55.313370       6 container_manager_linux.go:312] [topologymanager] Initializing Topology Manager with none policy
I1203 16:07:55.313386       6 container_manager_linux.go:317] Creating device plugin manager: true
W1203 16:07:55.313654       6 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
W1203 16:07:55.313786       6 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
I1203 16:07:55.313890       6 kubelet.go:317] Watching apiserver
time="2020-12-03T16:07:55.316370678Z" level=info msg="Waiting for master node k3d-dev-server-0 startup: nodes \"k3d-dev-server-0\" not found"
W1203 16:07:55.326784       6 proxier.go:625] Failed to read file /lib/modules/5.9.3-zen1-1-zen/modules.builtin with error open /lib/modules/5.9.3-zen1-1-zen/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I1203 16:07:55.327099       6 kuberuntime_manager.go:217] Container runtime containerd initialized, version: v1.3.3-k3s2, apiVersion: v1alpha2
W1203 16:07:55.327502       6 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I1203 16:07:55.328161       6 server.go:1124] Started kubelet
I1203 16:07:55.328268       6 server.go:145] Starting to listen on 0.0.0.0:10250
W1203 16:07:55.329498       6 fs.go:540] stat failed on /dev/mapper/root with error: no such file or directory
E1203 16:07:55.329544       6 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": failed to get device for dir "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": could not find device with major: 0, minor: 27 in cached partitions map.
E1203 16:07:55.329583       6 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I1203 16:07:55.330851       6 server.go:393] Adding debug handlers to kubelet server.
I1203 16:07:55.332611       6 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I1203 16:07:55.332840       6 volume_manager.go:265] Starting Kubelet Volume Manager
I1203 16:07:55.333053       6 desired_state_of_world_populator.go:139] Desired state populator starts to run
E1203 16:07:55.344404       6 controller.go:228] failed to get node "k3d-dev-server-0" when trying to set owner ref to the node lease: nodes "k3d-dev-server-0" not found
I1203 16:07:55.354346       6 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
I1203 16:07:55.356824       6 cpu_manager.go:184] [cpumanager] starting with none policy
I1203 16:07:55.356846       6 cpu_manager.go:185] [cpumanager] reconciling every 10s
I1203 16:07:55.356879       6 state_mem.go:36] [cpumanager] initializing new in-memory state store
I1203 16:07:55.409128       6 status_manager.go:158] Starting to sync pod status with apiserver
I1203 16:07:55.409190       6 kubelet.go:1824] Starting kubelet main sync loop.
E1203 16:07:55.409314       6 kubelet.go:1848] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I1203 16:07:55.433005       6 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
E1203 16:07:55.433133       6 kubelet.go:2270] node "k3d-dev-server-0" not found
I1203 16:07:55.435273       6 kubelet_node_status.go:70] Attempting to register node k3d-dev-server-0
E1203 16:07:55.509545       6 kubelet.go:1848] skipping pod synchronization - container runtime status check may not have completed yet
E1203 16:07:55.533296       6 kubelet.go:2270] node "k3d-dev-server-0" not found
W1203 16:07:55.544094       6 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 16:07:55.544839       6 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 16:07:55.545573       6 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 16:07:55.546206       6 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 16:07:55.546735       6 proxier.go:635] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I1203 16:07:55.631474       6 policy_none.go:43] [cpumanager] none policy: Start
W1203 16:07:55.631619       6 fs.go:540] stat failed on /dev/mapper/root with error: no such file or directory
F1203 16:07:55.631653       6 kubelet.go:1386] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 27 in cached partitions map

@iwilltry42
Member

@birdiesanders, are you on btrfs or something similar?
You may want to have a look at our FAQ and try the suggestions there: https://k3d.io/faq/faq/#issues-with-btrfs (it's about /dev/mapper).
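For context, the FAQ entry linked above essentially suggests mounting the host's /dev/mapper into the node containers, so that the kubelet/cadvisor can stat the root device (the "stat failed on /dev/mapper/root" lines in the log above); a minimal sketch, with the cluster name dev assumed:

# expose the host's device-mapper nodes inside the k3s node containers
k3d cluster create dev -v /dev/mapper:/dev/mapper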

@birdiesanders
Author

That workaround works on Arch but fails on Fedora, and both my Arch and Fedora systems are on btrfs.

@birdiesanders
Author

Error from Fedora:

Failed to find cpuset cgroup, you may need to add "cgroup_enable=cpuset" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
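For reference (not part of the original report), whether the cpuset controller is actually available and delegated on a cgroup v2 host like Fedora 33 can be checked directly in the unified hierarchy:

# controllers offered by the kernel on the unified (v2) hierarchy
cat /sys/fs/cgroup/cgroup.controllers

# controllers delegated to child cgroups; cpuset has to show up here
# for containers to be given a cpuset cgroup
cat /sys/fs/cgroup/cgroup.subtree_control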

@iwilltry42
Member

Okay, so this is definitely not a bug in k3d, and it also cannot be worked around with k3d features.
It's probably caused by Fedora using cgroups v2, for which Docker support is still experimental (as stated at the end of your docker info output), combined with the fact that you're using an RC version of Docker.
Here are some more general solutions for Docker on Fedora, related to cgroups v2: https://www.linuxuprising.com/2019/11/how-to-install-and-use-docker-on-fedora.html?m=1
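The advice in that article (and most guidance for Docker on Fedora 31+ at the time) boils down to booting the system back into the legacy cgroup v1 hierarchy until Docker's cgroup v2 support stabilizes; a rough sketch, assuming a stock Fedora install with grubby available:

# switch all installed kernels back to the legacy cgroup v1 hierarchy
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"

# reboot for the new kernel command line to take effect
sudo reboot

Docker 20.10 eventually shipped non-experimental cgroup v2 support, so newer installs may not need this workaround at all.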

@iwilltry42 iwilltry42 added not a bug Luckily this is not a bug with k3d after all ¯\_(ツ)_/¯ and removed bug Something isn't working labels Jan 6, 2021
@iwilltry42 iwilltry42 changed the title [BUG] Fresh install of k3s on Fedora 33 with latest docker fails to create clusters. [QUESTION] Fresh install of k3s on Fedora 33 with latest docker fails to create clusters. Jan 6, 2021
@iwilltry42
Member

Hi @birdiesanders, did you get any further with this issue?
