[BUG] k3d v5.1.0 not starting on Docker 20.10.11 #873

Closed
applike-ss opened this issue Nov 30, 2021 · 11 comments
Assignee: iwilltry42
Labels: bug (Something isn't working) · not a bug (Luckily this is not a bug with k3d after all ¯\_(ツ)_/¯) · runtime (Issue with the container runtime (docker))
Milestone: v5.1.1

@applike-ss

applike-ss commented Nov 30, 2021

What did you do

  • How was the cluster created?

    • k3d cluster create (the appended output was generated with --trace)
  • What did you do afterwards?

    • heavy breathing

What did you expect to happen

Cluster should come up and be usable

Screenshots or terminal output

DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.11 OSType:linux OS:Ubuntu 20.04.3 LTS Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: [] 
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.21.5-k3s2
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: "" 
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple' 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.21.5-k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
========================== 
TRAC[0000] VolumeFilterMap: map[]                       
TRAC[0000] PortFilterMap: map[]                         
TRAC[0000] K3sNodeLabelFilterMap: map[]                 
TRAC[0000] RuntimeLabelFilterMap: map[]                 
TRAC[0000] EnvFilterMap: map[]                          
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:34443} Image:docker.io/rancher/k3s:v1.21.5-k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
========================== 
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-k3s-default-server-0
settings:
  workerConnections: 1024 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000438a80 0xc000438c00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000135040 ServerLoadBalancer:0xc000202630 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000438a80 0xc000438c00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000135040 ServerLoadBalancer:0xc000202630 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created volume 'k3d-k3s-default-images'      
TRAC[0000] Using Registries: []                         
TRAC[0000] 
===== Creating Cluster =====

Runtime:
{}

Cluster:
&{Name:k3s-default Network:{Name:k3d-k3s-default ID:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 External:false IPAM:{IPPrefix:172.24.0.0/16 IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000438a80 0xc000438c00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000135040 ServerLoadBalancer:0xc000202630 ImageVolume:k3d-k3s-default-images}

ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 k3d.cluster.network.iprange:172.24.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}

============================
	 
INFO[0000] Starting new tools node...                   
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
TRAC[0000] Creating node from spec
&{Name:k3d-k3s-default-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.1.0 Volumes:[k3d-k3s-default-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:k3s-default k3d.version:v5.1.0] K3sNodeLabels:map[] Networks:[k3d-k3s-default] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]} 
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-k3s-default-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.1.0 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:k3s-default k3d.role:noRole k3d.version:v5.1.0] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-k3s-default-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc000423bbf} NetworkingConfig:{EndpointsConfig:map[k3d-k3s-default:0xc0004e2000]}} 
DEBU[0000] Created container k3d-k3s-default-tools (ID: 0a322cda339b9cc39f57fdac61235c0c876bba97588b3d6b77e1d4a668ce802c) 
DEBU[0000] Node k3d-k3s-default-tools Start Time: 2021-12-03 10:27:34.147702107 +0100 CET m=+0.138440647 
TRAC[0000] Starting node 'k3d-k3s-default-tools'        
INFO[0000] Starting Node 'k3d-k3s-default-tools'        
DEBU[0000] Truncated 2021-12-03 09:27:34.378413953 +0000 UTC to 2021-12-03 09:27:34 +0000 UTC 
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
TRAC[0001] Creating node from spec
&{Name:k3d-k3s-default-server-0 Role:server Image:docker.io/rancher/k3s:v1.21.5-k3s2 Volumes:[k3d-k3s-default-images:/k3d/images] Env:[K3S_TOKEN=eAwSVOdRfjsZKJlfvovl] Cmd:[] Args:[] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 k3d.cluster.network.iprange:172.24.0.0/16 k3d.cluster.token:eAwSVOdRfjsZKJlfvovl k3d.cluster.url:https://k3d-k3s-default-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-k3s-default] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000135040} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]} 
DEBU[0001] DockerHost:                                  
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-k3s-default-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=eAwSVOdRfjsZKJlfvovl K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3s:v1.21.5-k3s2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 k3d.cluster.network.iprange:172.24.0.0/16 k3d.cluster.token:eAwSVOdRfjsZKJlfvovl k3d.cluster.url:https://k3d-k3s-default-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:34443 k3d.version:v5.1.0] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-k3s-default-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc0002503df} NetworkingConfig:{EndpointsConfig:map[k3d-k3s-default:0xc0006bc000]}} 
DEBU[0001] Created container k3d-k3s-default-server-0 (ID: 1bc798f12b22e00ef67e40ecca9adbcc059f34674bef2c70a69a9bf3752fc995) 
DEBU[0001] Created node 'k3d-k3s-default-server-0'      
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
TRAC[0001] Creating node from spec
&{Name:k3d-k3s-default-serverlb Role:loadbalancer Image:docker.io/rancher/k3d-proxy:5.1.0 Volumes:[k3d-k3s-default-images:/k3d/images] Env:[] Cmd:[] Args:[] Ports:map[6443:[{HostIP:0.0.0.0 HostPort:34443}]] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 k3d.cluster.network.iprange:172.24.0.0/16 k3d.cluster.token:eAwSVOdRfjsZKJlfvovl k3d.cluster.url:https://k3d-k3s-default-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-k3s-default] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[{Stage:preStart Action:{Runtime:{} Content:[112 111 114 116 115 58 10 32 32 54 52 52 51 46 116 99 112 58 10 32 32 45 32 107 51 100 45 107 51 115 45 100 101 102 97 117 108 116 45 115 101 114 118 101 114 45 48 10 115 101 116 116 105 110 103 115 58 10 32 32 119 111 114 107 101 114 67 111 110 110 101 99 116 105 111 110 115 58 32 49 48 50 52 10] Dest:/etc/confd/values.yaml Mode:-rwxr--r-- Description:Write Loadbalancer Configuration}}]} 
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-k3s-default-serverlb Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[6443:{}] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-proxy:5.1.0 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:k3s-default k3d.cluster.imageVolume:k3d-k3s-default-images k3d.cluster.network:k3d-k3s-default k3d.cluster.network.external:false k3d.cluster.network.id:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 k3d.cluster.network.iprange:172.24.0.0/16 k3d.cluster.token:eAwSVOdRfjsZKJlfvovl k3d.cluster.url:https://k3d-k3s-default-server-0:6443 k3d.role:loadbalancer k3d.version:v5.1.0] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-k3s-default-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[6443:[{HostIP:0.0.0.0 HostPort:34443}]] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc0006aa8ff} NetworkingConfig:{EndpointsConfig:map[k3d-k3s-default:0xc0004e20c0]}} 
DEBU[0001] Created container k3d-k3s-default-serverlb (ID: f29cf40882540e61f06d4dbbcf452fc33af7ff5875508aaf67fc77ac9fe12079) 
DEBU[0001] Created loadbalancer 'k3d-k3s-default-serverlb' 
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
INFO[0001] Using the k3d-tools node to gather environment information 
TRAC[0001] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-tools 
DEBU[0001] no netlabel present on container /k3d-k3s-default-tools 
DEBU[0001] failed to get IP for container /k3d-k3s-default-tools as we couldn't find the cluster network 
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
TRAC[0001] GOOS: linux / Runtime OS: linux (Ubuntu 20.04.3 LTS) 
INFO[0001] HostIP: using network gateway...             
INFO[0001] Starting cluster 'k3s-default'               
INFO[0001] Starting servers...                          
DEBU[0001] Deleting node k3d-k3s-default-tools ...      
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] No fix enabled.                              
DEBU[0001] Node k3d-k3s-default-server-0 Start Time: 2021-12-03 10:27:35.278306546 +0100 CET m=+1.269045086 
TRAC[0001] Starting node 'k3d-k3s-default-server-0'     
INFO[0001] Starting Node 'k3d-k3s-default-server-0'     
TRAC[0001] [Docker] Deleted Container k3d-k3s-default-tools 
DEBU[0001] Truncated 2021-12-03 09:27:35.586266359 +0000 UTC to 2021-12-03 09:27:35 +0000 UTC 
DEBU[0001] Waiting for node k3d-k3s-default-server-0 to get ready (Log: 'k3s is up and running') 
TRAC[0001] NodeWaitForLogMessage: Node 'k3d-k3s-default-server-0' waiting for log message 'k3s is up and running' since '2021-12-03 09:27:35 +0000 UTC' 
ERRO[0003] Failed Cluster Start: Failed to start server k3d-k3s-default-server-0: Node k3d-k3s-default-server-0 failed to get ready: error waiting for log line `k3s is up and running` from node 'k3d-k3s-default-server-0': stopped returning log lines 
ERRO[0003] Failed to create cluster >>> Rolling Back    
INFO[0003] Deleting cluster 'k3s-default'               
TRAC[0003] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-serverlb 
TRAC[0003] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-server-0 
DEBU[0003] Cluster Details: &{Name:k3s-default Network:{Name:k3d-k3s-default ID:47c1411a64628ae26a3e73303f2949bceeb6371f35e6f4b3a6c74aca2be985d6 External:false IPAM:{IPPrefix:172.24.0.0/16 IPsUsed:[] Managed:false} Members:[]} Token:eAwSVOdRfjsZKJlfvovl Nodes:[0xc000438a80 0xc000438c00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000135040 ServerLoadBalancer:0xc000202630 ImageVolume:k3d-k3s-default-images} 
DEBU[0003] Deleting node k3d-k3s-default-serverlb ...   
TRAC[0003] [Docker] Deleted Container k3d-k3s-default-serverlb 
DEBU[0003] Deleting node k3d-k3s-default-server-0 ...   
TRAC[0004] [Docker] Deleted Container k3d-k3s-default-server-0 
INFO[0004] Deleting cluster network 'k3d-k3s-default'   
INFO[0004] Deleting image volume 'k3d-k3s-default-images' 
FATA[0004] Cluster creation FAILED, all changes have been rolled back!

Which OS & Architecture

  • Linux 5.15.5

Which version of k3d

  • k3d version v5.1.0
  • k3s version v1.21.5-k3s2 (default)

Which version of docker

output of docker version:

Client: Docker Engine - Community
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.9
 Git commit:        dea9396
 Built:             Thu Nov 18 00:37:06 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:35:15 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

and docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
  scan: Docker Scan (Docker Inc., v0.9.0)

Server:
 Containers: 15
  Running: 0
  Paused: 0
  Stopped: 15
 Images: 343
 Server Version: 20.10.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: nvidia runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.15.5-xanmod1-tt
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 62.79GiB
 Name: workhorse
 ID: VYZ5:2OMJ:CDIJ:SPHQ:6RQN:MPD7:JUNP:27BG:XB4C:HPET:GMZS:UMMZ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: justdice
 Registry: https://index.docker.io/v1/
 Labels:
  provider=generic
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support

@applike-ss added the bug (Something isn't working) label on Nov 30, 2021
@applike-ss changed the title from "[BUG] k3d v5.1.0 not starting on 20.10.11" to "[BUG] k3d v5.1.0 not starting on Docker 20.10.11" on Dec 1, 2021
@iwilltry42 self-assigned this on Dec 2, 2021
@iwilltry42 added this to the v5.1.1 milestone on Dec 2, 2021
@iwilltry42
Member

Hi @applike-ss , thanks for opening this issue!
Now those logs look super weird O.o

TRAC[0001] NodeWaitForLogMessage: Node 'k3d-k3s-default-server-0' waiting for log message 'k3s is up and running' since '2021-11-30 11:31:51 +0000 UTC'
INFO[0003] Deleting cluster 'k3s-default'

Where the hell does this jump from "creating" to "deleting" come from? I cannot see a way this could happen without any error/fatal log message 🤔
Are there any log lines missing from your output?

I'm on the exact same Kernel, Docker Client/Server, k3d/K3s versions on Ubuntu (PopOS), without any problems 🤔

@applike-ss
Author

@iwilltry42 I submitted the complete log, nothing's stripped out. I had a "not starting up" issue with k3d version 4 as well, and since I wanted to try some stuff out on a temporary k8s cluster, I thought I'd try the update and check whether that works. Now that it isn't working (and I'm getting more devops work recently), it would be nice to get it working somehow.

Is there anything I can try out, or any information that I could forward to you?

@iwilltry42
Member

So I'm really lost here. There's no indication of failure, k3d doesn't log anything about its intention to rollback, there's no k3d cluster delete command issued and there's no final log line saying that the cluster was fully removed.
Something's really fishy here.
Could be connected to the custom kernel you're using, but even then.. where are the logs? 🤷‍♂️

Can you maybe try again with --no-rollback appended to the cluster create command and then check docker ps -a for the server-0 container and get the logs from it?

@applike-ss
Author

I just checked again and my output redirection was broken. I've updated the logs in the first post. SORRY!!!
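
For reference, a minimal sketch of how to capture the complete trace output next time (assuming k3d writes its log lines to stderr, both streams need to be redirected; the file name is just an example):

k3d cluster create --trace > k3d-create.log 2>&1   # captures both stdout and stderr of the trace run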

@iwilltry42
Member

Ah, now that makes some more sense 😁
Then please run k3d cluster create --trace --no-rollback and, once it has finished (and failed), fetch the logs from the failed container with docker logs k3d-k3s-default-server-0 and paste them here for inspection 👍
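
Put concretely, a sketch of the suggested steps (container and cluster names assume the default k3s-default cluster used above):

k3d cluster create --trace --no-rollback                # keep the failed containers around instead of rolling back
docker ps -a --filter name=k3d-k3s-default-server-0     # confirm the exited server container is still there
docker logs k3d-k3s-default-server-0                    # fetch its logs for inspection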

@applike-ss
Author

These are the logs:

time="2021-12-03T09:32:13.999417438Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:14.025676132Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:14.025690318Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:14.034020999Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:14.034885000Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:14.043549140Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.043896877Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.044228272Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.044602268Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.044935176Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.045269587Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.045621912Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.046185926Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.046737377Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.047281534Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.047590737Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.048114806Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.343093231Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.343426601Z" level=info msg="Active TLS secret  (ver=) (count 10): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.26.0.3:172.26.0.3 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=03F7DE1B39B8E11459A58B40E9997ABEBC255F34]"
time="2021-12-03T09:32:14.347475692Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:14.348109       7 server.go:656] external host was not specified, using 172.26.0.3
I1203 09:32:14.348248       7 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:14.572112       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:14.572778       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:14.572786       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:14.573418       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:14.573424       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:14.585721       7 instance.go:283] Using reconciler: lease
I1203 09:32:14.609202       7 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:14.893117       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.900177       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.902598       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.906576       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.908215       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.912147       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:14.912154       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:14.918947       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:14.918956       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:14.925951867Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:14.925997994Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:14.926308550Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:14.926620298Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:14.927054888Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:14.927078102Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.3:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:14.927618642Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:14.927685268Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:14.941807679Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:14 http: TLS handshake error from 127.0.0.1:57096: remote error: tls: bad certificate"
time="2021-12-03T09:32:14.944417153Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:14 http: TLS handshake error from 127.0.0.1:57102: remote error: tls: bad certificate"
time="2021-12-03T09:32:14.949799140Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.951117538Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.979282639Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:14.979297517Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:14.979305472Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:14.979314339Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:14.982543603Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:14.982597886Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:14.982610750Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:14.982630908Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:14.983145760Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:14.983234607Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:15.984769232Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:15.992717953Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:15.994211702Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:15.994503633Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:15.994543939Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:15.994919868Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:15.995128       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:15.995421       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:15.995424       7 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.995869       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996118       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996366       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996617       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996863       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:16.000413       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:16.009533       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:16.009537       7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
time="2021-12-03T09:32:16.550998125Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:16.551165060Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-03T09:32:16.558429649Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.559220382Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.559939328Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.560642245Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:16.560669717Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:16.560735240Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:16.560796957Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.560947691Z" level=info msg="Active TLS secret  (ver=) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.26.0.2:172.26.0.2 listener.cattle.io/cn-172.26.0.3:172.26.0.3 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=BD9E13ED38C7AED3A9E1D0C6473F6E14913CEDD7]"
time="2021-12-03T09:32:16.561514731Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:16.561602607Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:16.562459       7 server.go:656] external host was not specified, using 172.26.0.2
I1203 09:32:16.562639       7 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:16.564804       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:16.565451       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:16.565457       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:16.566118       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:16.566128       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:16.578855       7 instance.go:283] Using reconciler: lease
I1203 09:32:16.598197       7 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:16.872350       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.879085       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.881441       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.885385       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.887035       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.890216       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:16.890223       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:16.899823       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:16.899842       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:16.908305283Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:16.908360077Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:16.908642660Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:16.908958586Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:16.909430206Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:16.909442589Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:16.910002546Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:16.910031941Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:16.919786830Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:16 http: TLS handshake error from 127.0.0.1:57134: remote error: tls: bad certificate"
time="2021-12-03T09:32:16.922155621Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:16 http: TLS handshake error from 127.0.0.1:57140: remote error: tls: bad certificate"
time="2021-12-03T09:32:16.927353630Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.928607737Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.947817388Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:16.947831595Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:16.947840221Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:16.947848076Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:16.950719895Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:16.950734022Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:16.950748369Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:16.950771934Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:16.951266036Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:16.951365293Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:17.952660126Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:17.958507621Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:17.959890541Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:17.960159638Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:17.960197921Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:17.960577858Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:17.960784       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:17.961100       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:17.961104       7 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.961712       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962097       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962389       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962662       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962948       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:17.967085       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:17.972283       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:17.972310       7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
time="2021-12-03T09:32:18.573446395Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:18.573579356Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-03T09:32:18.581904998Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:18.581920186Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:18.581970952Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:18.582678868Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:18.582746085Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:18.583376       6 server.go:656] external host was not specified, using 172.26.0.2
I1203 09:32:18.583505       6 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:18.585784       6 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:18.586453       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:18.586467       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:18.587101       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:18.587107       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:18.601636       6 instance.go:283] Using reconciler: lease
I1203 09:32:18.629524       6 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:18.888002       6 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.894811       6 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.897218       6 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.901211       6 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.902878       6 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.906051       6 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:18.906058       6 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:18.912941       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:18.912951       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:18.925635475Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:18.925692643Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:18.926014160Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:18.926355564Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:18.926863693Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:18.926875836Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:18.927459337Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:18.927500264Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:18.937576830Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:18 http: TLS handshake error from 127.0.0.1:57166: remote error: tls: bad certificate"
time="2021-12-03T09:32:18.939880437Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:18 http: TLS handshake error from 127.0.0.1:57174: remote error: tls: bad certificate"
time="2021-12-03T09:32:18.944735099Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:18 +0000 UTC"
time="2021-12-03T09:32:18.945966102Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:18 +0000 UTC"
time="2021-12-03T09:32:18.965182095Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:18.965197294Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:18.965209216Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:18.965219285Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:18.968103578Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:18.968141830Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:18.968192786Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:18.968204448Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:18.968770306Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:18.968862410Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:19.970762855Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:19.977056663Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:19.978778031Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:19.979052499Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:19.979096843Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:19.979455159Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:19.979641       6 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:19.979916       6 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:19.980090       6 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.980497       6 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.980720       6 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.980932       6 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.981136       6 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.981350       6 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:19.984732       6 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:19.990866       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:19.990917       6 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
time="2021-12-03T09:32:20.792536597Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:20.792689846Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-03T09:32:20.802483799Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:20.802500650Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:20.802552468Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:20.803246067Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:20.803313324Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:20.804068       7 server.go:656] external host was not specified, using 172.26.0.2
I1203 09:32:20.804206       7 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:20.806393       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:20.807175       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:20.807182       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:20.807844       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:20.807850       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:20.821644       7 instance.go:283] Using reconciler: lease
I1203 09:32:20.846286       7 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:21.121182       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.127971       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.130317       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.134255       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.135909       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.139070       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:21.139076       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:21.145928       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:21.145935       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:21.153049926Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:21.153095692Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:21.155299762Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:21.156066609Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:21.157583451Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:21.157618056Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:21.158814133Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:21.158930533Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:21.169507143Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:21 http: TLS handshake error from 127.0.0.1:57200: remote error: tls: bad certificate"
time="2021-12-03T09:32:21.172175298Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:21 http: TLS handshake error from 127.0.0.1:57206: remote error: tls: bad certificate"
time="2021-12-03T09:32:21.178178907Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:21 +0000 UTC"
time="2021-12-03T09:32:21.179659001Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:21 +0000 UTC"
time="2021-12-03T09:32:21.199093476Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:21.199121289Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:21.199136798Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:21.199151516Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:21.202299115Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:21.202323562Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:21.202354720Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:21.202382523Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:21.202901542Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:21.202995810Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:22.204432080Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:22.211662745Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:22.212862770Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:22.213158097Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:22.213205066Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:22.213588439Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:22.213813       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:22.214152       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:22.214260       7 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.214680       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.214892       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.215103       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.215278       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.215441       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:22.219852       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:22.227749       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:22.227801       7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"

@iwilltry42 added the "not a bug" and "runtime" labels on Dec 3, 2021
@iwilltry42 (Member) commented Dec 3, 2021

Ah, well there we go: E1203 09:32:22.227801 7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
A quick Google search for this error led me from StackOverflow to this potential workaround, which you may want to try 👍

EDIT: Actually I could've seen this from your docker info output as below, sorry!

WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support

EDIT 2: In your case it's probably related to this: https://github.com/xanmod/linux/issues/3
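
For anyone landing here with the same error, a minimal illustrative check (not part of the workaround linked above) for whether the cpu cgroup controller the kubelet complains about is actually available on the host:

# cgroup v1: each controller should be listed as enabled (last column = 1)
grep -E 'cpu|memory' /proc/cgroups
# and the cpu controller should actually be mounted
mount -t cgroup | grep -w cpu
# "mountpoint for cpu not found" corresponds to the second command printing nothing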

@applike-ss (Author)

Thanks for your help! I wonder if that is still up to date, because xanmod has moved from cacule to the task type scheduler. Will try it anyway and let you know :)

@applike-ss (Author) commented Dec 3, 2021

I booted with swapaccount=1, but still get the three warnings that you mentioned from docker info - and k3d still doesn't start (obviously for the same reasons).

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.15.5-xanmod1-tt root=UUID=b5bc341e-7e7f-490e-982f-d7773f56cd0e ro nosplash elevator=noop swapaccount=1 DEEPIN_GFXMODE=

EDIT: The workaround mentioned first also didn't work
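
Since the docker info warnings point at missing kernel features rather than at a boot flag, a quick illustrative check is whether the running kernel was built with CFS bandwidth control at all (the /proc/config.gz path assumes CONFIG_IKCONFIG_PROC is enabled; otherwise use the distro-shipped config):

# these cgroup cpu files only exist when the kernel provides CFS bandwidth / fair group scheduling
ls /sys/fs/cgroup/cpu/cpu.cfs_period_us /sys/fs/cgroup/cpu/cpu.shares
# check the kernel config directly
zgrep -E 'CONFIG_CFS_BANDWIDTH|CONFIG_FAIR_GROUP_SCHED' /proc/config.gz
grep -E 'CONFIG_CFS_BANDWIDTH|CONFIG_FAIR_GROUP_SCHED' /boot/config-$(uname -r)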

@iwilltry42 modified the milestones: v5.2.0, Backlog on Dec 3, 2021
@paul-civitas

I'm also affected by this.

time="2021-12-03T22:27:59.196695835Z" level=warning msg="Failed to find cpuset cgroup, you may need to add \"cgroup_enable=cpuset\" to your linux cmdline (/b
oot/cmdline.txt on a Raspberry Pi)"
time="2021-12-03T22:27:59.196725418Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your lin
ux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
time="2021-12-03T22:27:59.196767460Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your lin
ux cmdline (/boot/cmdline.txt on a Raspberry Pi)"

Looks like I picked the wrong day to update docker...
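
For completeness, the hint in that log message targets Raspberry Pi OS, where the flags belong on the single line in /boot/cmdline.txt. Roughly like this (illustrative only; back the file up first and reboot afterwards):

sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak
# append the cgroup flags to the end of the (single-line) kernel command line
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot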

@applike-ss (Author)

The solution for me was to switch to the linux-xanmod kernel package, which uses CFS, so I don't have the cacule/tt scheduler anymore - and k3d is working nicely so far :-)
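
A few illustrative sanity checks before retrying after a kernel switch like this (exact output will differ per setup):

uname -r                                  # should no longer show the -tt / cacule variant
docker info 2>&1 | grep -iE 'cfs|shares'  # the three "No cpu ..." warnings should be gone
k3d cluster create                        # then retry the cluster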
