fail fast if docker storage driver is btrfs. #7975

Closed

medyagh opened this issue May 2, 2020 · 10 comments
Labels
co/none-driver
kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-solution-message Issues where offering a solution for an error would be helpful
priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@medyagh
Member

medyagh commented May 2, 2020

minikube's "none" and "docker" drivers do not work on btrfs, even when you pass kubeadm.ignore-preflight-errors=SystemVerification, because kubeadm does not allow btrfs for docker.

It is better to detect this early and fail fast, rather than waste the user's time.
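A minimal sketch of the kind of pre-flight check proposed here (hypothetical shell, assuming only that docker is on the PATH; the --format template is the same mechanism minikube already uses to query the cgroup driver in the log below):

# Ask the docker daemon which storage driver it is using.
driver=$(docker info --format '{{.Driver}}')
if [ "$driver" = "btrfs" ]; then
  # kubeadm's SystemVerification rejects btrfs as a graph driver,
  # so bail out before spending minutes on kubeadm init.
  echo "docker is using the btrfs storage driver, which kubeadm does not support" >&2
  exit 1
fi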

medyagh@penguin:~$ sudo minikube start --driver=none --extra-config kubeadm.ignore-preflight-errors=SystemVerification --alsologtostderr
I0501 23:34:34.238934   25225 start.go:99] hostinfo: {"hostname":"penguin","uptime":1648,"bootTime":1588399626,"procs":91,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"10.3","kernelVersion":"5.4.35-03273-g0e9023d1e5a3","virtualizationSystem":"","virtualizationRole":"","hostid":"629203ba-896b-3b12-3697-5ec85e829442"}
I0501 23:34:34.241689   25225 start.go:109] virtualization:  
😄  minikube v1.10.0-beta.2 on Debian 10.3
I0501 23:34:34.246274   25225 notify.go:125] Checking for updates...
I0501 23:34:34.246818   25225 driver.go:253] Setting default libvirt URI to qemu:///system
✨  Using the none driver based on user configuration
I0501 23:34:34.257474   25225 start.go:206] selected driver: none
I0501 23:34:34.257487   25225 start.go:579] validating driver "none" against <nil>
I0501 23:34:34.257961   25225 start.go:585] status for none: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0501 23:34:34.258114   25225 start_flags.go:217] no existing cluster config was found, will generate one from the flags 
I0501 23:34:34.259587   25225 start_flags.go:231] Using suggested 2200MB memory alloc based on sys=6730MB, container=0MB
I0501 23:34:34.260476   25225 start_flags.go:553] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0501 23:34:34.264375   25225 profile.go:156] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0501 23:34:34.265182   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:34.266109   25225 cache.go:125] Successfully downloaded all kic artifacts
I0501 23:34:34.266185   25225 start.go:223] acquiring machines lock for minikube: {Name:mka00e65579c2b557a802898fd1cf03ec4ab30a1 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0501 23:34:34.266866   25225 start.go:227] acquired machines lock for "minikube" in 661.465µs
I0501 23:34:34.266893   25225 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:ignore-preflight-errors Value:SystemVerification}] ShouldLoadCachedImages:false EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}
I0501 23:34:34.267070   25225 start.go:104] createHost starting for "m01" (driver="none")
🤹  Running on localhost (CPUs=6, Memory=6730MB, Disk=14415MB) ...
I0501 23:34:34.274028   25225 exec_runner.go:49] Run: systemctl --version
I0501 23:34:34.284337   25225 start.go:140] libmachine.API.Create for "minikube" (driver="none")
I0501 23:34:34.284999   25225 client.go:161] LocalClient.Create starting
I0501 23:34:34.285137   25225 main.go:110] libmachine: Reading certificate data from /root/.minikube/certs/ca.pem
I0501 23:34:34.285207   25225 main.go:110] libmachine: Decoding PEM data...
I0501 23:34:34.285236   25225 main.go:110] libmachine: Parsing certificate...
I0501 23:34:34.285936   25225 main.go:110] libmachine: Reading certificate data from /root/.minikube/certs/cert.pem
I0501 23:34:34.286222   25225 main.go:110] libmachine: Decoding PEM data...
I0501 23:34:34.286341   25225 main.go:110] libmachine: Parsing certificate...
I0501 23:34:34.288033   25225 client.go:164] LocalClient.Create took 2.959876ms
I0501 23:34:34.288168   25225 start.go:145] duration metric: libmachine.API.Create for "minikube" took 3.889036ms
I0501 23:34:34.288279   25225 start.go:186] post-start starting for "minikube" (driver="none")
I0501 23:34:34.288353   25225 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0501 23:34:34.288573   25225 exec_runner.go:49] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0501 23:34:34.302467   25225 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
ℹ️  OS release is Debian GNU/Linux 10 (buster)
I0501 23:34:34.305417   25225 filesync.go:118] Scanning /root/.minikube/addons for local assets ...
I0501 23:34:34.305550   25225 filesync.go:118] Scanning /root/.minikube/files for local assets ...
I0501 23:34:34.305672   25225 start.go:189] post-start completed in 17.319828ms
I0501 23:34:34.306661   25225 start.go:107] duration metric: createHost completed in 39.570569ms
I0501 23:34:34.306785   25225 start.go:74] releasing machines lock for "minikube", held for 39.846447ms
I0501 23:34:34.307504   25225 profile.go:156] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0501 23:34:34.308003   25225 exec_runner.go:49] Run: sudo systemctl daemon-reload
I0501 23:34:34.308544   25225 exec_runner.go:49] Run: curl -sS -m 2 https://k8s.gcr.io/
I0501 23:34:34.431418   25225 exec_runner.go:49] Run: sudo systemctl start docker
I0501 23:34:34.456266   25225 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.8 ...
I0501 23:34:34.540414   25225 start.go:251] checking
I0501 23:34:34.540505   25225 exec_runner.go:49] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
    ▪ kubeadm.ignore-preflight-errors=SystemVerification
I0501 23:34:34.546689   25225 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0501 23:34:34.605816   25225 preload.go:113] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0501 23:34:34.606091   25225 exec_runner.go:49] Run: docker images --format {{.Repository}}:{{.Tag}}
I0501 23:34:34.683411   25225 docker.go:379] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.1
k8s.gcr.io/kube-apiserver:v1.18.1
k8s.gcr.io/kube-controller-manager:v1.18.1
k8s.gcr.io/kube-scheduler:v1.18.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0501 23:34:34.683518   25225 docker.go:384] gcr.io/k8s-minikube/storage-provisioner:v1.8.1 wasn't preloaded
I0501 23:34:34.683652   25225 exec_runner.go:49] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0501 23:34:34.698462   25225 store.go:62] repositories.json doesn't exist: sudo cat /var/lib/docker/image/overlay2/repositories.json: exit status 1
stdout:

stderr:
cat: /var/lib/docker/image/overlay2/repositories.json: No such file or directory
I0501 23:34:34.698653   25225 exec_runner.go:49] Run: which lz4
I0501 23:34:34.702508   25225 kubeadm.go:682] prelaoding failed, will try to load cached images: lz4
I0501 23:34:34.702672   25225 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:100.115.92.200 APIServerPort:8443 KubernetesVersion:v1.18.1 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:penguin DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "100.115.92.200"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:100.115.92.200 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0501 23:34:34.703038   25225 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 100.115.92.200
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "penguin"
  kubeletExtraArgs:
    node-ip: 100.115.92.200
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "100.115.92.200"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.1
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 100.115.92.200:10249

I0501 23:34:34.703193   25225 exec_runner.go:49] Run: docker info --format {{.CgroupDriver}}
I0501 23:34:34.858213   25225 kubeadm.go:718] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.1/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=penguin --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=100.115.92.200 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:ignore-preflight-errors Value:SystemVerification}] ShouldLoadCachedImages:false EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0501 23:34:34.858462   25225 exec_runner.go:49] Run: sudo ls /var/lib/minikube/binaries/v1.18.1
I0501 23:34:34.876205   25225 binaries.go:46] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.1: exit status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.18.1': No such file or directory

Initiating transfer...
I0501 23:34:34.876372   25225 exec_runner.go:49] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.1
I0501 23:34:34.891971   25225 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubeadm.sha256
I0501 23:34:34.892118   25225 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubelet.sha256
I0501 23:34:34.892456   25225 exec_runner.go:49] Run: sudo systemctl is-active --quiet service kubelet
I0501 23:34:34.892121   25225 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubectl.sha256
I0501 23:34:34.892151   25225 exec_runner.go:98] cp: /root/.minikube/cache/linux/v1.18.1/kubeadm --> /var/lib/minikube/binaries/v1.18.1/kubeadm (39813120 bytes)
I0501 23:34:34.893997   25225 exec_runner.go:98] cp: /root/.minikube/cache/linux/v1.18.1/kubectl --> /var/lib/minikube/binaries/v1.18.1/kubectl (44027904 bytes)
I0501 23:34:34.911603   25225 exec_runner.go:98] cp: /root/.minikube/cache/linux/v1.18.1/kubelet --> /var/lib/minikube/binaries/v1.18.1/kubelet (113271512 bytes)
I0501 23:34:35.163532   25225 exec_runner.go:49] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0501 23:34:35.182297   25225 exec_runner.go:98] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (1447 bytes)
I0501 23:34:35.184135   25225 exec_runner.go:91] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new, removing ...
I0501 23:34:35.184680   25225 exec_runner.go:98] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new (535 bytes)
I0501 23:34:35.185205   25225 exec_runner.go:91] found /lib/systemd/system/kubelet.service.new, removing ...
I0501 23:34:35.185669   25225 exec_runner.go:98] cp: memory --> /lib/systemd/system/kubelet.service.new (349 bytes)
I0501 23:34:35.186349   25225 start.go:251] checking
I0501 23:34:35.186713   25225 exec_runner.go:49] Run: grep 100.115.92.200	control-plane.minikube.internal$ /etc/hosts
I0501 23:34:35.190620   25225 exec_runner.go:49] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0501 23:34:35.198934   25225 exec_runner.go:49] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
I0501 23:34:35.230385   25225 exec_runner.go:49] Run: sudo systemctl enable kubelet
I0501 23:34:35.346091   25225 exec_runner.go:49] Run: sudo systemctl daemon-reload
I0501 23:34:35.453206   25225 exec_runner.go:49] Run: sudo systemctl start kubelet
I0501 23:34:35.477186   25225 kubeadm.go:788] reloadKubelet took 286.568931ms
I0501 23:34:35.477341   25225 certs.go:52] Setting up /root/.minikube/profiles/minikube for IP: 100.115.92.200
I0501 23:34:35.477673   25225 certs.go:169] skipping minikubeCA CA generation: /root/.minikube/ca.key
I0501 23:34:35.478092   25225 certs.go:169] skipping proxyClientCA CA generation: /root/.minikube/proxy-client-ca.key
I0501 23:34:35.478366   25225 certs.go:267] generating minikube-user signed cert: /root/.minikube/profiles/minikube/client.key
I0501 23:34:35.478527   25225 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/client.crt with IP's: []
I0501 23:34:35.612139   25225 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/client.crt ...
I0501 23:34:35.612237   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/client.crt: {Name:mk09878e812b07af637940656ec44996daba95aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:35.612597   25225 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/client.key ...
I0501 23:34:35.612649   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/client.key: {Name:mkf3b978f9858871583d8228f83a87a85b7d106f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:35.612993   25225 certs.go:267] generating minikube signed cert: /root/.minikube/profiles/minikube/apiserver.key.8de7bffc
I0501 23:34:35.613038   25225 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/apiserver.crt.8de7bffc with IP's: [100.115.92.200 10.96.0.1 127.0.0.1 10.0.0.1]
I0501 23:34:35.805565   25225 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/apiserver.crt.8de7bffc ...
I0501 23:34:35.805682   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/apiserver.crt.8de7bffc: {Name:mk7f9270b51a48b4e0bfc8e5c5b8e1df2205f5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:35.806052   25225 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/apiserver.key.8de7bffc ...
I0501 23:34:35.806108   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/apiserver.key.8de7bffc: {Name:mk7542af7c8858f530459a07867283ed1a65529c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:35.806908   25225 certs.go:278] copying /root/.minikube/profiles/minikube/apiserver.crt.8de7bffc -> /root/.minikube/profiles/minikube/apiserver.crt
I0501 23:34:35.807078   25225 certs.go:282] copying /root/.minikube/profiles/minikube/apiserver.key.8de7bffc -> /root/.minikube/profiles/minikube/apiserver.key
I0501 23:34:35.807168   25225 certs.go:267] generating aggregator signed cert: /root/.minikube/profiles/minikube/proxy-client.key
I0501 23:34:35.807184   25225 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0501 23:34:35.926186   25225 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/proxy-client.crt ...
I0501 23:34:35.926286   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcab3ddb18cd096d978df14d87a44e804896057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:35.927289   25225 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/proxy-client.key ...
I0501 23:34:35.927305   25225 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/proxy-client.key: {Name:mkaff5bf6f623f02423597918f5f33c2a99a3db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 23:34:35.927971   25225 certs.go:342] found cert: /root/.minikube/certs/root/.minikube/certs/ca-key.pem (1675 bytes)
I0501 23:34:35.928065   25225 certs.go:342] found cert: /root/.minikube/certs/root/.minikube/certs/ca.pem (1029 bytes)
I0501 23:34:35.928102   25225 certs.go:342] found cert: /root/.minikube/certs/root/.minikube/certs/cert.pem (1070 bytes)
I0501 23:34:35.928185   25225 certs.go:342] found cert: /root/.minikube/certs/root/.minikube/certs/key.pem (1679 bytes)
I0501 23:34:35.933632   25225 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0501 23:34:35.934641   25225 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0501 23:34:35.935210   25225 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0501 23:34:35.935352   25225 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0501 23:34:35.935419   25225 exec_runner.go:98] cp: /root/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0501 23:34:35.935602   25225 exec_runner.go:98] cp: /root/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0501 23:34:35.935944   25225 exec_runner.go:98] cp: /root/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0501 23:34:35.936021   25225 exec_runner.go:98] cp: /root/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0501 23:34:35.936122   25225 exec_runner.go:91] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0501 23:34:35.936364   25225 exec_runner.go:98] cp: /root/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0501 23:34:35.936684   25225 exec_runner.go:98] cp: memory --> /var/lib/minikube/kubeconfig (398 bytes)
I0501 23:34:35.936939   25225 exec_runner.go:49] Run: openssl version
I0501 23:34:35.942352   25225 exec_runner.go:49] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0501 23:34:35.958691   25225 exec_runner.go:49] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0501 23:34:35.962087   25225 certs.go:383] hashing: -rw-r--r-- 1 root root 1066 May  1 23:34 /usr/share/ca-certificates/minikubeCA.pem
I0501 23:34:35.962193   25225 exec_runner.go:49] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0501 23:34:35.967786   25225 exec_runner.go:49] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0501 23:34:35.981060   25225 kubeadm.go:279] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:ignore-preflight-errors Value:SystemVerification}] ShouldLoadCachedImages:false EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:100.115.92.200 Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0501 23:34:35.981243   25225 exec_runner.go:49] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0501 23:34:36.044269   25225 exec_runner.go:49] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0501 23:34:36.058147   25225 exec_runner.go:49] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0501 23:34:36.074816   25225 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
I0501 23:34:36.160495   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0501 23:34:36.186479   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0501 23:34:36.214187   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0501 23:34:36.248354   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0501 23:34:36.278490   25225 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"

I0501 23:36:35.136171   25225 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (1m58.85753545s)
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.35-03273-g0e9023d1e5a3
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_OVERLAYFS_FS: not set - Required for overlayfs.
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: btrfs
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [penguin localhost] and IPs [100.115.92.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [penguin localhost] and IPs [100.115.92.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0501 23:34:36.350828   25456 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-ethtool]: ethtool not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
	[WARNING SystemVerification]: unsupported graph driver: btrfs
W0501 23:34:40.105917   25456 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0501 23:34:40.107151   25456 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0501 23:36:35.137213   25225 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0501 23:36:35.669517   25225 exec_runner.go:49] Run: sudo systemctl stop -f kubelet
I0501 23:36:35.693458   25225 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0501 23:36:35.765658   25225 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
I0501 23:36:35.851419   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0501 23:36:35.879012   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0501 23:36:35.906020   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0501 23:36:35.935653   25225 exec_runner.go:49] Run: sudo /bin/bash -c "grep https://100.115.92.200:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0501 23:36:35.969953   25225 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"

I0501 23:38:33.389345   25225 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (1m57.419146188s)
I0501 23:38:33.389807   25225 kubeadm.go:281] StartCluster complete in 3m57.408752565s
I0501 23:38:33.390237   25225 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0501 23:38:33.454517   25225 logs.go:203] 0 containers: []
W0501 23:38:33.455292   25225 logs.go:205] No container was found matching "kube-apiserver"
I0501 23:38:33.457142   25225 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0501 23:38:33.535216   25225 logs.go:203] 0 containers: []
W0501 23:38:33.535340   25225 logs.go:205] No container was found matching "etcd"
I0501 23:38:33.535617   25225 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0501 23:38:33.615194   25225 logs.go:203] 0 containers: []
W0501 23:38:33.615277   25225 logs.go:205] No container was found matching "coredns"
I0501 23:38:33.615337   25225 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0501 23:38:33.681349   25225 logs.go:203] 0 containers: []
W0501 23:38:33.681424   25225 logs.go:205] No container was found matching "kube-scheduler"
I0501 23:38:33.681488   25225 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0501 23:38:33.741190   25225 logs.go:203] 0 containers: []
W0501 23:38:33.741344   25225 logs.go:205] No container w

@medyagh medyagh added the needs-solution-message Issues where offering a solution for an error would be helpful label May 2, 2020
@medyagh medyagh changed the title from "fail fast on btrfs" to "fail fast if docker storage driver is btrfs." May 2, 2020
@medyagh medyagh added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label May 2, 2020
@afbjorklund
Collaborator

Are you sure about this? It seemed to work when running overlay2-on-btrfs with the docker driver.

That is, the outer docker was using btrfs while the inner docker was using overlayfs. I didn't try "none".
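One way to check which driver each daemon ends up with (hypothetical commands, assuming a running docker-driver cluster; minikube ssh forwards the command to the node):

docker info --format '{{.Driver}}'                  # outer docker on the host
minikube ssh -- docker info --format '{{.Driver}}'  # inner docker in the node container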

@afbjorklund
Collaborator

Not sure if it is a real problem for kubernetes either; it looked more like a kubeadm "support" thing?

That is: similar to trying to run a newer docker or something (not in their hard-coded list): #6167

@medyagh
Member Author

medyagh commented May 2, 2020

Well, I tried the none driver, and it definitely did not work. (The docker driver did not work for other reasons, before it even got to kubeadm.)

@afbjorklund
Collaborator

Here is what I did to test running docker with btrfs.

1. Create a disk image, and mount it

fallocate -l 20g btrfs.img
mkfs.btrfs btrfs.img
sudo mkdir /mnt/btrfs
sudo mount -t btrfs btrfs.img /mnt/btrfs

2. Configure docker to use it as root

sudo systemctl stop docker

Edit /etc/docker/daemon.json:

{
   "graph": "/mnt/btrfs/docker",
   "storage-driver": "btrfs"
}

sudo systemctl start docker
docker info | grep Driver

I still recommend overlayfs ("overlay2"); this was only for troubleshooting.
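With that config in place, the output of the last command should include the new driver (expected line, assuming the mount above succeeded; grep will also match the Logging and Cgroup driver lines):

 Storage Driver: btrfs

That is exactly the value the fail-fast check sketched earlier would reject.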

@sharifelgamal sharifelgamal added the kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. label May 4, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 2, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 1, 2020
@sharifelgamal sharifelgamal removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 9, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 8, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 7, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
