
Issue when starting minikube: exiting due to RUNTIME_ENABLE: which crictl: exit status 1 #16066

Closed
Md-Sadaf opened this issue Mar 16, 2023 · 4 comments
Labels
co/none-driver kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Md-Sadaf

What Happened?

minikube start --vm-driver=none
😄 minikube v1.29.0 on Ubuntu 22.04
✨ Using the none driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🤹 Running on localhost (CPUs=4, Memory=3824MB, Disk=149597MB) ...
ℹ️ OS release is Ubuntu 22.04.2 LTS

❌ Exiting due to RUNTIME_ENABLE: which crictl: exit status 1
stdout:

stderr:

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
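The error means minikube's none driver could not find a `crictl` binary anywhere on the host PATH (`which crictl` returned exit status 1). With `--vm-driver=none`, minikube runs Kubernetes components directly on the host, so `crictl` must be installed separately. A minimal install sketch, assuming the cri-tools release tarball layout; the `VERSION` value is an assumption, so check the cri-tools releases page for a version matching your Kubernetes version:

```shell
# Assumed version -- pick one compatible with your Kubernetes minor version
# (v1.26.x here to match the v1.26.1 cluster in the logs).
VERSION="v1.26.0"

# Download the linux/amd64 release tarball from the cri-tools project.
curl -L -o crictl.tar.gz \
  "https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz"

# Unpack the single crictl binary into a directory on root's PATH.
sudo tar -C /usr/local/bin -xzf crictl.tar.gz
rm crictl.tar.gz

# Verify it is now visible to the shell minikube will use.
which crictl && crictl --version
```

After this, rerunning `minikube start --vm-driver=none` should get past the RUNTIME_ENABLE check, assuming the other none-driver prerequisites are also present.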

Attach the log file

minikube logs --file=logs.txt
root@sadaf-Virtual:/home/sadaf# ls
bin cri-dockerd Desktop Documents Downloads installer_linux logs.txt Music Pictures Public snap Templates Videos
root@sadaf-Virtual:/home/sadaf# cat logs.txt

  • ==> Audit <==

  • |-----------|----------------------------|----------|------|---------|---------------------|---------------------|
    | Command | Args | Profile | User | Version | Start Time | End Time |
    |-----------|----------------------------|----------|------|---------|---------------------|---------------------|
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 14:41 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 14:43 IST | |
    | dashboard | | minikube | root | v1.29.0 | 15 Mar 23 15:06 IST | |
    | start | | minikube | root | v1.29.0 | 15 Mar 23 15:06 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 15:08 IST | |
    | delete | | minikube | root | v1.29.0 | 15 Mar 23 15:08 IST | 15 Mar 23 15:08 IST |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 15:08 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 15:09 IST | |
    | start | | minikube | root | v1.29.0 | 15 Mar 23 15:09 IST | |
    | start | --force | minikube | root | v1.29.0 | 15 Mar 23 15:09 IST | |
    | start | | minikube | root | v1.29.0 | 15 Mar 23 17:59 IST | |
    | start | --driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:03 IST | |
    | start | --driver=docker | minikube | root | v1.29.0 | 15 Mar 23 18:03 IST | |
    | start | --driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:03 IST | |
    | delete | | minikube | root | v1.29.0 | 15 Mar 23 18:05 IST | 15 Mar 23 18:05 IST |
    | start | | minikube | root | v1.29.0 | 15 Mar 23 18:05 IST | |
    | delete | | minikube | root | v1.29.0 | 15 Mar 23 18:06 IST | 15 Mar 23 18:06 IST |
    | start | --driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:06 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:08 IST | |
    | start | --vm-driver=virtualbox | minikube | root | v1.29.0 | 15 Mar 23 18:08 IST | |
    | start | --vm-driver=virtualbox | minikube | root | v1.29.0 | 15 Mar 23 18:11 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:11 IST | |
    | start | --vm-driver=docker | minikube | root | v1.29.0 | 15 Mar 23 18:12 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:13 IST | |
    | start | | minikube | root | v1.29.0 | 15 Mar 23 18:27 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:36 IST | |
    | start | --driver=none | minikube | root | v1.29.0 | 15 Mar 23 18:39 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 15 Mar 23 20:22 IST | |
    | dashboard | | minikube | root | v1.29.0 | 15 Mar 23 20:25 IST | |
    | start | | minikube | root | v1.29.0 | 15 Mar 23 20:25 IST | |
    | start | | minikube | root | v1.29.0 | 16 Mar 23 19:45 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 16 Mar 23 19:57 IST | |
    | start | | minikube | root | v1.29.0 | 16 Mar 23 19:57 IST | |
    | start | --vm-driver=docker | minikube | root | v1.29.0 | 16 Mar 23 19:58 IST | |
    | start | | minikube | root | v1.29.0 | 16 Mar 23 19:58 IST | |
    | delete | | minikube | root | v1.29.0 | 16 Mar 23 20:00 IST | 16 Mar 23 20:00 IST |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 16 Mar 23 20:00 IST | |
    | delete | | minikube | root | v1.29.0 | 16 Mar 23 20:00 IST | 16 Mar 23 20:00 IST |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 16 Mar 23 20:00 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 16 Mar 23 20:08 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 16 Mar 23 20:08 IST | |
    | start | | minikube | root | v1.29.0 | 16 Mar 23 20:11 IST | |
    | start | --vm-driver=docker --force | minikube | root | v1.29.0 | 16 Mar 23 20:11 IST | |
    | start | | minikube | root | v1.29.0 | 16 Mar 23 20:12 IST | |
    | start | --vm-driver=none | minikube | root | v1.29.0 | 16 Mar 23 21:19 IST | |
    |-----------|----------------------------|----------|------|---------|---------------------|---------------------|

  • ==> Last Start <==

  • Log file created at: 2023/03/16 21:19:17
    Running on machine: sadaf-Virtual
    Binary: Built with gc go1.19.5 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0316 21:19:17.417326 4367 out.go:296] Setting OutFile to fd 1 ...
    I0316 21:19:17.418372 4367 out.go:348] isatty.IsTerminal(1) = true
    I0316 21:19:17.418414 4367 out.go:309] Setting ErrFile to fd 2...
    I0316 21:19:17.418425 4367 out.go:348] isatty.IsTerminal(2) = true
    I0316 21:19:17.418787 4367 root.go:334] Updating PATH: /root/.minikube/bin
    I0316 21:19:17.427102 4367 out.go:303] Setting JSON to false
    I0316 21:19:17.442764 4367 start.go:125] hostinfo: {"hostname":"sadaf-Virtual","uptime":276,"bootTime":1678981482,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"5.19.0-35-generic","kernelArch":"x86_64","virtualizationSystem":"vbox","virtualizationRole":"host","hostId":"7d6e63a5-ccf3-e040-a115-58bb8b3e3b3c"}
    I0316 21:19:17.443038 4367 start.go:135] virtualization: vbox host
    I0316 21:19:17.457881 4367 out.go:177] 😄 minikube v1.29.0 on Ubuntu 22.04
    I0316 21:19:17.466761 4367 notify.go:220] Checking for updates...
    I0316 21:19:17.471334 4367 driver.go:365] Setting default libvirt URI to qemu:///system
    I0316 21:19:17.483575 4367 out.go:177] ✨ Using the none driver based on user configuration
    I0316 21:19:17.494121 4367 start.go:296] selected driver: none
    I0316 21:19:17.496531 4367 start.go:857] validating driver "none" against
    I0316 21:19:17.496561 4367 start.go:868] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0316 21:19:17.497857 4367 start.go:1617] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
    I0316 21:19:17.498230 4367 start_flags.go:305] no existing cluster config was found, will generate one from the flags
    I0316 21:19:17.498486 4367 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=3824MB, container=0MB
    I0316 21:19:17.501052 4367 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
    I0316 21:19:17.503927 4367 cni.go:84] Creating CNI manager for ""
    I0316 21:19:17.503954 4367 cni.go:157] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
    I0316 21:19:17.504012 4367 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
    I0316 21:19:17.504024 4367 start_flags.go:319] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
    I0316 21:19:17.513439 4367 out.go:177] 👍 Starting control plane node minikube in cluster minikube
    I0316 21:19:17.535364 4367 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I0316 21:19:17.535928 4367 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0316 21:19:17.557113 4367 cache.go:193] Successfully downloaded all kic artifacts
    I0316 21:19:17.557164 4367 start.go:364] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
    I0316 21:19:17.557362 4367 start.go:368] acquired machines lock for "minikube" in 181.028µs
    I0316 21:19:17.560036 4367 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false 
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m01 IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
    I0316 21:19:17.560208 4367 start.go:125] createHost starting for "m01" (driver="none")
    I0316 21:19:17.570234 4367 out.go:177] 🤹 Running on localhost (CPUs=4, Memory=3824MB, Disk=149597MB) ...
    I0316 21:19:17.577717 4367 exec_runner.go:51] Run: systemctl --version
    I0316 21:19:17.582912 4367 start.go:159] libmachine.API.Create for "minikube" (driver="none")
    I0316 21:19:17.582978 4367 client.go:168] LocalClient.Create starting
    I0316 21:19:17.584107 4367 main.go:141] libmachine: Reading certificate data from /root/.minikube/certs/ca.pem
    I0316 21:19:17.586153 4367 main.go:141] libmachine: Decoding PEM data...
    I0316 21:19:17.586190 4367 main.go:141] libmachine: Parsing certificate...
    I0316 21:19:17.586452 4367 main.go:141] libmachine: Reading certificate data from /root/.minikube/certs/cert.pem
    I0316 21:19:17.588177 4367 main.go:141] libmachine: Decoding PEM data...
    I0316 21:19:17.588256 4367 main.go:141] libmachine: Parsing certificate...
    I0316 21:19:17.592413 4367 client.go:171] LocalClient.Create took 9.423382ms
    I0316 21:19:17.592495 4367 start.go:167] duration metric: libmachine.API.Create for "minikube" took 9.780015ms
    I0316 21:19:17.592503 4367 start.go:300] post-start starting for "minikube" (driver="none")
    I0316 21:19:17.592608 4367 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
    I0316 21:19:17.592639 4367 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
    I0316 21:19:17.626604 4367 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
    I0316 21:19:17.626630 4367 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
    I0316 21:19:17.626639 4367 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
    I0316 21:19:17.637792 4367 out.go:177] ℹ️ OS release is Ubuntu 22.04.2 LTS
    I0316 21:19:17.646883 4367 filesync.go:126] Scanning /root/.minikube/addons for local assets ...
    I0316 21:19:17.648472 4367 filesync.go:126] Scanning /root/.minikube/files for local assets ...
    I0316 21:19:17.650256 4367 start.go:303] post-start completed in 57.739496ms
    I0316 21:19:17.658040 4367 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I0316 21:19:17.658897 4367 start.go:128] duration metric: createHost completed in 98.673211ms
    I0316 21:19:17.658917 4367 start.go:83] releasing machines lock for "minikube", held for 101.541166ms
    I0316 21:19:17.659269 4367 exec_runner.go:51] Run: cat /version.json
    W0316 21:19:17.661083 4367 start.go:396] Unable to open version.json: cat /version.json: exit status 1
    stdout:

stderr:
cat: /version.json: No such file or directory
I0316 21:19:17.661150 4367 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/loopback.conf"
I0316 21:19:17.662240 4367 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0316 21:19:17.675103 4367 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/loopback.conf" not found
I0316 21:19:17.682205 4367 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0316 21:19:17.737240 4367 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0316 21:19:17.744436 4367 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3689035833 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0316 21:19:17.775196 4367 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name bridge -or -name podman ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0316 21:19:17.799716 4367 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0316 21:19:17.799745 4367 start.go:483] detecting cgroup driver to use...
I0316 21:19:17.799775 4367 detect.go:199] detected "systemd" cgroup driver on host os
I0316 21:19:17.799899 4367 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0316 21:19:17.841364 4367 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( )sandbox_image = .$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
W0316 21:19:17.860745 4367 start.go:450] cannot ensure containerd is configured properly and reloaded for docker - cluster might be unstable: update sandbox_image: sh -c "sudo sed -i -r 's|^( )sandbox_image = .$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml": exit status 2
stdout:

stderr:
sed: can't read /etc/containerd/config.toml: No such file or directory
I0316 21:19:17.860760 4367 start.go:483] detecting cgroup driver to use...
I0316 21:19:17.860785 4367 detect.go:199] detected "systemd" cgroup driver on host os
I0316 21:19:17.860893 4367 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0316 21:19:17.906990 4367 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0316 21:19:18.282354 4367 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0316 21:19:18.613276 4367 docker.go:529] configuring docker to use "systemd" as cgroup driver...
I0316 21:19:18.613308 4367 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (143 bytes)
I0316 21:19:18.614003 4367 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1124404421 /etc/docker/daemon.json
I0316 21:19:18.638381 4367 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0316 21:19:18.973989 4367 exec_runner.go:51] Run: sudo systemctl restart docker
I0316 21:19:22.369304 4367 exec_runner.go:84] Completed: sudo systemctl restart docker: (3.395283694s)
I0316 21:19:22.369352 4367 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0316 21:19:22.776209 4367 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0316 21:19:23.216644 4367 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0316 21:19:23.581894 4367 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0316 21:19:23.942861 4367 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0316 21:19:23.997008 4367 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0316 21:19:23.997068 4367 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0316 21:19:23.999925 4367 start.go:551] Will wait 60s for crictl version
I0316 21:19:23.999983 4367 exec_runner.go:51] Run: which crictl
I0316 21:19:24.010725 4367 out.go:177]
W0316 21:19:24.021662 4367 out.go:239] ❌ Exiting due to RUNTIME_ENABLE: which crictl: exit status 1
stdout:

stderr:

W0316 21:19:24.021701 4367 out.go:239]
W0316 21:19:24.024599 4367 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0316 21:19:24.035541 4367 out.go:177]

Operating System

Ubuntu

Driver

None
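Because the none driver runs everything on the host rather than in a VM or container, it expects several host binaries that the other drivers ship for you (the logs above show minikube already found `cri-dockerd` and Docker, but not `crictl`). A small sketch to check for the usual suspects before running `minikube start`; the exact binary list is an assumption based on the none-driver docs, not something minikube prints:

```shell
# Check the host PATH for binaries the none driver typically needs.
# List is an assumption: docker (runtime), cri-dockerd (CRI shim for docker),
# crictl (CRI CLI, the one missing in this issue), conntrack (kube-proxy dep).
for bin in docker cri-dockerd crictl conntrack; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "found:   $bin -> $(command -v "$bin")"
  else
    echo "missing: $bin"
  fi
done
```

Anything reported as missing should be installed before retrying; note that minikube runs these checks as root, so the binaries must be on root's PATH, not just the regular user's.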

@Md-Sadaf Md-Sadaf changed the title Having issue when start minikube Having issue when start minikube it is showing Exiting due to RUNTIME_ENABLE: which crictl: exit status 1 Mar 16, 2023
@afbjorklund afbjorklund added co/none-driver kind/support Categorizes issue or PR as a support question. labels Mar 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 19, 2024