
Error. Unable to start minikube in my Ubuntu #14687

Closed
susshma018 opened this issue Aug 1, 2022 · 7 comments
Labels
co/docker-driver (Issues related to kubernetes in container) · lifecycle/rotten (has aged beyond stale and will be auto-closed) · os/linux · triage/duplicate (duplicate of another open issue)

Comments

@susshma018

What Happened?

Error: unable to start minikube on my Ubuntu machine.
I have attached the log file for reference.

Attach the log file

logs.txt

Operating System

Ubuntu

Driver

Docker

@susshma018
Author

Hey @susshma018, since k8s v1.24 you will need to change your container runtime from Docker Engine to something else. Can you try running: minikube start --nodes 1 -p minikube --container-runtime=containerd

I did try, but I am still getting the following error:
Exiting due to RUNTIME_ENABLE: unknown network plugin:
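
For context, this error usually means minikube is revalidating a saved profile whose config predates the runtime switch, leaving the network-plugin field empty. A minimal recovery sketch (assuming the default profile name; not confirmed as the fix in this thread):

    # Remove the stale profile so old driver/runtime settings are not reused
    minikube delete -p minikube
    # Start fresh with containerd as the container runtime
    minikube start -p minikube --container-runtime=containerd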

@afbjorklund
Collaborator

@afbjorklund added the co/docker-driver, os/linux, and triage/duplicate labels on Aug 1, 2022
@mesketh

mesketh commented Sep 12, 2022

Same issue

---8<-------

    ==> Last Start <==
    Log file created at: 2022/09/12 12:08:55
    Running on machine: markx1
    Binary: Built with gc go1.18.3 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0912 12:08:55.207032 1552433 out.go:296] Setting OutFile to fd 1 ...
    I0912 12:08:55.207148 1552433 out.go:348] isatty.IsTerminal(1) = true
    I0912 12:08:55.207151 1552433 out.go:309] Setting ErrFile to fd 2...
    I0912 12:08:55.207155 1552433 out.go:348] isatty.IsTerminal(2) = true
    I0912 12:08:55.207240 1552433 root.go:333] Updating PATH: /home/mark/.minikube/bin
    I0912 12:08:55.207553 1552433 out.go:303] Setting JSON to false
    I0912 12:08:55.232156 1552433 start.go:115] hostinfo: {"hostname":"markx1","uptime":218779,"bootTime":1662729757,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"5.15.0-46-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"0cdb2f8c-ebdc-46f8-b26c-f3aaf240514a"}
    I0912 12:08:55.232223 1552433 start.go:125] virtualization: kvm host
    I0912 12:08:55.235875 1552433 out.go:177] 😄 minikube v1.26.1 on Ubuntu 22.04
    I0912 12:08:55.238100 1552433 notify.go:193] Checking for updates...
    I0912 12:08:55.239057 1552433 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
    I0912 12:08:55.239176 1552433 driver.go:365] Setting default libvirt URI to qemu:///system
    I0912 12:08:55.240159 1552433 exec_runner.go:51] Run: systemctl --version
    I0912 12:08:55.271983 1552433 out.go:177] ✨ Using the none driver based on existing profile
    I0912 12:08:55.273427 1552433 start.go:284] selected driver: none
    I0912 12:08:55.273451 1552433 start.go:808] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.86.235 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mark:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
    I0912 12:08:55.273621 1552433 start.go:819] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0912 12:08:55.273658 1552433 start.go:1544] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
    I0912 12:08:55.292763 1552433 cni.go:95] Creating CNI manager for ""
    I0912 12:08:55.292771 1552433 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
    I0912 12:08:55.292791 1552433 start_flags.go:310] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.86.235 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mark:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
    I0912 12:08:55.294588 1552433 out.go:177] 👍 Starting control plane node minikube in cluster minikube
    I0912 12:08:55.296179 1552433 profile.go:148] Saving config to /home/mark/.minikube/profiles/minikube/config.json ...
    I0912 12:08:55.296398 1552433 cache.go:208] Successfully downloaded all kic artifacts
    I0912 12:08:55.296419 1552433 start.go:371] acquiring machines lock for minikube: {Name:mk5f9b86c28b826cc11ad7f97be3d412517886cb Clock:{} Delay:500ms Timeout:13m0s Cancel:}
    I0912 12:08:55.296599 1552433 start.go:375] acquired machines lock for "minikube" in 152.922µs
    I0912 12:08:55.296617 1552433 start.go:95] Skipping create...Using existing machine configuration
    I0912 12:08:55.296623 1552433 fix.go:55] fixHost starting: m01
    W0912 12:08:55.297376 1552433 none.go:130] unable to get port: "minikube" does not appear in /home/mark/.kube/config
    I0912 12:08:55.297398 1552433 api_server.go:165] Checking apiserver status ...
    I0912 12:08:55.297457 1552433 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.minikube.
    W0912 12:08:55.319931 1552433 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: exit status 1
    stdout:

stderr:
I0912 12:08:55.319988 1552433 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0912 12:08:55.332543 1552433 fix.go:103] recreateIfNeeded on minikube: state=Stopped err=
W0912 12:08:55.332566 1552433 fix.go:129] unexpected machine state, will restart:
I0912 12:08:55.334407 1552433 out.go:177] 🔄 Restarting existing none bare metal machine for "minikube" ...
I0912 12:08:55.339368 1552433 profile.go:148] Saving config to /home/mark/.minikube/profiles/minikube/config.json ...
I0912 12:08:55.339788 1552433 start.go:307] post-start starting for "minikube" (driver="none")
I0912 12:08:55.339810 1552433 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0912 12:08:55.340009 1552433 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0912 12:08:55.358979 1552433 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0912 12:08:55.359057 1552433 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0912 12:08:55.359075 1552433 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0912 12:08:55.362343 1552433 out.go:177] ℹ️ OS release is Ubuntu 22.04.1 LTS
I0912 12:08:55.363769 1552433 filesync.go:126] Scanning /home/mark/.minikube/addons for local assets ...
I0912 12:08:55.363833 1552433 filesync.go:126] Scanning /home/mark/.minikube/files for local assets ...
I0912 12:08:55.363863 1552433 start.go:310] post-start completed in 24.059413ms
I0912 12:08:55.363873 1552433 fix.go:57] fixHost completed within 67.250199ms
I0912 12:08:55.363880 1552433 start.go:82] releasing machines lock for "minikube", held for 67.271011ms
I0912 12:08:55.364759 1552433 exec_runner.go:51] Run: curl -sS -m 2 https://k8s.gcr.io/
I0912 12:08:55.366595 1552433 out.go:177]
W0912 12:08:55.368185 1552433 out.go:239] ❌ Exiting due to RUNTIME_ENABLE: unknown network plugin:
W0912 12:08:55.368228 1552433 out.go:239]
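
Worth noting from the log above: this run reloads a saved profile (Driver=none, ContainerRuntime=containerd) whose KubernetesConfig dump shows an empty NetworkPlugin value, matching the empty plugin name in the "unknown network plugin:" error. A quick sanity check before deleting anything, assuming the field serializes under the same name in the profile JSON whose path the log prints:

    # Inspect the reloaded profile; an empty NetworkPlugin value here is
    # what the failing RUNTIME_ENABLE check reads back
    grep -i networkplugin /home/mark/.minikube/profiles/minikube/config.json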

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) on Dec 11, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Feb 9, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
