
hyperv: failure: start: exit status 1 (not running minikube as admin?) #5627

Closed
balopat opened this issue Oct 15, 2019 · 6 comments
Labels
co/hyperv: HyperV related issues
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
os/windows
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@balopat
Contributor

balopat commented Oct 15, 2019

The exact command to reproduce the issue:

minikube start --vm-driver=hyperv

The full output of the command that failed:

$ minikube start --vm-driver=hyperv
* minikube v1.4.0 on Microsoft Windows 10 Enterprise 10.0.17763 Build 17763
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
E1015 10:10:29.795414    2708 cache_images.go:79] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> C:\Users\balintp\.minikube\cache\images\kubernetesui\dashboard_v2.0.0-beta4 failed: fetching image: unrecognized HTTP status: 503 Service Unavailable
* Starting existing hyperv VM for "minikube" ...
* Retriable failure: start: exit status 1
* Deleting "minikube" in hyperv ...
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing hyperv VM for "minikube" ...
* Retriable failure: start: exit status 1
* Deleting "minikube" in hyperv ...
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing hyperv VM for "minikube" ...
* Retriable failure: start: exit status 1
* Deleting "minikube" in hyperv ...

The output of the minikube logs command:

n/a

The operating system version:

Microsoft Windows 10 Enterprise 10.0.17763 Build 17763

@balopat
Contributor Author

balopat commented Oct 15, 2019

I think the issue might have been with minikube not running as admin. We should detect that somehow and fail a bit more gracefully.

I managed to get it working:

  • remove the minikube VM in the HyperV VM manager
  • stop the VM management service in HyperV
  • remove the ~/.minikube directory (you can't until the VM management service is stopped!)
  • restart minikube in Admin mode ...
    ... although another weird error showed up

PS C:\Windows\system32> minikube start --vm-driver=hyperv
* minikube v1.4.0 on Microsoft Windows 10 Enterprise 10.0.17763 Build 17763
* Downloading VM boot image ...
* Creating hyperv VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Downloading kubeadm v1.16.0
* Downloading kubelet v1.16.0
* Pulling images ...
* Launching Kubernetes ... 
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
minikube :     > minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    > minikube-v1.4.0.iso: 18.84 MiB / 135.73 MiB [->__________] 13.88% ? p/s ?
    > minikube-v1.4.0.iso: 40.87 MiB / 135.73 MiB [--->________] 30.11% ? p/s ?
    > minikube-v1.4.0.iso: 62.34 MiB / 135.73 MiB [----->______] 45.93% ? p/s ?
    > minikube-v1.4.0.iso: 84.25 MiB / 135.73 MiB  62.07% 109.08 MiB p/s ETA 0s
    > minikube-v1.4.0.iso: 106.59 MiB / 135.73 MiB  78.53% 109.08 MiB p/s ETA 0
    > minikube-v1.4.0.iso: 128.65 MiB / 135.73 MiB  94.78% 109.08 MiB p/s ETA 0
    > minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [] 100.00% 127.54 MiB p/s 1s
At line:1 char:1
+ minikube start --vm-driver=hyperv
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (    > minikube-...7.54 MiB p/s 1s:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
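
For reference, the manual cleanup steps above could be sketched in PowerShell roughly as follows. This is a hedged sketch, not the exact commands used: it assumes the VM is named "minikube", that Hyper-V's management service is "vmms", and that ~/.minikube is in the default location.

```powershell
# Run from an elevated (Administrator) PowerShell session.

# Remove the stale minikube VM from Hyper-V.
Stop-VM -Name minikube -Force -ErrorAction SilentlyContinue
Remove-VM -Name minikube -Force

# Stop the Hyper-V Virtual Machine Management service so the
# .minikube directory is no longer locked and can be deleted.
Stop-Service vmms
Remove-Item -Recurse -Force "$env:USERPROFILE\.minikube"

# Restart the service and retry from this elevated session.
Start-Service vmms
minikube start --vm-driver=hyperv
```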

@tstromberg tstromberg changed the title cache image 503 error - minikube keeps retrying hyperv: failure: start: exit status 1 Oct 16, 2019
@tstromberg tstromberg added co/hyperv HyperV related issues os/windows labels Oct 16, 2019
@tstromberg tstromberg changed the title hyperv: failure: start: exit status 1 hyperv: failure: start: exit status 1 (not running minikube as admin?) Oct 17, 2019
@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Oct 17, 2019
@brainfull

I totally agree that minikube needs to fail gracefully. In my view, the following file urgently needs a fix: https://github.com/kubernetes/minikube/blob/master/cmd/minikube/cmd/start.go

The code below will delete your minikube VM for any failure whatsoever. That means that if for some reason you didn't have enough memory to start the minikube VM, IT WILL GET DELETED 5 TIMES. Not only do you sit through 5 useless retries, you also lose everything you did in your minikube VM. I don't think the code below makes any sense: 'minikube start' should never delete the VM. If deleting the minikube VM is ever the right solution, we should do it explicitly with 'minikube delete'.

start := func() (err error) {
	host, err = cluster.StartHost(api, mc)
	if err != nil {
		out.T(out.Resetting, "Retriable failure: {{.error}}", out.V{"error": err})
		if derr := cluster.DeleteHost(api); derr != nil {
			glog.Warningf("DeleteHost: %v", derr)
		}
	}
	return err
}

if err = retry.Expo(start, 5*time.Second, 3*time.Minute, 3); err != nil {
	exit.WithError("Unable to start VM", err)
}

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 4, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 5, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
