
docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 #8163

Closed
jenkins-kiwi opened this issue May 15, 2020 · 22 comments
Labels
  • co/docker-driver: Issues related to kubernetes in container
  • kind/support: Categorizes issue or PR as a support question.
  • needs-solution-message: Issues where offering a solution for an error would be helpful
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • top-10-issues: Top 10 support issues
  • triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@jenkins-kiwi

Steps to reproduce the issue:

Full output of failed command:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

@medyagh
Member

medyagh commented May 15, 2020

Thank you for sharing your experience! If you don't mind, could you please provide:

  • The exact command-lines used, so that we may replicate the issue
  • The version of minikube.
  • The driver you are using.
  • The Operating system you are using.
  • The output of the "minikube logs"

This will help us isolate the problem further. Thank you!

@medyagh medyagh added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels May 15, 2020
@AhireSwati

I too am getting the same issue.
```
• Restarting existing docker container for "minikube" ...
• Failed to start docker container. "minikube start" may fix it: provision: get ssh host-port: get port 22 for "minikube": docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
```
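For what it's worth, the template appears to fail because .NetworkSettings.Ports has no "22/tcp" entry when the container is not actually running (or has no published ports). A quick way to check the raw port map (a minimal sketch, assuming the default container name minikube):

```bash
# Show the raw port map; an empty or null result matches the
# "index of untyped nil" template error above.
docker inspect -f '{{json .NetworkSettings.Ports}}' minikube

# Check whether the container is actually running.
docker ps -a --filter name=minikube --format '{{.Names}}\t{{.Status}}'
```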

@tstromberg tstromberg changed the title 😿 Failed to start docker container. "minikube start" may fix it: provision: get ssh host-port: get port 22 for "minikube": docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 docker: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 May 28, 2020
@tstromberg tstromberg changed the title docker: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 May 28, 2020
@tstromberg
Contributor

If someone runs into this, can they please share the output of:

docker inspect minikube

Thank you.

@j75

j75 commented May 28, 2020

  • The exact command-lines used, so that we may replicate the issue => minikube start --driver='docker'
  • The version of minikube : minikube version: v1.10.1
    commit: 63ab801
  • The driver you are using: docker
  • The Operating system you are using: Debian Buster (10.04)
  • The output of the "minikube logs" => 🤷 The control plane node must be running for this command 👉 To fix this, run: "minikube start"

However, docker logs <exited container> displays this:

```
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables
INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables
```

@j75

j75 commented May 28, 2020

If someone runs into this, can they please share the output of:
docker inspect minikube

```
[
    {
        "Id": "f8e6c1bf16b2a0d351906ab6d7612aabddfd054a7472ca4f2377d06ea699562a",
        "Created": "2020-05-28T20:04:59.482457358Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
...
```

So, it works now! What did I do? The only thing I remember (after several docker rm attempts followed by minikube start) is running minikube delete and then minikube start again!

@jabolina

I was facing the same issue after a fresh minikube install. Unlike @j75, a minikube delete alone did not solve it for me; I did a docker system prune, then minikube delete, and then minikube start --driver=docker, and it worked fine.
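For reference, that sequence as shell commands (a sketch; note that docker system prune removes all stopped containers, unused networks, and dangling images, not just minikube's):

```bash
# WARNING: this prunes all stopped containers, unused networks, and
# dangling images on the host, not only minikube's.
docker system prune
minikube delete
minikube start --driver=docker
```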

@medyagh medyagh removed this from the v.1.12.0-previous candidate (dumpster fire) milestone Jun 1, 2020
@downlz

downlz commented Jun 8, 2020

I performed a cleanup using "minikube delete" and then started again with "minikube start --driver=docker". It worked.

@tstromberg tstromberg assigned priyawadhwa and unassigned medyagh Jun 12, 2020
@tstromberg
Contributor

If anyone runs into this, could they please provide the output of:

  • docker inspect minikube
  • docker logs minikube

@tstromberg tstromberg added this to the v1.12.0 milestone Jun 12, 2020
@priyawadhwa

Still trying to reproduce this issue. I tried seeing if #8203 was related by forcing docker to start in systemd without waiting for containerd, but still wasn't able to repro the error

@priyawadhwa

I have a feeling this might be related to #8179, since the logs from the failed container provided in this comment #8163 (comment) are the same

@medyagh
Member

medyagh commented Jul 1, 2020

This is an issue that the Cloud Code team has faced too; we should provide a better solution message before exiting, or provide better logs.

@medyagh medyagh added the needs-solution-message Issues where offering a solution for an error would be helpful label Jul 1, 2020
@medyagh
Member

medyagh commented Jul 1, 2020

I believe this happens when minikube tries to create a container, but docker fails; then on a second start there is a stuck container that minikube cannot create on top of.

Currently, if users specify "--delete-on-failure" as in PR #8628, it will fix the problem.

However, we could detect that this is not a recoverable state and just delete it for them, even if they don't specify this flag.
This would require some extra care, or maybe a prompt from the user to confirm the delete (if they don't have any interesting data inside the dead container).

The current work-around:

  1. Restart docker and ensure it is running
  2. minikube delete
  3. minikube start
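For anyone who wants to confirm they are in this stuck-container state before deleting, something along these lines should show it (a sketch, assuming the default profile name minikube):

```bash
# A container left over from a failed start typically shows "Created" or
# "Exited" rather than "Up", and has no published ports (which is why the
# "22/tcp" template lookup returns nil).
docker ps -a --filter name=minikube --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```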

@dadav

dadav commented Jul 5, 2020

I was facing the same issue after a fresh minikube install. Unlike @j75, a minikube delete alone did not solve it for me; I did a docker system prune, then minikube delete, and then minikube start --driver=docker, and it worked fine.

This was the solution for me! Thanks a lot.

@priyawadhwa

Hey @dadav glad it's working for you now -- could you please provide the output of minikube version ?

@medyagh medyagh closed this as completed Jul 8, 2020
@medyagh
Member

medyagh commented Jul 8, 2020

dupe of var race condition

@manoj10kumar

manoj10kumar commented Jan 12, 2021

Getting this error:

```
😄 minikube v1.16.0 on Ubuntu 20.04 (xen/amd64)
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🤦 StartHost failed, but will try again: provision: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil

✋ Stopping node "minikube" ...
```

Version: v1.16.0
I have done the "system prune, minikube delete and then minikube start --driver=docker" sequence multiple times.

@jverce

jverce commented Jan 19, 2021

/reopen

I'm hitting this error whenever I restart my computer with Minikube running.
After it restarts, minikube start fails with this error: https://gist.github.com/jverce/f7243519d2515d1b1490ca5ef573335f

Some additional context:

minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1-dirty

@k8s-ci-robot
Contributor

@jverce: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

I'm hitting this error whenever I restart my computer with Minikube running.
After it restarts, minikube start fails with this error: https://gist.github.com/jverce/f7243519d2515d1b1490ca5ef573335f

Some additional context:

minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1-dirty

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jverce

jverce commented Jan 19, 2021

I was able to start the cluster without tearing it down.

These are the steps that I followed:

  1. Re-create the Minikube network that the existing Minikube cluster is using (note: you might need to disconnect whichever containers are currently connected to that network):
$ docker network rm minikube
$ docker network create minikube
  2. Connect your existing Minikube containers to the new network (you might want to automate this if you have many nodes in your cluster; see the sketch after this list):
$ docker network connect minikube minikube
$ docker network connect minikube minikube-m02
$ docker network connect minikube minikube-m03
  3. Build minikube from source in order to use the new --network flag. This flag is not available in the latest release, and it's needed; otherwise, minikube start will attempt to create a network and will fail with the errors above.
  4. Run the following command to start your cluster (there can be more flags depending on each specific case; I'm just showing the relevant one):
$ minikube start --network=minikube
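A rough way to automate step 2 for multi-node clusters (a sketch, assuming the nodes follow the default minikube / minikube-m02 / ... naming):

```bash
# Re-attach every container whose name starts with "minikube" to the
# re-created minikube network.
for node in $(docker ps -a --format '{{.Names}}' | grep '^minikube'); do
  docker network connect minikube "$node"
done
```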

@tharun208
Contributor

tharun208 commented Mar 30, 2021

@medyagh this issue is now causing the scheduled-stop windows test to fail.
ci link - https://github.com/kubernetes/minikube/runs/2223462792

@monegim

monegim commented Apr 2, 2021

In my case, removing the minikube container with docker rm minikube and then running minikube start solved the problem.

@vbode

vbode commented Aug 4, 2022

I have been able to reproduce this consistently in the following scenario:

  • We start minikube from inside a docker container. The docker container is then connected to the same minikube docker network.
  • We then stop minikube, and stop and remove the docker container.
  • We then start a new docker container, and join this container to the minikube docker network.
  • Starting minikube from this new container now leads to the error mentioned in this issue.

The workaround mentioned in this comment works for us:
#8163 (comment)
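A quick way to see which containers are still attached to the minikube network in a scenario like the one above (a sketch, assuming the default network name minikube):

```bash
# Lists the containers currently attached to the minikube docker network;
# a stale or missing attachment points at this failure mode.
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' minikube
```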
