
Cannot access Windows host services from pod #12654

Closed
sousarmb opened this issue Oct 4, 2021 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sousarmb

sousarmb commented Oct 4, 2021

minikube start --cpus=2 --memory=4g --network-plugin=cni --cni=calico --disk-size=5g --driver=hyperv --container-runtime=docker

Running minikube v1.23.2 (commit 0a0ad76) on Windows 10 Professional, with the following scenario:

I'm trying to use an SSH tunnel to reach a DB server on another host. The tunnel is set up on the Windows host with Bitvise and is listening on 0.0.0.0:1435.

If I use Docker for Desktop - using host.docker.internal - I can access the DB on the other host without any issues. When I switch to minikube - using host.minikube.internal - I always get a timeout.

I know the Docker for Desktop setup works because I can see connections happening in the Bitvise log.
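A quick way to reproduce the timeout described above from inside a pod is a plain TCP connect with a short timeout. This is a minimal sketch; the hostnames and port 1435 come from the setup above, and the expected outcomes are those reported in this thread, not guaranteed:

```python
import socket


def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False


# Run from inside a pod. Per the report above, the first call succeeds under
# Docker for Desktop, while the second times out under the hyperv driver:
# can_connect("host.docker.internal", 1435)
# can_connect("host.minikube.internal", 1435)
```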

@davclark

I'm likewise trying to figure this one out. The docs for the minikube drivers say that the target of host.minikube.internal varies by driver, but they don't provide details.

The below is from the perspective of minikube run from WSL2 with the docker driver on Docker Desktop.

From inside the minikube host container (i.e., the one you access via minikube ssh; 192.168.49.2 for me), both host.minikube.internal and host.docker.internal resolve to IP addresses, and I can ping both. The docker hostname resolves to my host, and I can netcat to a "host" process (including Windows processes and WSL2 processes) with something like nc -vz host.docker.internal <port>. The minikube hostname is live according to ping, but I can't access host processes (nor do I know what that IP address, 192.168.49.1 for me, actually points to).
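To separate the DNS question from the reachability question described above, a hostname lookup alone can be checked first. A minimal sketch; the two hostnames are the ones discussed in this thread:

```python
import socket
from typing import Optional


def resolve(host: str) -> Optional[str]:
    """Return the IPv4 address a hostname resolves to, or None on failure."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None


# Per the observation above, from inside `minikube ssh` both names resolve,
# but only the docker one is reachable:
# resolve("host.docker.internal")    # the Docker Desktop host address
# resolve("host.minikube.internal")  # 192.168.49.1 in the case above
```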

There are a couple of ways to address this:

  1. The ideal would be for host.minikube.internal to always route through to the actual host. I know such a route exists, because Docker Desktop sets one up.
  2. The alternative I'm using now is to configure your host process with the host.docker.internal hostname when using the minikube docker driver. This appears to be a potential solution for the OP as well.
  • Ideally I would be able to query a running minikube for this info - is that possible?
  • Alternatively, I can write the driver name to a file from a script when I start minikube, and read that value later when configuring k8s (in my case, configuring a CRD, which makes this a bit of a pain).
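On the "query a running minikube" point: `minikube profile list -o json` prints the profile configuration, and the driver can be pulled out of that. A sketch, assuming the output has the shape `{"valid": [{"Config": {"Driver": ...}}]}` (verify against your minikube version before relying on it):

```python
import json
import subprocess


def driver_from_profile_json(raw: str) -> str:
    """Extract the driver of the first valid profile from the assumed
    `minikube profile list -o json` schema."""
    return json.loads(raw)["valid"][0]["Config"]["Driver"]


def current_minikube_driver() -> str:
    """Shell out to minikube and return the active profile's driver name."""
    out = subprocess.run(
        ["minikube", "profile", "list", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return driver_from_profile_json(out)
```

This would avoid writing the driver to a file at start time, at the cost of invoking the minikube CLI from the configuration script.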

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
