Proxy for easier access to NodePort services #38

Closed
philips opened this issue May 3, 2016 · 46 comments
Labels
area/networking kind/feature lifecycle/rotten priority/awaiting-more-evidence r/2019q2

Comments

@philips
Contributor

philips commented May 3, 2016

One of the major hurdles for people using k8s as a development platform is getting easy access to cluster DNS and uncomplicated access to "localhost" ports.

This might be something we can tackle in this issue; I discussed the idea in coreos/coreos-kubernetes#444.

Option 1 - Fancy Proxy

This is an idea to make working with the single-node cluster easier: something like kubectl port-forward that forwards every nodePort to localhost, listening on the original targetPort. So, for example:

# User does something like this
$ kubectl run --image quay.io/philips/golang-outyet outyet
$ kubectl expose deployment outyet --target-port=8080 --port=8080 --type=NodePort


# This is the part that needs automating: 
$ socat tcp-listen:8080,reuseaddr,fork tcp:172.17.4.99:$(kubectl get service outyet -o template --template="{{range.spec.ports}}{{.nodePort}}{{end}}")

This would be a huge boon to people trying to use Kubernetes in their development workflow for running caches, services, etc., and developing against those APIs.

Pseudocode event loop:

for {
  for _, e := range kubernetesServiceEvent() {
    if e == newNodePort {
      go proxy(context, e.NodePort, e.NodeIP, "localhost", e.TargetIP)
    } 
    if e == dyingNodePort {
      contexts[e.NodePort].Done()
    }
  } 
}
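
A rough shell sketch of what that automation could look like (assuming a single-node minikube, socat installed on the host, and numeric targetPorts; the go-template query and the loop below are illustrative, not an existing kubectl feature):

#!/usr/bin/env bash
# Sketch: for every NodePort service, listen on its original targetPort on
# localhost and forward traffic to <node IP>:<nodePort> via socat.
set -euo pipefail

NODE_IP=$(minikube ip)

while read -r node_port target_port; do
  echo "forwarding localhost:${target_port} -> ${NODE_IP}:${node_port}"
  socat "tcp-listen:${target_port},reuseaddr,fork" "tcp:${NODE_IP}:${node_port}" &
done < <(kubectl get services --all-namespaces -o go-template='{{range .items}}{{if eq .spec.type "NodePort"}}{{range .spec.ports}}{{.nodePort}} {{.targetPort}}{{"\n"}}{{end}}{{end}}{{end}}')

wait  # keep the forwarders alive until interrupted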

Option 2 - VPN

Having a simple VPN setup would give a user access to cluster DNS and cluster networking. The downside is that setting up something like OpenVPN is a major hurdle. Does anyone know of a simple VPN in Go that works cross-platform?

@philips changed the title from "vpn or proxy solution" to "vpn or proxy for easier local development" May 3, 2016
@ghost

ghost commented May 4, 2016

Networking is really not my area, but it would be great to be able to run a container I'm working on in my development environment within the network context of the k8s cluster.

@philips
Contributor Author

philips commented May 4, 2016

I could move this to a proposal on the main kubernetes repo as well. Just a random idea and there isn't really a chat/mailing list for this repo. :)

@ghost

ghost commented May 4, 2016

Yeah, working in the context of my Google Container Engine cluster would be fantastic too. I'd definitely support that.

@keithballdotnet

+1 👯

@vishh
Contributor

vishh commented Jul 2, 2016

+1. I'd say we should make cluster-local services accessible from the host as well; essentially hide from the end user the fact that a VM is running. It is, after all, a local cluster.

@ae6rt

ae6rt commented Jul 24, 2016

Where I work, we're working to make pod IPs routable, as something like Project Calico affords. One consequence of this is that we have removed NodePort bits from our Service definitions, and I'd rather not have to reintroduce those bits because I don't want such differences between "development" Service descriptions and "production" Service descriptions.

For this single-node k8s cluster manifested by minikube, is there a way to make the Pod IPs accessible from the developer workstation?

@dlorenc added the kind/feature label and removed the kind/enhancement label Aug 11, 2016
jimmidyson added a commit to jimmidyson/minikube that referenced this issue Sep 2, 2016
@ram-argus

for Option 2: "GoVPN is simple free software virtual private network daemon, aimed to be reviewable, secure, DPI/censorship-resistant, written on Go."

@yuvipanda
Contributor

I've futzed around with a solution for doing this right now, using VirtualBox's host-only networking plus a static route on my host machine. Here's what I had to do:

  1. minikube start
  2. minikube stop
  3. open vbox manager
  4. find the 'minikube' VM
  5. click settings on the minikube VM
  6. under networks, pick 'adapter 3' (2 adapters will already be used)
  7. Select 'host only', open advanced, check 'cable connected'
  8. hit ok
  9. minikube start
  10. sudo ip route delete 172.17.0.0/16 (make sure you don't have docker running on your host)
  11. sudo ip route add 172.17.0.0/16 via $(minikube ip)

I have written it up using VBoxManage instead of the GUI at https://github.com/yuvipanda/jupyterhub-kubernetes-spawner/blob/master/SETUP.md, which is the project I'm working on :)

Haven't tested on OS X yet.
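
Roughly, the VBoxManage equivalent of the GUI steps above looks like this (untested sketch; the vboxnet0 interface name and the third adapter slot are assumptions — see the SETUP.md linked above for the exact commands):

minikube stop
VBoxManage hostonlyif create        # creates a host-only interface, e.g. vboxnet0
VBoxManage modifyvm minikube --nic3 hostonly --hostonlyadapter3 vboxnet0 --cableconnected3 on
minikube start
sudo ip route delete 172.17.0.0/16  # only if a conflicting route exists (e.g. local docker)
sudo ip route add 172.17.0.0/16 via "$(minikube ip)"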

@pieterlange

pieterlange commented Mar 5, 2017

Just stumbled on this issue and wanted to mention that I made my own solution last year. It covers some of the use cases described in the ticket.

it really would be great to be able to run a container I'm working on in my development environment in the network context of the k8s cluster.

This was the original premise - fast local development within a kubernetes context. There's also an optional feature to route Service traffic back to VPN clients.
I'm also starting to use this as a PtP link between the kubernetes platform and some legacy applications that I can't (yet?) move. So far so good.

I did base it on openvpn as there's broad platform support and community knowledge on the subject (easier to adapt to specific needs). Take a look: https://github.com/pieterlange/kube-openvpn

@antoineco

Since minikube is meant to run in a local environment, on a single VM, I like the approach suggested by @yuvipanda (static local route) much better than the VPN idea for the following reasons:

  • you can reach services on their actual IP (10.0.0.x) and port
    • no extra network
    • no port forwarding
  • no extra users to create
  • no agent to install

This is only acceptable in a local environment, which is the main purpose of minikube anyway.

And yes it does work on macOS as well. cf. http://stackoverflow.com/a/42658974/4716370
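
The core of that approach is a single static route on the host pointing at the minikube VM, shown here with the default 10.0.0.x service range (the Stack Overflow answer above covers the resolver/DNS part):

sudo route -n add 10.0.0.0/24 "$(minikube ip)"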

@andyp1per

+1

@ghost

ghost commented Jun 2, 2017

Could this be solved by running flanneld on both the developer host and the minikube VM?
I am trying to validate this option. Has anybody tried it?

@antoineco

antoineco commented Jun 2, 2017

Flannel would give you a route to your pod network but:

  • it requires running flanneld inside the minikube VM as well to guarantee non-overlapping subnets
  • it doesn't add routes for the Services (--service-cluster-ip-range) and there is no iptables on macOS, so I think kube-proxy is not an option

@itamarst

itamarst commented Jul 5, 2017

Not built-in, but Telepresence will let you get VPN-like access to your minikube (or any Kubernetes cluster).

@elsonrodriguez

elsonrodriguez commented Aug 18, 2017

This is more related to #384 and #950; however, that was closed, and some people here might find this handy.

https://gist.github.com/elsonrodriguez/add59648d097314d2aac9b3c8931278b

Basically I've made a one-liner to add the ClusterIP range as a route on OS X, and also a small custom controller to (crudely) enable LoadBalancer support for minikube.

If there's any interest I can polish up the controller/docs.

tl;dr

#etcd behavior changed
#sudo route -n add -net $(minikube ssh  --  sudo docker run -i --rm --net=host quay.io/coreos/etcd:v3.2 etcdctl  get /registry/ranges/serviceips  | jq -r '.range') $(minikube ip)
sudo route -n add -net $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") $(minikube ip)

kubectl run nginx --image=nginx --replicas=1
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer

kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system

nginx_external_ip=$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl $nginx_external_ip

cc @yuvipanda @tony-kerz @waprin @r2d4 @whitecolor

EDIT: updated to avoid using etcd to determine the service IP range.

@metametadata

metametadata commented Oct 24, 2017

I tried Telepresence. It does the trick, actually. But somehow it made my unit test suite much slower (tests run from macOS and connect to resources in Minikube via ClusterIP services). I suspect there's some slowdown when talking over the VPN to PostgreSQL running inside minikube (which uses the VirtualBox driver).

I didn't investigate further and switched to the route add/dnsmasq method from http://stackoverflow.com/a/42658974/4716370. It seems to work so far and the test suite is fast again. But now I also occasionally hit bug #1710; not yet sure if there's a correlation though.

@ursuad

ursuad commented Oct 24, 2017

I'm just putting my own setup here in case someone finds it useful. This will make your minikube pod and service IPs routable from your host.
If you only want the service IPs, you can edit it accordingly.

Environment

  • minikube k8s version: v1.7.5
  • minikube version: v0.22.3
  • OS: macOS High Sierra 10.13 (should work for Linux as well, but use sudo ip route instead of sudo route -n in the script)

Steps

  1. Stop the minikube VM in case it's started
    $ minikube stop
  2. Go to the VirtualBox GUI (steps 2 and 3 are needed because of "VM eth0 fails unpredictably" #1710)
  • right click on the minikube VM -> Settings -> Network
  • for Adapter 1 and Adapter 2 select Advanced and change the adapter type to something other than Intel. I have PCnet-FAST III (Am79C973)
  3. Open the minikube config file, which should be at ~/.minikube/machines/minikube/config.json, and change the values of the NatNicType and HostOnlyNicType fields to match the ones you set in VirtualBox in the previous step. In this case, I have Am79C973
  4. Put this script somewhere, name it setup_minikube_routing.sh and make it executable:
#!/bin/bash
set -e

MINIKUBEIP=`minikube ip`

echo "Minikube ip is $MINIKUBEIP"

# clean up the routes
sudo route -n delete 172.17.0.0/16
sudo route -n delete 10.0.0.0/24

# Add the routes
sudo route -n add 172.17.0.0/16 $MINIKUBEIP
sudo route -n add 10.0.0.0/24 $MINIKUBEIP

sudo mkdir -p /etc/resolver

cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver 10.0.0.10
domain cluster.local
search_order 1
EOF
  5. Stop docker on your machine as the ip ranges that you're adding routes for might overlap with the docker ones.

  6. Start minikube and wait for it to start
    $ minikube start

  7. Run the script to set up the routes
    $ ./setup_minikube_routing.sh

Test if everything works

  1. Create a my-nginx.yaml file for an nginx deployment and a service
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
    # The port that this service should serve on.
    - port: 80
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  2. Submit that to k8s
    $ kubectl create -f my-nginx.yaml

  3. Wait for everything to be running and:

  • test accessing the service directly
    curl http://my-nginx.default.svc.cluster.local:80
    where default is the name of the namespace you deployed nginx in.
  • test pinging the pod IP
$ kubectl get po -o wide -l app=nginx
NAME                        READY     STATUS    RESTARTS   AGE       IP           NODE
my-nginx-2302942331-h9h85   1/1       Running   0          1h        172.17.0.6   minikube
$ ping 172.17.0.6
PING 172.17.0.6 (172.17.0.6): 56 data bytes
64 bytes from 172.17.0.6: icmp_seq=0 ttl=63 time=0.284 ms
64 bytes from 172.17.0.6: icmp_seq=1 ttl=63 time=0.399 ms
--- 172.17.0.6 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.284/0.342/0.399/0.057 ms

Caveats and improvements

  • the IP ranges are hardcoded in the script. We should get those dynamically
  • minikube needs to be running for the script to run successfully
  • make sure that the routes you're adding don't overlap with some local routing that you have
  • If you have problems with a pod unable to reach itself through a service, you can run this

@metametadata

@ursuad have you managed to make it work with v0.24.1? I'm trying to make it work and so far it looks like 10.96.0.0/12 should be used instead of 10.0.0.0/24, and the new DNS server IP is 10.96.0.10.
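
If that's right, the routing and resolver parts of the script above would presumably become something like this (untested sketch based on those values):

sudo route -n delete 10.0.0.0/24 || true
sudo route -n add 10.96.0.0/12 "$(minikube ip)"

cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver 10.96.0.10
domain cluster.local
search_order 1
EOF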

@tstromberg
Contributor

Question: does the new minikube tunnel command resolve this issue? If not, do you mind describing your use case a bit more? Thanks!

@metametadata

According to the docs, minikube tunnel "creates a route to services deployed with type LoadBalancer". But in my case there are no LoadBalancer services, only NodePort ones.

In the end I want to access internal Kubernetes DNS hostnames from the host OS, so that, say, in the Safari browser I could navigate to the Minio admin at http://xxx.yyy.svc.cluster.local:zzzz/minio. It would also be handy to have access to the same IPs as in the cluster (i.e. the 10.96.0.0/12 and maybe 172.17.0.0/16 ranges), but it's not something I personally need anymore really.

I posted a script (#38 (comment)) I used for my case (but note that it doesn't work with Hyperkit driver).

@tstromberg added the priority/awaiting-more-evidence label Jan 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 29, 2019
@burdiyan

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Apr 29, 2019
@tstromberg added the r/2019q2 label May 22, 2019
@tstromberg changed the title from "vpn or proxy for easier local development" to "Proxy for easier access to NodePort services" May 22, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 20, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 19, 2019
@metametadata

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Sep 20, 2019
@burdiyan

Just FYI: another VPN written in Go: https://github.com/xitongsys/pangolin

I still dream about the day when I could run kubectl connect or something like that, and access the cluster from a local machine as if I were on the same network :)

@brainfull

brainfull commented Nov 6, 2019

Here is how I do it, using the port proxy rules available in Windows to establish an SSH connection to a NodePort service. My setup is Hyper-V on Windows 10 Pro. I hope it at least gives you some food for thought.

  1. Use an internal VM switch. You can set it up easily with the following PowerShell script. It takes care of creating the VM switch if it doesn't exist and establishes ICS between your internet connection and the internal VM switch.
    Set-ICS.ps1.txt
    Open PowerShell and call the script. In the following example it creates a VM switch named 'minikube':
    ./Set-ICS.ps1 -VMSwitch minikube Enabled

  2. Create your minikube VM. Open PowerShell and call the following command. In this example it creates a VM named 'minikube' using the VM switch named 'minikube':
    minikube start --vm-driver hyperv --hyperv-virtual-switch minikube

  3. From that point on, your VM 'minikube' is available internally on your computer under the hostname (VM name).mshome.net; if you followed the previous instructions, that is 'minikube.mshome.net'. It is the ICS DHCP server that takes care of defining that hostname in C:\Windows\System32\drivers\etc\hosts.ics

  4. Expose a service on a predefined NodePort. Here is an example of a YAML that exposes port 22 of a container on NodePort 30022; if you followed the previous instructions, that is 'minikube.mshome.net:30022'. In my case this is an OpenSSH server listening on port 22, so it allows me to SSH into my container.
    dev-service-bekno-worker-debug.yaml.txt

  5. Then you can open the port on your laptop, which has its own external IP address and hostname on your network. One way to do it in PowerShell is the following:
    netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=2222 connectaddress=minikube.mshome.net connectport=30022

  6. F**k yeah! In my case I can open an SSH connection on port 2222 from another computer. That opens up an SSH connection to a container within minikube!!! You may have to change your firewall rules to allow incoming connections on port 2222. If port 2222 or 30022 is not available because another service is running on it, the previous steps may fail, in which case you need to change the ports.

I hope it gets you to a working solution for your setup. There is definitely a lack of support for minikube on Windows, but I am committed to using it since it allows for greater productivity overall.

Have a look at issue #5072 if you wonder why I use an internal VM switch.

@burdiyan

Slack just open-sourced their global overlay network solution: https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579

It's a mix of IPSec and TincVPN, but simpler, faster, and written in Go. This may be useful for creating an overlay network between the host and Minikube pods.

@tstromberg
Contributor

@burdiyan - interesting news!

If someone wants to make this work, let me know. Help wanted!

@burdiyan

I hadn't really been using minikube for a while, and checked it out again recently. I discovered the minikube tunnel command, which I think solves exactly the issue being discussed here. I'm using it on macOS with the Hyperkit driver and it all works perfectly. I ditched Docker for Mac and just use the docker daemon inside the minikube VM.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 16, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 15, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
