kubectl port-forward should allow forwarding to a Service #15180

Closed
thockin opened this issue Oct 6, 2015 · 34 comments · Fixed by #59809

Comments

@thockin
Member
commented Oct 6, 2015

No description provided.

@bgrant0607
Member
commented Oct 7, 2015

Currently only accepts pods:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/portforward.go#L78

We could accept resource/name and "resource name" and assume pods by default.
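For illustration, the two spellings of that proposal might look something like this (hypothetical names and ports):

```sh
# "resource/name" form: kubectl resolves a matching pod behind the named resource.
kubectl port-forward deployment/www 8080:80

# "resource name" form: type and name passed as separate arguments.
kubectl port-forward deployment www 8080:80

# A bare name keeps today's behaviour and is assumed to be a pod.
kubectl port-forward www-abc123 8080:80
```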

@feiskyer
Member
commented Oct 7, 2015

There is a small problem with service port-forwarding: a service is not bound to a specific node, so the forward would first have to be scheduled onto some node, and no such scheduler exists today.

@abourget
commented May 25, 2016

Couldn't it pick a random node? They all route to services, don't they? Perhaps have an option to select a particular node?

@erictune
Member
commented Sep 12, 2016

I want this for Charts. For example, if you have a chart that has one mysql server in it, and a service to give it a stable name, I don't want to have to keep figuring out the pod name.

Server-side resolution of the service to a pod would be ideal, but client-side resolution would also be a big improvement.

@eljefedelrodeodeljefe
Contributor
commented Feb 3, 2017

This would be helpful for (MongoDB) StatefulSets as well. Currently, connecting to one of its pods has odd behaviours.

@cmoad
commented Feb 5, 2017

As a near-term solution, we grab the first pod name using a template and pass that into the port-forward command.

Example:
kubectl --namespace logger port-forward $(kubectl --namespace logger get pod -l service=kibana -o template --template="{{(index .items 0).metadata.name}}") 5601:5601

@Analect
commented Feb 10, 2017

@cmoad
I am port-forwarding Kibana as you show above, and it was working fine for me yesterday ... I was able to view and interact with the developer tools and run various REST API calls.

However, today ... I get this error on the main kibana screen ...
Kibana did not load properly. Check the server output for more information.

... and the developer tools will no longer render.

[screenshot of the Kibana error]

Are you aware of anything that might limit what gets forwarded via kubectl port-forward, and why all the assets don't appear to load? The screenshot above is from a Chrome incognito window.

I came across this on the ES forum, but I'm not sure if it's relevant in this case.
https://discuss.elastic.co/t/kibana-did-not-load-properly/64867

Thanks.

@cmoad
commented Feb 10, 2017

@Analect I believe kibana has the option of making Elasticsearch calls on the server running kibana or on your client machine. If it is configured on the client machine, then you need to also have access to the Elasticsearch server from your client. I can't think of any other reason why it wouldn't work. We use kibana exactly as you show and Elasticsearch calls are made from the server running kibana.

@Analect
commented Feb 11, 2017

@cmoad ... thanks. I was able to resolve it by removing the .kibana index. Not sure what the problem was.

@kargakis
Member
commented May 7, 2017

@shahidhk
commented May 7, 2017

We are looking at this issue as it is a pressing use case for us. Currently we use port-forward to a pod, but when the pod restarts, the tunnel breaks. I'm thinking of possible ways to implement a persistent tunnel to a service.

Taking the service name as an argument and resolving it to a corresponding pod to forward the port to seems like the easiest approach to implement. Restarts and multiple pods would have to be handled, though.

What about the kubectl proxy command? I'm a bit confused, since that one is closer to proxying services to the client machine. Would adding port support there make more sense than extending port-forward? Something like kubectl proxy namespace/ui --port 8080:8080.

A proposal for this was created in #36819, but it is closed as of now.

Long-time k8s user, but new to the codebase. Sorry if I'm missing something.

@JeanMertz
commented May 24, 2017

This would definitely be a nice feature to have. We currently have to do things like:

kubectl port-forward $(kubectl get pod -o=name -lcomponent=... | awk -F/ '{print $2}' | head -1) PORT

to get a single pod. It would be better (and also more resilient, since the above command always takes the first pod, no matter the state) to go through a service instead.
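A jsonpath-based variant of the same lookup (with the same placeholder label selector) avoids the awk/head pipeline:

```sh
# Pick the first matching pod's name via jsonpath; the label selector is a placeholder.
kubectl port-forward "$(kubectl get pod -l component=... -o jsonpath='{.items[0].metadata.name}')" PORT
```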

And as @shahidhk stated, having the benefit of keeping the connection alive even when a pod restarts would obviously be a great feature to have.

@abourget
commented May 26, 2017

This is an everyday feature; I use it as often as get pods .. any hack would be appreciated!

@julianvmodesto
Contributor
commented Aug 31, 2017

@deads2k what was the result of your related work in OpenShift mentioned in #36819? Is it still related and likely the best solution for this use case?

@daggerok
commented Oct 23, 2017

Yeah, it would be nice to have service port forwarding. These days with minikube, when minikube service web-app or minikube service web-app --url doesn't work, I'm using pod port forwarding:

terminal:

cat web-app-pod.yml
# ...
metadata:
  name: web-app
# ...

export WEBAPP_POD=$(kubectl get pods -n $NAMESPACE | grep web-app | awk '{print $1;}')
kubectl port-forward -n $NAMESPACE $WEBAPP_POD 8080

browser / client:

open http://localhost:8080

Regards,
Maksim

@mediafreakch
commented Nov 30, 2017

This was never mentioned here, but it is exactly what I was looking for: https://docs.giantswarm.io/guides/accessing-services-from-the-outside/#api-access

Accessing a random pod through the service via URL.

First, simply use kubectl proxy --port 8002.
Then you can access your apps like this: http://localhost:8002/api/v1/proxy/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/

@e1senh0rn
commented Dec 7, 2017

@mediafreakch Proxy works for HTTP services, but it won't work for raw TCP/UDP (postgres, powerdns, etc.).
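The raw-TCP use case this issue asks for would look something like this sketch (service name and ports are illustrative, and it assumes the svc/ syntax requested here):

```sh
# Forward a raw TCP port to a Postgres service; no HTTP rewriting is involved.
kubectl port-forward svc/postgres 5432:5432

# Then connect with any plain TCP client locally, e.g. psql.
psql -h 127.0.0.1 -p 5432 -U postgres
```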

@shiywang
Member
commented Dec 20, 2017

cc @vfreex

@jbeda
Contributor
commented Dec 20, 2017

Proxy also does deep content inspection to rewrite URLs. This is appropriate for things like the UI but will lead to surprises for other use cases: https://github.com/kubernetes/apimachinery/blob/18a564baac720819100827c16fdebcadb05b2d0d/pkg/util/proxy/transport.go#L71

FWIW, I have a very early side project to implement this client-side. If it proves out, I'll release it and we can look at perhaps moving it into kubectl.

@vfreex
Contributor
commented Dec 22, 2017

IMO it is more reasonable to implement this feature in port-forward, not proxy.
The syntax would be something like kubectl -s SERVICE [LOCAL_PORT]:[SERVICE_PORT] [...].
The -p option for specifying the pod name is deprecated; perhaps it should be brought back.

@phsiao
Contributor
commented Feb 10, 2018

My attempt to support this is in PR #59705; I would appreciate comments and reviews.

@phsiao
Contributor
commented Feb 11, 2018

Want to bring #59733 to the attention of people who are interested in this issue.

k8s-github-robot pushed a commit that referenced this issue Feb 13, 2018
Kubernetes Submit Queue
Merge pull request #59705 from phsiao/15180_port_forward_with_resource_name

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

kubectl port-forward allows using resource name to select a matching pod

**What this PR does / why we need it**:

#15180 describes use cases where port-forward should accept a resource name for selecting a pod.

**Which issue(s) this PR fixes**:

Add support so resource/name can be used to select a pod.

**Special notes for your reviewer**:

I decided to reuse `AttachablePodForObject` to select a pod using a resource name, and extended it to support Service (which it previously did not). I think that should not be a problem, and it may help improve attach's use case. If it makes more sense to fork the function I'd be happy to do so. The practice of waiting for pods to become ready is also copied over.

To keep the change minimal, I also decided to resolve the pod from the resource name in Complete(), following the pattern in attach.

**Release note**:

```release-note
kubectl port-forward now allows using a resource name (e.g., deployment/www) to select a matching pod, and allows the use of --pod-running-timeout to wait until at least one pod is running.
kubectl port-forward no longer supports the deprecated -p flag.
```
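A brief usage sketch of the behaviour described in the release note above (names and ports are illustrative):

```sh
# Select a matching pod behind a deployment instead of naming the pod directly.
kubectl port-forward deployment/www 8080:80

# Wait up to one minute for at least one pod to be running before forwarding.
kubectl port-forward deployment/www --pod-running-timeout=1m 8080:80
```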
k8s-github-robot pushed a commit that referenced this issue Feb 16, 2018
Kubernetes Submit Queue
Merge pull request #59809 from phsiao/59733_port_forward_with_target_port

Automatic merge from submit-queue (batch tested with PRs 59809, 59955). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

kubectl port-forward should resolve service port to target port

**What this PR does / why we need it**:

Continuing the work in #59705, this PR adds additional support for looking up the targetPort for a service, as well as enabling the use of svc/name to select a pod.

**Which issue(s) this PR fixes**:
Fixes #15180
Fixes #59733

**Special notes for your reviewer**:

I decided to create pkg/kubectl/util/service_port.go to contain two functions that might be re-usable.

**Release note**:
```release-note
`kubectl port-forward` now supports specifying a service to port forward to: `kubectl port-forward svc/myservice 8443:443`
```
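An illustrative sketch of the target-port resolution described above (service name and ports are placeholders):

```sh
# Inspect the service's port mapping: service port -> pod targetPort.
kubectl get svc myservice -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'

# Forward local 8443 to the service's port 443; kubectl resolves 443 to the selected pod's targetPort.
kubectl port-forward svc/myservice 8443:443
```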
@josdotso
commented Mar 22, 2018

@phsiao Thanks! Will your solution allow cluster admins to scope port-forwarding abilities using RBAC?

@josdotso
commented Mar 22, 2018

Just a note: There is a lot of synergy between this issue and #43962

@phsiao
Contributor
commented Mar 22, 2018

@josdotso the implementation did not change how it connects to the API, so the same RBAC rules should continue to work. The change does require permission to look up pods given a resource name, so that needs to be adjusted as needed.

@Ascendance
commented Mar 25, 2018

would love to have this feature 👍

@emadolsky
commented Jul 22, 2018

I think a very nice idea in this context (while port-forwarding to a service) would be to watch the pod we have chosen to forward to and, when it goes down, try to connect to another ready replica.
Right now we choose a pod without checking its readiness, and if it gets terminated after connecting, there is no failover.
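A crude client-side approximation of that idea, assuming the svc/ syntax from #59809, is to simply re-run port-forward whenever the tunnel drops:

```sh
# Not real failover: each restart re-selects a pod behind the service, but in-flight connections are still lost.
while true; do
  kubectl port-forward svc/myservice 8080:80 || true
  sleep 1
done
```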

@mxxk
commented Aug 5, 2018

@phsiao thanks for enabling service-level port forwarding in your PR #59809. However, it seems that kubectl port-forward svc/my-service only forwards traffic to one pod, even when the service has multiple endpoints. Is this the intended behavior? It seems a bit confusing, as it is different from how services load-balance between multiple pods.

By contrast, using kubectl proxy as described by @mediafreakch, it is possible to reach different pods, as expected.

All of this is in Kubernetes v1.9:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.3", GitCommit:"9b5b719c5f295c99de68ffb5b63101b0e0175376", GitTreeState:"clean", BuildDate:"2018-05-31T18:32:23Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
@phsiao
Contributor
commented Aug 6, 2018

@mxxk port-forward was originally designed to attach to one pod for debugging and other purposes; the improvement in #59809 was really to help with service discovery, so you don't need to look up the pod name first when you don't care which of several qualifying pods you attach to. port-forward handles more than just HTTP connections, so the idea of application-level load balancing does not apply, and multiple active endpoints are not supported. There is a separate feature request for re-attaching if the active pod terminates, but no work has been done on it yet.

@mxxk
commented Aug 7, 2018

Thanks for clarifying @phsiao. Perhaps this point can be further explained in the help page for the port forward command? The current wording

# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the service
kubectl port-forward service/myservice 5000 6000

simply says "a pod selected by the service", but it would be great if it explicitly stated that one pod selected by the service is chosen and all traffic is forwarded to it for the entire lifetime of the port-forward command 🙂

@lostpebble
commented Jan 24, 2019

Oh my... I have been smacking my head against the wall for the past few hours wondering why on earth, when I am port-forwarding to a service, I am only sending traffic to a single pod.

Dug through all service documentation to see if I had suddenly gone crazy and didn't know how to configure a simple service correctly.

Please, for the sake of all future head smackers, make it way more clear that port-forwarding to a service still actually uses a single pod - and does not function as one might expect.

@kfox1111
commented Jan 24, 2019

Yeah. And worse, the port forward is not re-evaluated when the pod dies, so do a rolling upgrade and the port forward breaks. FYI.

Don't get me wrong, I'm very happy that you no longer have to go hunting for a pod name to do the port forward. :) But there are some remaining issues with the current implementation.

@lostpebble
commented Jan 24, 2019

Yeah, I think the easiest way to prevent the confusion is to output a simple message to the console when the port-forward command is started with a service. Something like:

port-forward allows the use of services for convenience purposes only. Behind the scenes it connects to a single pod directly. The connection will be dropped should this pod die.

Would have saved a lot of time 😅

Edit: Should maybe include deployments too, as it could be assumed that port-forwarding to a deployment makes use of some kind of load distribution as well.

@darenjacobstellic
commented Jul 12, 2019

So, having read this, is there any way to make a port-forward persist when the connected pod drops?
