"Upgrade request required" when posting port forward #585

Closed
yanivoliver opened this issue Jul 26, 2018 · 12 comments

@yanivoliver

When attempting to post a port forward to a pod, for example:

v1 = client.CoreV1Api()
v1.connect_post_namespaced_pod_portforward("pod-name", "namespace-name")

An error is thrown:

*** ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 26 Jul 2018 14:11:26 GMT', 'Content-Length': '139', 'Content-Type': 'application/json'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Upgrade request required","reason":"BadRequest","code":400}

This happens regardless of whether the optional 'ports' parameter is passed to the API call.

Environment:

Python 2.7.10 (default, Jul 15 2017, 17:16:57)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)] on darwin
Package version: kubernetes (6.0.0)
@slindner05

Anyone ever make progress on this or get port-forward to work from within the python client?

@ssharma555

I am facing this issue. Any steps to get this working?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 19, 2019
@yanivoliver
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 5, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2019
@gpkc

gpkc commented Sep 25, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 25, 2019
@micw523
Contributor

micw523 commented Sep 25, 2019

This should be tracked in #166
/close

@k8s-ci-robot
Contributor

@micw523: Closing this issue.

In response to this:

This should be tracked in #166
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@atmask

atmask commented Apr 19, 2021

This still seems to be a relevant issue even after the release of 12.0.0. Was any progress made here?

@trobert2

I am also facing this issue. Could we get some guidance?

@eguven

eguven commented Jul 4, 2021

I don't think that this

client.CoreV1Api().connect_post_namespaced_pod_portforward('pod-name', 'namespace-name')

is intended to be called directly. In the client's examples it is wrapped with kubernetes.stream.portforward to get a raw socket, or used via the create_connection monkey-patching option.
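
A minimal sketch of that wrapping approach, based on the client's examples (the pod name, namespace, and port below are placeholders):

from kubernetes import client, config
from kubernetes.stream import portforward

config.load_kube_config()
v1 = client.CoreV1Api()

# Passing the API method to portforward() lets the stream package
# perform the protocol upgrade the endpoint requires, instead of the
# plain HTTP request the generated method makes on its own.
pf = portforward(
    v1.connect_get_namespaced_pod_portforward,
    "pod-name",
    "namespace-name",
    ports="8080",
)

# socket() returns an in-process socket connected to the pod's port.
sock = pf.socket(8080)
sock.setblocking(True)

The monkey-patching alternative replaces socket.create_connection so that ordinary HTTP libraries transparently connect through pf.socket(port); see the client's examples/pod_portforward.py.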

@dminca

dminca commented Nov 4, 2021

I managed to get this working by waiting for the Pod to reach the Running state:

from kubernetes import client, utils, watch
from kubernetes.client.rest import logger
from kubernetes.stream import portforward
import time

# Excerpted from a class method: self._kubernetes_client,
# self._kubernetes_core_v1_client, and filename are defined elsewhere.
k8s_objects = utils.create_from_yaml(
    k8s_client=self._kubernetes_client,
    yaml_file=filename,
    verbose=True
)

# Watch pods matching the label selector until one reaches the
# Running phase or the 60-second timeout expires.
w = watch.Watch()
core_v1 = self._kubernetes_core_v1_client
start_time = time.time()
for event in w.stream(
        func=core_v1.list_namespaced_pod,
        namespace=k8s_objects[0][0].metadata.namespace,
        label_selector="env=integration,tier=backend",
        timeout_seconds=60):
    if event["object"].status.phase == "Running":
        w.stop()
        end_time = time.time()
        logger.info("%s started in %0.2f sec", k8s_objects[0][0].metadata.name, end_time - start_time)
        # Only open the port forward once the pod is running.
        pf = portforward(
            self._kubernetes_core_v1_client.connect_get_namespaced_pod_portforward,
            name="myapp",
            namespace="default",
            ports=8080
        )
        self._tcp = pf.socket(8080)
        self._tcp.setblocking(True)
        logger.info("Port %s exposed. You can reach it on localhost.", pf.local_ports[8080].port_number)
        return
    if event["type"] == "DELETED":
        logger.debug("%s deleted before it started", k8s_objects[0][0].metadata.name)
        w.stop()
        return

You'll need watch permission on the pods resource:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: watch-permission
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/portforward
  verbs:
  - watch
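
For completeness, a minimal sketch of a RoleBinding that attaches the Role above (the ServiceAccount name and namespace are placeholders; note that the port-forward call itself typically also needs the get or create verb on pods/portforward):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: watch-permission-binding
subjects:
- kind: ServiceAccount
  name: my-service-account  # placeholder
  namespace: default        # placeholder
roleRef:
  kind: Role
  name: watch-permission
  apiGroup: rbac.authorization.k8s.io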
