
upgraded requests to aggregated API servers do not work when using ssh tunnels #71808

Open
liggitt opened this Issue Dec 6, 2018 · 3 comments


liggitt commented Dec 6, 2018

What happened:

Upgrade requests to aggregated APIs (like websocket-based watch requests) hang when the kube-apiserver is run with --ssh-user.

What you expected to happen:

Websocket-based watch requests against the aggregated API work properly.

How to reproduce it (as minimally and precisely as possible):

  1. Start a kube-apiserver configured to use a node dialer to reach services/pods (e.g. with --ssh-user)
  2. Run an aggregated API server (like the sample-apiserver) on a node other than the one running the kube-apiserver
  3. Start a websocket-based watch against the aggregated API (a minimal client sketch follows this list)
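
A minimal client sketch for step 3, assuming gorilla/websocket; the host, bearer token, and the sample-apiserver group/version/resource in the URL are placeholders, not values taken from this issue:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    func main() {
        // Placeholders: substitute the real apiserver host, a valid token, and
        // the aggregated API's group/version/resource.
        url := "wss://HOST:6443/apis/wardle.k8s.io/v1alpha1/namespaces/default/flunders?watch=true"
        header := http.Header{"Authorization": {"Bearer TOKEN"}}

        dialer := websocket.Dialer{
            // Test-only: skip verification of the apiserver's serving cert.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }
        conn, _, err := dialer.Dial(url, header)
        if err != nil {
            log.Fatalf("websocket dial failed: %v", err)
        }
        defer conn.Close()

        // With the bug described in this issue, the watch hangs here and no
        // events are ever delivered.
        for {
            _, msg, err := conn.ReadMessage()
            if err != nil {
                log.Fatalf("read failed: %v", err)
            }
            log.Printf("event: %s", msg)
        }
    }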

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

/kind bug

@liggitt liggitt changed the title from websocket watch of aggregated API does not work with SSH tunnels enabled to websocket watch of aggregated API does not work when using ssh tunnels Dec 6, 2018


liggitt commented Dec 6, 2018

/sig api-machinery

/cc lavalamp

@liggitt liggitt changed the title from websocket watch of aggregated API does not work when using ssh tunnels to upgraded requests to aggregated API servers do not work when using ssh tunnels Dec 6, 2018


liggitt commented Dec 6, 2018

The code in question is here:

    // we need to wrap the roundtripper in another roundtripper which will apply the front proxy headers
    proxyRoundTripper, upgrade, err := maybeWrapForConnectionUpgrades(handlingInfo.restConfig, handlingInfo.proxyRoundTripper, req)
    if err != nil {
        proxyError(w, req, err.Error(), http.StatusInternalServerError)
        return
    }
    proxyRoundTripper = transport.NewAuthProxyRoundTripper(user.GetName(), user.GetGroups(), user.GetExtra(), proxyRoundTripper)

    // if we are upgrading, then the upgrade path tries to use this request with the TLS config we provide, but it does
    // NOT use the roundtripper. Its a direct call that bypasses the round tripper. This means that we have to
    // attach the "correct" user headers to the request ahead of time. After the initial upgrade, we'll be back
    // at the roundtripper flow, so we only have to muck with this request, but we do have to do it.
    if upgrade {
        transport.SetAuthProxyHeaders(newReq, user.GetName(), user.GetGroups(), user.GetExtra())
    }

    handler := proxy.NewUpgradeAwareHandler(location, proxyRoundTripper, true, upgrade, &responder{w: w})

handlingInfo.proxyRoundTripper is built from handlingInfo.restConfig, and inherits the proxyDialerFn built in CreateNodeDialer through that construction chain; a sketch of that wiring follows.
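
To illustrate the chain, a hedged sketch assuming a simplified shape for the real construction (buildProxyTransport is an illustrative stand-in, not an upstream function):

    package sketch

    import (
        "crypto/tls"
        "net"
        "net/http"
    )

    // buildProxyTransport mimics the ordinary (non-upgrade) path: a transport
    // built from a rest config that carries a custom Dial function keeps that
    // dialer, so proxied requests are routed through the SSH tunnel.
    func buildProxyTransport(tunnelDial func(network, addr string) (net.Conn, error), tlsCfg *tls.Config) http.RoundTripper {
        return &http.Transport{
            Dial:            tunnelDial, // the proxyDialerFn from CreateNodeDialer
            TLSClientConfig: tlsCfg,
        }
    }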

However, maybeWrapForConnectionUpgrades drops the custom dialer contained inside handlingInfo.proxyRoundTripper and handlingInfo.restConfig from the round tripper it returns.

That means upgrade requests fall back to using the default dialer, which cannot reach the aggregated API server.
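
A hedged sketch of that failure mode (buildUpgradeTransport is illustrative, not the upstream function):

    package sketch

    import (
        "crypto/tls"
        "net/http"
    )

    // buildUpgradeTransport mimics the upgrade path described above: the
    // transport is rebuilt from the TLS config alone.
    func buildUpgradeTransport(tlsCfg *tls.Config) http.RoundTripper {
        return &http.Transport{
            TLSClientConfig: tlsCfg,
            // Dial is intentionally left unset here to show the bug: net/http
            // falls back to the default dialer, the SSH tunnel dialer is lost,
            // and the upgrade connection to the aggregated API server hangs.
        }
    }

A fix would need to thread the custom dialer (or the original round tripper) through the upgrade path as well.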


lavalamp commented Dec 6, 2018
