
Allow Teleport Kubernetes support to be enabled for leaf/trusted clusters without requiring a dummy config on the root/main cluster #3087

webvictim opened this issue Oct 17, 2019 · 1 comment


What happened: Currently, there is no way to enable the Kubernetes proxy ports/endpoints on a central Teleport cluster (solely for the purpose of delegating access to Kubernetes apiservers in trusted/leaf clusters) without providing a 'dummy' kubeconfig YAML file that passes validation.

If the dummy file is not provided, the listener on port 3026 cannot be started, so tsh login commands will not update your ~/.kube/config file, even if the trusted/leaf cluster you're connecting to DOES have Kubernetes support enabled.

What you expected to happen: There should be a way to enable the Kubernetes part of the proxy_service and start the listener on port 3026 without needing to provide a kubeconfig file. It doesn't need to relay any requests; it just needs to make tsh login update ~/.kube/config when logging in to other clusters. Any mechanism that achieves this end goal would be acceptable.
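As a rough illustration of the desired end state (this is a hypothetical config shape, not an option Teleport currently supports), the kubernetes section of proxy_service would simply work without a kubeconfig_file:

```yaml
# Hypothetical desired configuration: the Kubernetes listener starts
# with no kubeconfig_file, purely so that tsh login updates
# ~/.kube/config and access can be delegated to leaf clusters.
proxy_service:
  enabled: yes
  kubernetes:
    enabled: yes
    listen_addr: 0.0.0.0:3026
    # no kubeconfig_file required
```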

How to reproduce it (as minimally and precisely as possible):

  • Set up two clusters - one main/root cluster without Kubernetes support enabled and another trusted/leaf cluster with Kubernetes support enabled
  • Connect the trusted cluster to the main cluster
  • Try logging into the main/root cluster and see that ~/.kube/config is not updated
  • Then try logging into the trusted/leaf cluster and see that ~/.kube/config is still not updated despite Kubernetes support being enabled
  • tsh logout
  • Provide dummy kubeconfig to main/root Teleport cluster and restart
  • Try logging into the main/root cluster and see that ~/.kube/config is updated
  • Then try logging into the trusted/leaf cluster and see that ~/.kube/config is now updated correctly
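The workaround in the last three steps amounts to a proxy_service stanza along these lines (a sketch assuming Teleport 4.x config syntax; the file path is illustrative):

```yaml
# Current workaround on the main/root cluster: a dummy kubeconfig
# must be supplied just to get the port 3026 listener started.
proxy_service:
  enabled: yes
  kubernetes:
    enabled: yes
    listen_addr: 0.0.0.0:3026
    # illustrative path to the dummy kubeconfig shown below
    kubeconfig_file: /etc/teleport/dummy-kubeconfig.yaml
```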

Dummy kubeconfig example:

apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    server: https://localhost/
    certificate-authority-data: yadayadayada
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: teleport
  name: teleport
current-context: teleport
users:
- name: teleport
  user: {}


  • Teleport version (use teleport version): 4.1.1
  • Tsh version (use tsh version): 4.1.1

@webvictim commented Nov 1, 2019 (comment minimized)
