sharing-clusters.md

File metadata and controls

155 lines (113 loc) · 4.78 KB

WARNING WARNING WARNING WARNING WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should refer to the docs that go with that version.

The latest 1.0.x release of this document can be found [here](http://releases.k8s.io/release-1.0/docs/user-guide/sharing-clusters.md).

Documentation for other releases can be found at releases.k8s.io.

# Sharing Cluster Access

Client access to a running Kubernetes cluster can be shared by copying the kubectl client config bundle (kubeconfig). By default this bundle lives in `$HOME/.kube/config` and is generated by `cluster/kube-up.sh`. Sample steps for sharing a kubeconfig are below.

1. Create a cluster

   ```sh
   $ cluster/kube-up.sh
   ```

2. Copy the kubeconfig to the new host

   ```sh
   $ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
   ```

3. On the new host, make the copied config available to kubectl:

   - Option A: copy to the default location

     ```sh
     $ mv /path/to/.kube/config $HOME/.kube/config
     ```

   - Option B: copy to the working directory (from which kubectl is run)

     ```sh
     $ mv /path/to/.kube/config $PWD
     ```

   - Option C: manually pass the kubeconfig location to kubectl

     ```sh
     # via environment variable
     $ export KUBECONFIG=/path/to/.kube/config

     # via commandline flag
     $ kubectl ... --kubeconfig=/path/to/.kube/config
     ```
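Whichever option you choose, remember that the copied bundle contains cluster credentials, so it is worth restricting its file permissions. A minimal sketch of Option A, assuming (hypothetically) the copied file landed at `/tmp/copied-kubeconfig`:

```sh
# Stand-in for the file copied over by scp (hypothetical path and contents).
printf 'apiVersion: v1\nkind: Config\n' > /tmp/copied-kubeconfig

# Move it into the default location kubectl checks.
mkdir -p "$HOME/.kube"
mv /tmp/copied-kubeconfig "$HOME/.kube/config"

# The bundle holds credentials; make it readable by the owner only.
chmod 600 "$HOME/.kube/config"
```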

## Manually Generating kubeconfig

kubeconfig is generated by `kube-up.sh`, but you can also generate your own using (any desired subset of) the following commands.

```sh
# create cluster entry
$ kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# If TLS is not needed, replace --certificate-authority and --embed-certs
# with --insecure-skip-tls-verify=true.

# create user entry, here with bearer token credentials generated on the kube master
$ kubectl config set-credentials $USER_NICK \
    --token=$token \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# Use either a token or username/password, not both. For basic auth, replace
# --token=$token with --username=$username --password=$password.

# create context entry
$ kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICK --user=$USER_NICK
```

Notes:

- The `--embed-certs` flag is needed to generate a standalone kubeconfig that will work as-is on another host.
- `--kubeconfig` is both the preferred file to load config from and the file to save config to. In the above commands the `--kubeconfig` flag could be omitted if you first run

  ```sh
  $ export KUBECONFIG=/path/to/standalone/.kube/config
  ```

- The ca_file, key_file, and cert_file referenced above are generated on the kube master at cluster turnup and can be found on the master under `/srv/kubernetes`. Bearer token and basic auth credentials are also generated on the kube master.

For more details on kubeconfig see kubeconfig-file.md, and/or run kubectl config -h.
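For illustration, a standalone kubeconfig produced by the commands above has roughly the following shape. This is a sketch with hypothetical nicknames and placeholder values, not real credentials:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                  # $CLUSTER_NICK
  cluster:
    server: https://1.1.1.1
    certificate-authority-data: <base64-encoded CA cert>   # embedded by --embed-certs
users:
- name: my-user                     # $USER_NICK
  user:
    token: <bearer token>
contexts:
- name: my-context                  # $CONTEXT_NAME
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
```

Note that `kubectl config set-context` only defines the context entry; to make it the default, also run `kubectl config use-context $CONTEXT_NAME`, which sets `current-context`.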

## Merging kubeconfig Example

kubectl loads and merges config from the following locations (in order of precedence):

1. `--kubeconfig=path/to/.kube/config` commandline flag
2. `KUBECONFIG=path/to/.kube/config` environment variable
3. `$PWD/.kubeconfig`
4. `$HOME/.kube/config`
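The lookup order above can be sketched in plain shell. The `resolve_kubeconfig` helper below is hypothetical and simplified: it omits the `--kubeconfig` flag case and picks the first matching source, whereas kubectl actually merges all of the sources, with earlier ones taking precedence for conflicting entries:

```sh
# Hypothetical helper mirroring kubectl's lookup order (simplified).
resolve_kubeconfig() {
  if [ -n "$KUBECONFIG" ]; then
    echo "$KUBECONFIG"              # 2. environment variable
  elif [ -f "$PWD/.kubeconfig" ]; then
    echo "$PWD/.kubeconfig"         # 3. working directory
  else
    echo "$HOME/.kube/config"       # 4. per-user default
  fi
}

KUBECONFIG=/path/to/other/.kube/config resolve_kubeconfig
# -> /path/to/other/.kube/config
```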

If you create clusters A and B on host1, and clusters C and D on host2, you can make all four clusters available on both hosts by running:

```sh
# on host2, copy host1's default kubeconfig, and merge it from env
$ scp host1:/path/to/home1/.kube/config path/to/other/.kube/config

$ export KUBECONFIG=path/to/other/.kube/config

# on host1, copy host2's default kubeconfig, and merge it from env
$ scp host2:/path/to/home2/.kube/config path/to/other/.kube/config

$ export KUBECONFIG=path/to/other/.kube/config
```

Detailed examples and explanation of kubeconfig loading/merging rules can be found in kubeconfig-file.md.
