RFC: shipperctl admin interface #20

Closed
kanatohodets opened this issue Sep 25, 2018 · 7 comments

@kanatohodets
Contributor

Now that we're working to simplify setup, administration, and scripting, I think it makes sense to re-open the discussion around a shipperctl CLI for shipper.

I'd like to start with how we can simplify Shipper's setup and administration, so I'll focus on shipperctl admin:

shipperctl admin init

Create YAML manifests for shipper, shipper-state-metrics, and the Shipper CRDs (a sketch of the output follows the args below). These would include:

  • Deployment objects for each with a pinned version of Shipper
  • Service account for Shipper with the appropriate Role and RoleBinding
Args
  • -n/--namespace namespace to run Shipper in, defaults to shipper-system
  • -i/--install to apply the manifests immediately to the cluster
  • --kube-config same as kubectl
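
A rough sketch of what init might emit, to make the intent concrete (resource names and the image tag below are illustrative, not decided):

managementClusters: n/a

apiVersion: v1
kind: Namespace
metadata:
  name: shipper-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shipper
  namespace: shipper-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shipper
  namespace: shipper-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shipper
  template:
    metadata:
      labels:
        app: shipper
    spec:
      serviceAccountName: shipper
      containers:
      - name: shipper
        # pinned Shipper version; image name and tag are placeholders
        image: bookingcom/shipper:v0.x.y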

shipperctl admin cluster register $name $cluster-api-url

Create a YAML manifest for a new application Cluster object (sketched after the args below).

Args
  • -n/--namespace shipper system namespace
  • -i/--install apply the manifests directly instead of spitting out to disk
  • --kube-config same as kubectl
  • --region region name for the new cluster
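
For illustration, the Cluster object emitted here might look like the following. The field names are my best guess at Shipper's Cluster CRD and the values are placeholders, so treat this as an assumption rather than a spec:

apiVersion: shipper.booking.com/v1alpha1
kind: Cluster
metadata:
  name: production-europe-a
spec:
  # $cluster-api-url from the command line
  apiMaster: https://kube-api.production-europe-a.example.com
  # --region from the command line
  region: eu-west1
  capabilities: []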

shipperctl admin cluster prepare $name $cluster-api-url

Create YAML manifests for the given application cluster (sketched after the args below):

  • Shipper namespace
  • service account
  • role / rolebinding
Args
  • -n/--namespace shipper system namespace in both clusters
  • -i/--install apply the manifests directly instead of spitting out to disk
  • --kube-config same as kubectl
  • --region region name for the new cluster
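
As a sketch, the manifests for the application cluster could look roughly like this; the exact RBAC rules Shipper needs are still an open question, so the cluster-admin binding below is only a placeholder:

apiVersion: v1
kind: Namespace
metadata:
  name: shipper-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shipper
  namespace: shipper-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: shipper
subjects:
- kind: ServiceAccount
  name: shipper
  namespace: shipper-system
roleRef:
  kind: ClusterRole
  # placeholder: the real role/permissions are to be decided
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io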

shipperctl admin cluster join $name $cluster-api-url

Combine 'register' and 'prepare --install' into a single command. This will create the namespace, service account, and role/role binding on the application cluster. Then:

Create YAML manifests for:

  • Shipper Cluster object
  • Shipper-formatted service account Secret object for this cluster (type: Opaque, etc.; sketched after the args below)
Args
  • -n/--namespace shipper system namespace in both clusters
  • -i/--install apply the manifests directly instead of spitting out to disk
  • --kube-config same as kubectl
  • --region region name for the new cluster
  • --insecure whether to set --insecureTLSVerify (and thus allow connecting to Docker for Desktop's Kubernetes)
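
To give an idea of the shape of that Secret, here is a sketch; the data keys mirror what a service account token secret normally carries, and the exact format Shipper expects is an assumption to be confirmed:

apiVersion: v1
kind: Secret
metadata:
  # named after the Cluster object it belongs to
  name: production-europe-a
  namespace: shipper-system
type: Opaque
data:
  ca.crt: <base64-encoded CA bundle from the application cluster>
  token: <base64-encoded service account token>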
@parhamdoustdar
Contributor

parhamdoustdar commented Sep 26, 2018 via email

@ksurent
Contributor

ksurent commented Sep 26, 2018

Have you considered turning Shipper into an operator?

I don't know if it'd be actually feasible given that we're not using Operator SDK. But my understanding of operators tells me that this is a fitting use case.

@kanatohodets
Contributor Author

(we discussed the operator idea in person; it feels like it might be overkill for now -- also some potential bootstrapping problems, but we haven't dug into it carefully)

I think maybe the way to go here is to define some simple config file for a shipperctl admin join-clusters cluster management command to consume. Like:

managementClusters:
- name: my-cool-management-cluster
  url: $kube_api_endpoint

applicationClusters:
- name: new-k8s-canary
  region: eu-west1
  zone: a
  capabilities:
  - ipv6
  - k8s-1.12 # now that I'm writing this, perhaps capabilities should be KV instead of just K
  url: $kube_api_endpoint
  # optionally: weight (for hashing/app placement); hashKey (for very fine-grained control over app placement)
- name: production-europe-a
  region: eu-west1
  zone: a
  capabilities:
  - gdpr
  - ipv4
  - ssd_local_disk
- name: production-us-a
  region: us-east2
  zone: b
  capabilities:
  - ipv4
  - ssd_local_disk

So you could give that manifest to shipperctl admin join-clusters -f clusters.yaml (or something), and it would work through the steps described in the commands above: create Cluster objects in the management cluster, create the shipper-system namespace and Shipper RBAC/service accounts in the application clusters, copy service account tokens into Secrets in the management cluster, etc.

All of that should be dry-run-able, and it should print out what it's doing along the way.

Thoughts? One thing I like about this is that we could provide a cluster config file that works for Minikube/Docker for Desktop's Kubernetes out of the box, so the experience of setting up Shipper to play with it or dev on it becomes much simpler.

@parhamdoustdar
Contributor

I really like the idea of a config file like this, especially if we make the command idempotent. That way adding new clusters is as simple as modifying the config file and running the command again.

@isutton
Contributor

isutton commented Oct 8, 2018

One idea to make it easier to bootstrap development locally would be to slap a context key into both managementClusters and applicationClusters items that would override the url if specified.

It would look like the following (change docker-for-desktop to minikube when on Minikube):

managementClusters:
- name: docker-for-desktop
  context: docker-for-desktop
  url: ~
applicationClusters:
- name: docker-for-desktop
  context: docker-for-desktop
  region: eu-west1
  capabilities: {}
  url: ~

@isutton
Contributor

isutton commented Oct 8, 2018

We (@icanhazbroccoli, @isutton, @parhamdoustdar) had an offline gathering to discuss what we would include in the initial implementation of shipperctl (although we haven't committed to any particular CLI specification as of yet).

The discussion revolved around @kanatohodets's idea of a cluster configuration file, and how it should be used. This is important since it'll be used in the quick-start guide once implemented.

One of the topics was the need for the url field in the cluster configuration file; having only the url might not be sufficient to connect to multiple clusters: certificates and other artifacts might be required for successful communication with Kubernetes clusters.

Instead of having a url field in the managementClusters and applicationClusters items, we'd prefer to treat the name field as the name of a context present in the operator's ~/.kube/config file. The reason for this is that both Minikube and Docker for Desktop's Kubernetes populate this file with the proper configuration by default.

This means that we can provide two configuration files for development that would work out-of-the-box for both development environments (or any external cluster, for that matter).

For example, the following listing could be used to add the required development configuration for Docker for Desktop's Kubernetes:

managementClusters:
- name: docker-for-desktop
applicationClusters:
- name: docker-for-desktop
  region: local
  capabilities: {}

And similarly, for Minikube:

managementClusters:
- name: minikube
applicationClusters:
- name: minikube
  region: local
  capabilities: {}

On production set-ups, the operator should create a context for each cluster that will be part of a Shipper installation. Those contexts can be created by any tool, as long as the result works with kubectl.

This means that in the following listing, production-europe-a, production-us-a and management-europe-a should exist as contexts before the file is applied:

managementClusters:
- name: management-europe-a
applicationClusters:
- name: production-europe-a
  region: eu
  capabilities:
    gdpr: true
- name: production-us-a
  region: us
  capabilities: {}

One should be able to store this file inside, for example, a git repository (so it plays well with GitOps).

In order to add another Kubernetes cluster to a Shipper installation, it'd be a matter of 1) adding a context for the cluster to be added and 2) adding the appropriate entry to the cluster configuration file.

The listing below exemplifies the addition of a cluster in China:

managementClusters:
- name: management-europe-a
applicationClusters:
- name: production-europe-a
  region: eu
  capabilities:
    gdpr: true
- name: production-us-a
  region: us
  capabilities: {}
- name: production-cn-a
  region: cn
  capabilities: {}

We also discussed adding a context field to managementClusters and applicationClusters items; this would mean using the connection configuration from that context, while keeping name as the cluster name in the management cluster (although its usefulness should be verified):

applicationClusters:
- name: production-europe-a
  region: eu
  context: gke-prod-eu-a

We also briefly touched on some of the implementation details of this approach: the API we have available from client-go makes it extremely easy to get the current context's configuration, but not so easy to read the file as a whole. One initial approach could be to iterate over all the clusters by modifying the current context (think kubectl config set-context production-europe-a) before loading the configuration through its API, and then use that connection configuration to perform the appropriate actions in the remote Kubernetes cluster.

We believe that having this command available would reduce our quick-start guide to the following set of actions:

  • Download the shipperctl artifact from Shipper's repository: curl https://www.github.com/bookingcom/shipper/.../shipper-linux

  • Download a cluster configuration file (both for Minikube and Docker for Desktop's Kubernetes) from Github, and apply it using shipperctl: curl https://www.github.com/bookingcom/shipper/blob/.../minikube.yaml | shipperctl -f -

  • Download Shipper's deployment manifest, and apply it using kubectl: curl https://www.github.com/bookingcom/shipper/blob/.../deployment.yaml | kubectl apply -f -

  • Download an example application manifest from Github, and apply it using kubectl: curl https://www.github.com/booking.com/shipper/blob/.../application.yaml | kubectl apply -f -

  • Modify the application's .spec.template to start a Release process and walk the user through the strategy steps.

@kanatohodets
Contributor Author

This RFC has been merged in #29, and Parham is now hacking on the implementation.
