# Introduction

Continuous integration and delivery is hard. This is a fact everyone can agree
on. But now we have all this wonderful technology and the problems are mainly
"How do I plug this with that?" or "How do I make these two products work
together?"

Well, there's **never** a simple and universal answer to these questions. In
this article series we'll progressively build a complete pipeline for continuous
integration and delivery using three popular products, namely Kubernetes, Helm
and Drone.

This first article acts as an introduction to the various technologies used
throughout the series. It is intended for beginners who have some
knowledge of Docker, how containers work, and the basics of Kubernetes. You can
entirely skip it if you have a running k8s cluster and a running Drone instance.

## Steps
- Create a service account for Tiller
- Initialize Helm
- Add a repo to Helm
- Deploy Drone on the new k8s cluster

## Technologies involved

### Drone

[Drone](https://drone.io/) is a Continuous Delivery platform built on Docker and
written in Go. Drone uses a simple YAML configuration file, a superset of
docker-compose, to define and execute Pipelines inside Docker containers.

It has the same approach as [Travis](https://travis-ci.org/), where you define
your pipeline as code in your repository. The cool feature is that every step in
your pipeline is executed in a Docker container. This may seem counter-intuitive
at first, but it enables a great plugin system: every plugin for Drone you might
use is a Docker image, which Drone pulls when needed. You have nothing to
install directly in Drone, as you would with Jenkins for example.
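As a rough sketch of what this looks like in practice (the repository, images, and step names here are hypothetical, and the exact syntax depends on your Drone version), a `.drone.yml` could define a test step and a plugin step:

```
pipeline:
  # runs inside the golang image; nothing to install on the Drone server
  test:
    image: golang:1.10
    commands:
      - go test ./...
  # the "plugin" is itself just a Docker image that Drone pulls when needed
  publish:
    image: plugins/docker
    repo: myuser/myapp
    tags: latest
```

Each step runs in its own container, which is exactly the plugin mechanism described above.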

Another benefit of running inside Docker is that the
[installation procedure](http://docs.drone.io/installation/) for Drone is really
simple. But we're not going to install Drone on a bare-metal server or inside a
VM. More on that later in the tutorial.

### Kubernetes

> Kubernetes (commonly stylized as K8s) is an open-source
> container-orchestration system for automating deployment, scaling and
> management of containerized applications that was originally designed by
> Google and now maintained by the Cloud Native Computing Foundation. It aims to
> provide a "platform for automating deployment, scaling, and operations of
> application containers across clusters of hosts". It works with a range of
> container tools, including Docker.
> <cite>[Wikipedia](https://en.wikipedia.org/wiki/Kubernetes) </cite>

Wikipedia summarizes k8s pretty well. Basically k8s abstracts the underlying
machines on which it runs and offers a platform where we can deploy our
applications. It is in charge of distributing our containers correctly on
different nodes, so if one node shuts down or is disconnected from the network,
the application is still accessible while k8s repairs the node or provisions a
new one for us.
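As an illustration (all names here are placeholders), a Deployment manifest simply declares the desired state — three replicas of a container — and k8s takes care of spreading and rescheduling the pods across the nodes:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # k8s keeps 3 pods running, rescheduling
  selector:                    # them on healthy nodes if one goes down
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myuser/myapp:1.0.0
```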

I recommend at least reading [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/)
for this tutorial.

### Helm

[Helm](https://helm.sh/) is the package manager for Kubernetes. It allows us to
create, maintain and deploy applications in a Kubernetes cluster.

Basically if you want to install something in your Kubernetes cluster you can
Drone to deploy it.

Helm allows you to deploy your application to different namespaces, change the
tag of your image and basically override every parameter you can put in your
Kubernetes deployment files when running it. This means you can use the same
chart to deploy your application in your staging environment and in production
simply by overriding some values on the command line or in a values file.
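For instance (the chart name and value keys below are hypothetical), a staging deployment could use a small values file while production overrides the same keys on the command line:

```
# staging.yml — hypothetical overrides for the chart's default values
image:
  tag: "1.2.3-rc1"
replicaCount: 1
```

You would then run `helm install -f staging.yml mychart` for staging, and something like `helm install --set image.tag=1.2.3 --set replicaCount=3 mychart` for production.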

In this article we'll see how to use a preexisting chart. In the next one
we'll see how to create one from scratch.

## Disclaimers

In this tutorial, we'll use [Google Cloud Platform](https://cloud.google.com)
because it allows us to create Kubernetes clusters easily and has a private
container registry which we'll use later.

Also, we're not going to handle TLS on our Drone deployment. That's because the
technology I used to handle TLS certificates using Let's Encrypt,
[kube-lego](https://github.com/jetstack/kube-lego), is now deprecated in favor
of [cert-manager](https://github.com/jetstack/cert-manager/).

# Kubernetes Cluster

<img src="/assets/kube-drone-helm/kube.png" style="max-height: 100px;" />

_You can skip this step if you already own a k8s cluster with a Kubernetes version above
1.8._

In this step we'll need the `gcloud` and `kubectl` CLI. Check out how to [install
system.

As said earlier, this tutorial isn't about creating and maintaining a Kubernetes
cluster. As such we're going to use [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)
to create our playground cluster. There are two options to create it: either
in the web interface offered by GCP, or directly using the `gcloud` command.
At the time of writing, the default version of k8s offered by Google is `1.8.8`,
but as long as you're above `1.8` you can pick whichever version you want.
_Even though there's no reason not to pick the highest stable version..._

The `1.8` choice is because in this version [RBAC](https://en.wikipedia.org/wiki/Role-based_access_control)
is activated by default and is the default authentication system.

To reduce the cost of your cluster you can modify the machine type, but try to
keep at least 3 nodes; this will allow zero-downtime migrations to different
machine types and k8s version upgrades if you ever want to keep this cluster
active and running.
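As a sketch, creating such a cluster from the command line could look like this (the cluster name, zone, and machine type are placeholders; check `gcloud container clusters create --help` for the available flags):

```
$ gcloud container clusters create mycluster \
    --zone europe-west1-b \
    --machine-type custom-1-2048 \
    --num-nodes 3 \
    --cluster-version 1.9.7-gke.0
```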

To verify that your cluster is running, you can check the output of the following
command:

```
$ gcloud container clusters list
NAME       MASTER_VERSION  MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
mycluster  1.9.7-gke.0     custom-1-2048  1.9.7-gke.0   3          RUNNING
```

You should also get the `MASTER_IP`, `PROJECT`, and the `LOCATION`, which I removed
from this snippet. From now on, in the code snippets and command line examples,
`$LOCATION` will refer to your cluster's location, `$NAME` will refer to your
cluster's name, and `$PROJECT` will refer to your GCP project.

Once your cluster is running, you can then issue the following command to
retrieve the credentials to connect to your cluster:

```
$ gcloud container clusters get-credentials $NAME --zone $LOCATION --project $PROJECT
```

This will configure `kubectl` so you can run commands on your cluster. To check it out:

```
$ kubectl cluster-info
```

This will print out all the information you need to know about where your cluster
is located.

# Helm and Tiller

<img src="/assets/kube-drone-helm/helm.png" style="max-height: 100px;" />

First of all we'll need the `helm` command. [See this page for installation
instructions](https://github.com/kubernetes/helm/blob/master/docs/install.md).

Helm is actually composed of two parts. Helm itself is the client, and Tiller
is the server. Tiller needs to be installed in our k8s cluster so Helm can
work with it, but first we're going to need a **service account** for Tiller.
Tiller must be able to interact with our k8s cluster, so it needs to
be able to create deployments, configmaps, secrets, and so on. Welcome to
**RBAC**.

So let's create a file named `tiller-rbac-config.yaml`:
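A minimal version of this file, matching the description below (placing everything in the `kube-system` namespace, which is an assumption here), could be:

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```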
In this yaml file we're declaring a [ServiceAccount](https://kubernetes.io/docs/
named tiller, and then we're declaring a [ClusterRoleBinding](https://kubernetes.io/docs/admin/authorization/rbac/#rolebinding-and-clusterrolebinding)
which associates the tiller service account to the cluster-admin authorization.
Now we can deploy tiller using the service account we just created like this:
```
$ helm init --service-account tiller
```

Note that it's not necessarily good practice to deploy tiller this way. Using
RBAC, we can limit the actions Tiller can execute in our cluster and the
namespaces it can act on.
[See this documentation](https://github.com/kubernetes/helm/blob/master/docs/rbac.md)
to see how to use RBAC to restrict or modify the behavior of Tiller in your k8s
cluster.
later use this service account to interact with k8s from Drone.
If you have a domain name and wish to associate a subdomain to your Drone
instance, you will have to create an external IP address in your Google Cloud
console. Give it a name and remember that name, we'll use it right after when
configuring the Drone chart.

Associate this static IP with your domain (and keep in mind that DNS propagation
can take some time).

For the sake of this article, the external IP address name will be `drone-kube`
and the domain will be `drone.myhost.io`.

## Integration

First, we need to set up the GitHub integration for our Drone instance. Have a look
at [this documentation](http://docs.drone.io/install-for-github/) or if you're
using another version control system, check in the Drone documentation how to
create the proper integration. Currently, Drone supports the following VCS:
the environment variables in the next section need to match.

## Chart and configuration

After a quick Google search, we can see there's a [Chart for Drone](https://github.com/kubernetes/charts/tree/master/incubator/drone).
And it's in the `incubator` of Helm charts, so first we need to add the repo to
Helm.

```
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
```

We also need a `values.yml` file to configure the Drone chart; only its tail is
reproduced here:

```
server:
  ...
  DRONE_GITHUB_SECRET: "same thing with the secret"
```
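For reference, a hypothetical `values.yml` matching the description below might look like this (the exact keys depend on the chart version, and every value here is a placeholder):

```
ingress:
  enabled: true
  annotations:
    # name of the reserved static IP, so the GCE load balancer binds to it
    kubernetes.io/ingress.global-static-ip-name: "drone-kube"
    # no TLS certificate yet, so plain HTTP must be allowed
    kubernetes.io/ingress.allow-http: "true"
server:
  host: "http://drone.myhost.io"
  env:
    DRONE_PROVIDER: "github"
    DRONE_GITHUB: "true"
    DRONE_GITHUB_CLIENT: "your github oauth client id"
    DRONE_GITHUB_SECRET: "same thing with the secret"
```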

Alright! We have our static IP associated with our domain. We have to put
the name of this reserved IP in the Ingress' annotations so it knows which
IP it should bind to. We're going to use a GCE load balancer, and since we don't
have a TLS certificate, we're going to tell the Ingress that it's OK to accept
HTTP connections. (Please don't hit me, I promise we'll see how to enable TLS
later.)

We also declare all the variables used by Drone itself to communicate with our
VCS, in this case GitHub.

That's it. We're ready. Let's fire up Helm!
```
$ helm install --name mydrone -f values.yml incubator/drone
```

Once your DNS record has propagated, you should be able to access your
Drone instance at `drone.myhost.io`!

# Conclusion

In this article we saw how to deploy a Kubernetes cluster on GKE, how to create
a service account with the proper cluster role binding to deploy Tiller, how
to use Helm, and how to deploy a chart with the example of Drone.

In the next article we'll see how to write a quality pipeline for a Go project as
well as how to push to Google Container Registry.
