homodigit.us

Why, you ask? Where the name comes from is another story entirely, but in short, this is a tool to help you, my fellow coder, go smoothly down your own path. It didn't hurt to hear it grew out of Google's aptly named Borg project before being open-sourced. The short version: it went viral... go ahead, click me. If you did, you are probably as blown away as I was.

This journey started for me because I was attempting to install Neo4j on my local machine to give it a shot. I ran into numerous issues due to a Java Runtime dependency conflict. That led me to Docker. What a fabulous and terrible tool! It sure is nice when it just works. However, if you have ever tried to wrangle user/disk/file permissions while manually syncing content between the dev folder and the container, you will understand the terrible part... I was working on a development tool called docker-development to help me solve some of these challenges and realized Docker is a mess of pipes, patches, port forwarding, etc., and I was still only working on my local machine. Over numerous hours of googling for solutions, I ran across this Kubernetes thing over and over. I realized the goal of Kubernetes was container orchestration, precisely my aim.

The Cloud Native Computing Foundation develops and manages all of what we are going to use, which means it is as awesome as the combined budget of the sponsor lineup, which is effectively infinite. They want, nay expect, this stuff to just work.

A note on longevity and developer fatigue: if you read the sponsors list you will get that this is now the de facto standard for production architecture. It's not likely that whole group will just up and stop. It's not a single company, like Facebook deciding one day to stop supporting React, nor is it one company making the ultimate breaking change like Angular 1 -> 2. Both of those sentences will one day lend themselves to their own blog post, I suppose. For now let's move on to what we will accomplish.

Big Picture

  • reliable server platform to provide scalable, available hosting
  • CI/CD Platform to handle devops
  1. if one pushes changes to the master branch, those changes get tested and put into production
  2. if one pushes changes to a branch created with "git checkout -b new-feature" and adds a 'git tag -a feature-name -m "added a great new feature"', the platform builds that branch in a staging area and creates a routing rule to access it (see the sketch just after this list)
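
As a hedged illustration of that second flow (the branch and tag names here are made up), the developer-side commands would look something like:

git checkout -b new-feature

git commit -am "work on the great new feature"

git tag -a feature-name -m "added a great new feature"

git push origin new-feature --follow-tags

The --follow-tags flag pushes the annotated tag along with the branch, so the build system sees both in a single push.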

The End Goals:

  • focus on developer ergonomics and efficiency so coding time can be spent coding
  • describe big picture architectural patterns so you can understand why they were chosen should you choose differently
  • utilize modern cloud-based architecture
  • learn how to minimize cost
  • ensure high-availability
  • limit static IPs and cloud load balancer rules
  • serve all content over https with free self-renewing certificates
  • serve APIs from preemptible VMs
  • serve static assets from bucket storage
  • manage/automate production versioning and distribution
  • manage/automate staging on production for feature verification

containerd: the base wrapper and plumbing with which Docker-like containers can be run and accessed. Their words are "It manages the complete container life-cycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond."

Kubernetes: a set of objects that help create, run, stop, replicate, version, communicate with, handle authorization for, and otherwise interact with "workloads" running in containers. Their words are "Production-Grade Container Orchestration." yea... theirs is better

Helm: "The package manager for Kubernetes."

Before these tools, the big guys had huge teams of people that did this sort of thing. Now with the new tooling it is possible for small dev teams, and even individual developers to enjoy the same workflow automation and flexibility. Now that I am starting to get my feet under me as a developer, I am starting to get requests from friends to "help them with their website." Not only do I want to be able to host my own pet projects but I would also like to be able to provide high availability for my friends and their projects as well. Insert homodigit.us. Nice.

In this repository you will find everything you need, as well as many links to the information resources that I found useful while building this. They are a base resource so you can understand what is going on when you successfully deploy and secure your projects, and those of your friends/clients. I use this project personally, and it is as much a resource for me to not have to look up commands as a tool for you to follow (after all, I'm following it myself). Good luck, and feel free to reach out to me if you have any questions or run into any issues you can't seem to solve with a Stack Overflow search.


Kubernetes orchestrated cluster. DevOps platform. Static assets over https. Auto-scaling production server hosting (for when you go viral).

To the technical part - The Big Picture

  1. Deploy Kubernetes cluster
  2. Install Helm/Tiller
  3. Set up ssl for Helm/Tiller to communicate
  4. Configure/Deploy nginx
  5. Route traffic from static IP through ingress object to nginx
  6. Deploy certbot to provide and maintain ssl certificate for nginx
  7. Configure/Deploy Jenkins

Step 1) Deploy a Kubernetes cluster

The discussion about how to size and set up your cluster goes beyond a simple do this or do that, because needs and costs vary widely. The platform-specific stuff from this step will be nuanced differently on different hosts, but from step 2 on it's all the same no matter where the cluster lives. I use Google now, but I've chosen to keep as many of the pieces platform agnostic as possible so I won't be locked in should pricing change suddenly.

The first question you should ask yourself is how available your cluster needs to be. Can you get away with one of your applications going down for 15-20 seconds if it crashes, like for a sandbox website or a non-critical environment? Or are you handling real-time transactions that happen on a millisecond timescale? We will shoot for something in the middle, where there will be virtually no downtime but chasing 99.999999% uptime isn't necessary.

Next, does the site's target client live in one region, like the US or Asia, or are they global? We will assume one region. To be fair, if you have a global audience and need high availability, this doc is a great executive overview to manage someone doing that, like as part of their full-time job. Wink. Coincidentally, I'm for hire as this is one of my projects for coding boot camp. What can I say, I'm an overachiever...

Another choice to make is how many masters we want. That is a silly question when I haven't introduced them yet. Watch me and read me. OK, you are back. The chances of a data center going down and taking your master with it are small but real, so placing redundant masters in different physical facilities can make sense depending on your use case. In Google's case the master does not count toward your number of nodes, it's sort of baked in, but if the data center goes offline so does your app. It's NOT a common thing but it's a point of note.

So to summarize, from least to most available:

  • 1 node, 1 master, 1 zone, 1 region
  • many nodes, 1 master, 1 zone, 1 region
  • many nodes, many masters, 1 zone, 1 region
  • many nodes, 1 master, many zones, 1 region
  • many nodes, many masters, many zones, 1 region (regional cluster and the next topic up)
  • many nodes, many masters, many zones, many regions

The default size for a cluster on GKE is three nodes, with the master managed by Google outside your node count. If you choose a regional cluster, the masters are replicated across the region's zones. There is a sticky wicket here: on Google, creating a "1" node regional cluster means one node in each zone of the region, plus the redundant masters. So if you try to make a "3" node regional cluster, what you will get is the node pool spread across three zones of the region with three nodes in each, nine nodes in total.

We can also autoscale our node (VM) count with the --enable-autoscaling flag. Here is the doc link. It covers the --max-nodes= and --min-nodes= flags as well as a note about how autoscaling affects node pools.
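
As a hedged sketch (same placeholders as the create command below, and as I understand it the min/max apply per zone on a regional cluster), autoscaling looks roughly like this at creation time:

gcloud container clusters create **CLUSTER_NAME** --region us-central1 --num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=5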

We are using --preemptible machines, which means Google can pull them at any point, but we get huge cost savings of about 70%!! We are using Kubernetes though, so that doesn't matter. Unless you are serving persistent data or need to ensure a cache is always warm, swapping one identical, stateless process for another identical, stateless process doesn't matter by definition.

const http = require('http'), podName = process.env.HOSTNAME; // the pod name doubles as the container hostname
http.createServer((req, res) => res.end('<h1>hello, this is pod ' + podName + ' reporting, yo...</h1>')).listen(8080);

There are a few levels of "persistent" data we should discuss. The fastest will be RAM. While not really persistent, it affects the next type: should memory run out, it will overflow into a swap file.

The second fastest will be persistent disks. They can be sized with the --disk-size=DISK_SIZE flag. This is where disk access actually happens when the "file structure" is called from a workload, where memory overruns land as swap files, and thus where your warmed cache will live, like a database's. If you want your db "to be faster" you can reduce latency by caching some of the frequent query responses and returning those instead of running a fresh query. At some point space in memory will run out, and if the cache is set to save more than memory will allow, it goes to the disk as a swap file. To solve that problem you can either change the --machine-type flag to one with more memory or you can increase the disk speed so the cache will be quicker using the --disk-type=pd-ssd flag.
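
To make those knobs concrete, here is a hedged example of how they appear at creation time (the machine type and sizes are placeholders I picked, and --disk-size is in GB):

gcloud container clusters create **CLUSTER_NAME** --region us-central1 --machine-type=n1-standard-2 --disk-type=pd-ssd --disk-size=50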

Persistent storage, like the database files that aren't in cache, are best handled by another method. We will get there. Serving stateful data (REDIS, SQL, etc.) no matter where it lives is a more advanced topic worthy of its own discussion. For now use mLab and follow me to read that post when it happens.

This is the full google discussion about disks.

Each VM we create will be billed at the rates listed here.

Pricing for Google Compute Engine Instances.

We need to set up our project for the gcloud sdk. We are creating a regional cluster for better availability because the nodes will be spread across different zones (i.e., data centers), so we need to set the compute/region; if you are going with a zonal cluster you will want to set your compute/zone instead. The minimum number of nodes for a regional cluster is 3, so if you want fewer than that, go with a zone-based cluster.

You will need an account with Google because these are all billable services. Just for you (actually everyone, you aren't that special), they are giving away $300 of free credit to try out the platform and see if you like it. Go here for it.

First things first, install gcloud, and while you are there poke around the commands a bit... If you are too busy for that you can

read the settings for the next command 'gcloud config set'

gcloud config set project **PROJECT_NAME**

gcloud config set compute/region **REGION**

read the settings for the next command 'gcloud container clusters create'

gcloud container clusters create **CLUSTER_NAME** --region us-central1 --machine-type=n1-standard-1 --num-nodes=1 --scopes=gke-default,storage-full --enable-autorepair --enable-autoupgrade --preemptible

(TODO: add a section for gcloud commands to check the status of the cluster)
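
In the meantime, these two should cover the basics, though the output format changes between sdk versions:

gcloud container clusters list

gcloud container clusters describe **CLUSTER_NAME** --region **REGION**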

Now that we have a cluster up and running, let's set up our machine to interact with it. The gcloud command is a way to provision and interact with the Google Cloud infrastructure, i.e., stuff that is billable. A virtual machine is billable. Storage is billable.

The idea of having many instances of an app running is different. That is a workload. Workloads run on machines (or virtual machines in this case) and machines can run many different workloads. My computer can run a development instance of MongoDB, a few Node.js servers, and a webpack-dev-server, right? "Hardware" is handled by gcloud, whereas workloads are handled by kubectl. kubectl is the command line tool that allows one to interact with a cluster and its workloads, etc.

This is a nuance and an important one, so you know where to look for the right command. You can scale the number of nodes (virtual machines) running in your cluster with gcloud. Each of those costs money, right? You can also interact with bucket storage and other provider-level objects through that command. Whereas one scales the number of instances of a database (replication) from kubectl. That is scaling a workload and a topic for below.
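
A hedged side-by-side to make the split concrete (the cluster, pool, and deployment names are placeholders; older sdk versions call the last flag on the first command --size):

gcloud container clusters resize **CLUSTER_NAME** --region **REGION** --node-pool default-pool --num-nodes=5

kubectl scale deployment my-api --replicas=5

The first changes how many billable VMs exist; the second changes how many copies of a workload Kubernetes keeps running on whatever VMs are there.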

For now we will need to transfer our cluster information from gcloud to kubectl. They are designed to work together and gcloud will help set up kubectl.

read the settings for the next command 'gcloud container clusters get-credentials'

gcloud container clusters get-credentials **CLUSTER_NAME** --region **REGION**

read the settings for the next command 'kubectl config', which will allow us to verify that the prior gcloud command worked as planned.

kubectl config current-context will show you what cluster you will communicate with when entering commands into kubectl.

kubectl config view will show you the complete config. You can find the full config file on disk with cat ~/.kube/config


2) Facilitate secure communication between Helm and Tiller

Videos to watch

Info Links

These are tasks we need to complete

  1. create a private key for our CA (Certificate Authority)
  2. create a conf file for our CA
  3. use the key and conf file to create a self signed certificate for the CA
  4. create a conf file for our intermediate CA
  5. use that conf file to create a CSR and key for the intermediate CA
  6. use those plus the CA cert, key and password to create the intermediate CA cert
  7. concat the certs in proper order to create the chain
  8. create CSR, cert and key set using intermediate CA key and password for both Tiller and Helm

Or you can conveniently use the script I wrote to make the mundane easy. It's found in /bin/ssl/makeInitSet.sh. You will need to enter some configuration details into rootCA.conf and X509.conf in the directory with the script.

Here is the link to the openssl website for writing an X509 configuration file and for the Root CA configuration file, and below are the commands we called in that script. Each is a link to the docs for info on the flags used.

read the settings for the next command openssl genrsa - RSA key generation utility

openssl genrsa -out new.key 4096

read the settings for the next command openssl req - certificate generating utility

cat rootCA.conf | openssl req -key ca.key -new -x509 -days 7300 -sha256 -out ca.crt -extensions v3_ca

openssl req -new -sha256 -nodes -newkey rsa:4096 -keyout new.key -out new.csr -config <( cat X509.conf )

read the settings for the next command openssl x509 - certificate signing utility

openssl x509 -req -days 500 -sha256 -in new.csr -out new.crt -CA signatory.crt -CAkey signatory.key -CAcreateserial -extfile X509.conf -passin "pass:password"
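
One task from the list above that has no command here is step 7, building the chain. Assuming file names like these, it is just a concatenation with the intermediate first and the root CA last:

cat intermediate.crt ca.crt > ca-chain.crt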


3) Install Helm/Tiller

Videos to watch

The docs on the Helm website are extensive and well written.

Installing Helm

The docs are for more than what we will need but they are good reference for the big picture. You can find all of the commands we will actually use below. Get Helm installed. After you are done read through this.

Using SSL Between Helm and Tiller. You will notice that most of the instructions are for creating the ssl certificates, which we did above. There are some commands we will need to add as they are specific to our use case. They make brief mention of setting the --tiller-namespace flag and --service-account flag. The service account, with a role-binding, is precisely what we will need to add first. Click on those links to see what kubernetes objects they are talking about, but you won't want to digest it all at the moment.

A thorough discussion of Role-Based Access Control is a full course on its own. So is the theory of namespace management and why/when namespaces are applicable. It also happens that I haven't managed kubernetes at scale (i.e., Google scale), and many of those features are for keeping Jr. Devs on one team from killing the whole cluster of a large organization.

read the settings for the next command kubectl apply

kubectl apply -f kube/tiller/tiller.service-account.yaml

kubectl apply -f kube/tiller/tiller.role-binding.yaml
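
I won't promise the yaml in kube/tiller/ matches this byte for byte, but the canonical Tiller service account and role-binding look something like the following, inlined here as a heredoc just for illustration (kube-system and the blunt cluster-admin role are the usual defaults; tighten to taste):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF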

That is one of the few times that you will want to apply cluster state with kubectl directly. From here on it will be easier to use Helm. Now you may be asking, what the heck did I mean by apply state? Kubernetes is declarative, meaning you say "I want 3 copies of my node application to share the incoming web traffic evenly" and the system will go and make you three copies and keep them running with health checks. If one stops responding it will get rid of the pod and spin up a new one. Sounds simple. It's not; watch this for a great look under the hood of how kubectl apply works.
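
A hedged, minimal version of that "3 copies" declaration, with a stand-in name and image that are not from this repo:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
        - name: hello-node
          image: nginx
EOF

Delete one of the resulting pods with kubectl delete pod and watch kubectl get pods; a replacement shows up because the declared state still says three replicas.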

read the settings for the next command helm init

helm init --service-account tiller --upgrade

helm init --service-account tiller --tiller-tls --tiller-tls-verify --tiller-tls-cert ssl/tiller.pem --tiller-tls-key ssl/tiller.key --tls-ca-cert ssl/ca.crt
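
To check that the TLS-enabled Tiller actually answers, Helm 2 client commands accept a --tls flag along with cert paths; something like this (the ssl/helm.pem and ssl/helm.key names assume you generated a client pair the same way as the tiller pair) should only succeed when the certificates line up:

helm version --tls --tls-ca-cert ssl/ca.crt --tls-cert ssl/helm.pem --tls-key ssl/helm.key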


4) Network Topology, Ingress and NGINX

Ingress is a simple idea in principle and can be solved very simply... and expensively. Just throw a Cloud Load Balancer in front of your cluster and have it do all the routing to the individual services for you. And then realize it's as expensive as all of your VMs put together.

There are a HUGE number of solutions online that one can follow. All of them are different. Many use a different "plugin this" or a different "addon that", and a bunch look to be hand coded! It's been exceedingly difficult to find an "industry consensus" I can point us all to.

Ingress, as it turns out, is also quite a contentious issue online. And I quote a stack overflow troll, "Everyone's architecture is a snowflake and needs its own solution," followed by grumbles. It is also a really broad and rather complex set of problems that seems easy till you try. Reading this gave great insight into some very sticky wickets. Here are some more resources to reference.

Videos

Blogs/Info

Big picture, we are going to need a cloud load balancer with, at a minimum, one forwarding rule. I have tried searching for nearly two days now and the best thing I can find is a couple of GCE and GKE examples similar to this. The technique relies on exposing a service via a hack: using the internal IP of the node and calling it the external IP of the service. It also requires setting up firewall rules and updating the DNS record every time a node is restarted. This is very cheesy and is most certainly for test situations. The comments even note that "while this does work it's not a production solution." If you want to get a cluster online that only you use, it will work. Sheesh...

We will need to pay for at least the bare minimum load balancing costs, but that minimum charge covers 5 forwarding rules, one of which will route all traffic to our nginx ingress controllers. We will deploy 3 pods and set a pod anti-affinity property on our deployment to motivate them to land one in each zone, as opposed to potentially clustering on one node. The cloud load balancer will use its routing mechanisms to spread traffic amongst our three nginx instances, and they will proxy all requests to our resources. They will also handle tls termination for the requests.
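
The anti-affinity bit is the only non-obvious part, so here is a hedged sketch of what that stanza looks like on a deployment. The name, labels, and stand-in nginx image are mine, not the actual ingress controller chart, and newer clusters spell the topology key topology.kubernetes.io/zone:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-sketch
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress-sketch
  template:
    metadata:
      labels:
        app: nginx-ingress-sketch
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: nginx-ingress-sketch
                topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
        - name: nginx
          image: nginx
EOF

Using preferred rather than required scheduling keeps the scheduler from refusing to place a replica if a zone is temporarily short on capacity.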
