author:
name: Andy Stevens
email: docs@linode.com
description: 'An introduction to Kubernetes concepts and components.'
keywords: ['kubernetes','k8s','beginner','architecture']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
published: 2019-03-21

*Kubernetes*, often referred to as *k8s*, is an open source container orchestration system that helps deploy and manage containerized applications. Developed by Google starting in 2014 and written in the Go language, Kubernetes is quickly becoming the standard way to architect horizontally-scalable applications. This guide will explain the major parts and concepts of Kubernetes.


## Containers

Kubernetes is a container orchestration tool and, therefore, needs a container runtime installed to work. In practice, the default container runtime for Kubernetes is [Docker](https://www.docker.com/), though other runtimes like [rkt](https://coreos.com/rkt/) and [LXD](https://linuxcontainers.org/lxd/introduction/) will also work. With the advent of the [Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md), which aims to standardize the way Kubernetes interacts with containers, other options like [containerd](https://containerd.io/), [cri-o](https://cri-o.io/), and [Frakti](https://github.com/kubernetes/frakti) have also become available. This guide assumes you have a working knowledge of containers, and the examples will all use Docker as the container runtime.
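
Whichever runtime a cluster uses, Kubernetes itself refers to containers by their images. As a rough illustration, a minimal manifest like the following sketch (the resource name and image tag are placeholder values) asks Kubernetes to run a single container from the public `nginx` image:

```yaml
# Minimal sketch: run one container from a public image.
# The metadata name and image tag are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.15        # pulled and run by the container runtime (Docker, by default)
    ports:
    - containerPort: 80
```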

For more information on the Kubernetes networking model, and ways to implement it, consult the [cluster networking documentation](https://kubernetes.io/docs/concepts/cluster-administration/networking/).

## Advanced Topics

There are a number of advanced topics in Kubernetes. Below are a few that you might find useful as you progress; a brief example of one of them follows the list:

- [StatefulSets](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/) can be used when creating stateful applications.
- [DaemonSets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) can be used to ensure each Node is running a certain Pod. This is useful for log collection, monitoring, and cluster storage.
- [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) can automatically scale your deployments based on CPU usage.
- [CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) can schedule [Jobs](#jobs) to run at certain times.
- [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) are helpful when working with larger groups where there is a concern that some teams might take up too many resources.
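
As a small taste of how these are used, the manifest below is a minimal CronJob sketch (the name, schedule, and image are placeholder assumptions) that runs a short-lived container every five minutes. On the Kubernetes versions current at the time of this guide, CronJobs live under the `batch/v1beta1` API group.

```yaml
# Minimal CronJob sketch: run a short-lived Job on a schedule.
# The name, schedule, and image are placeholder assumptions.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"          # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "Hello from a CronJob"]
          restartPolicy: OnFailure
```

On newer clusters the CronJob resource has since graduated to the `batch/v1` API group.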

## Next Steps

Now that you are familiar with Kubernetes concepts and components, you can follow the [Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode](/docs/applications/containers/getting-started-with-kubernetes/) guide. This guide provides a hands-on activity to continue learning about Kubernetes. If you would like to deploy a Kubernetes cluster on Linode for production use, we recommend using one of the following methods, instead:

- [How to Deploy Kubernetes on Linode with the k8s-alpha CLI](/docs/applications/containers/how-to-deploy-kubernetes-on-linode-with-k8s-alpha-cli/)
- [How to Deploy Kubernetes on Linode with Rancher 2.2](/docs/applications/containers/how-to-deploy-kubernetes-on-linode-with-rancher-2-2/)
author:
name: Linode Community
email: docs@linode.com
description: 'This guide will show you how to package a Hugo static site in a Docker container image, host the image on Docker Hub, and deploy the container image on a Kubernetes cluster running on Linode.'
keywords: ['kubernetes','docker','docker hub','hugo', 'static site']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
published: 2019-05-07

echo -e "public/\n.git/\n.gitmodules/\n.gitignore" >> .dockerignore

1. Follow steps 2-4 in the [Version Control the Site with Git](/docs/applications/containers/deploy-container-image-to-kubernetes/#version-control-the-site-with-git) section to add any new files created in this section to your local git repository.

### Build the Docker Image

The Hugo site's service manifest file will use the NodePort method to get external traffic to the Hugo site service. NodePort opens a specific port on all the Nodes, and any traffic sent to this port is forwarded to the service. Kubernetes will choose the port to open on the Nodes if you do not provide one in your service manifest file. It is recommended to let Kubernetes handle the assignment; Kubernetes will choose a port in the default range, `30000-32767`.
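
As a rough sketch of what such a manifest might contain, a NodePort service could be declared as follows. The service name here is a placeholder; the `app: hugo-site` selector label and port 80 follow the values used elsewhere in this guide.

```yaml
# Sketch of a NodePort service for the Hugo site.
# The service name is a placeholder; the selector label and ports
# follow the values used elsewhere in this guide.
apiVersion: v1
kind: Service
metadata:
  name: hugo-site-service
spec:
  type: NodePort
  selector:
    app: hugo-site            # targets the pods created by the Hugo site deployment
  ports:
  - port: 80                  # port the service exposes inside the cluster
    targetPort: 80            # containerPort on the Hugo site pods
    # nodePort is omitted so that Kubernetes assigns one from 30000-32767
```

Changing `type: NodePort` to `type: LoadBalancer` is what allows the Linode CCM mentioned in the note below to provision a NodeBalancer for the service.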

{{< note >}}
The k8s-alpha CLI creates clusters that are pre-configured with useful Linode service integrations, like the Linode Cloud Controller Manager (CCM), which provides access to Linode's load balancer service, [NodeBalancers](https://www.linode.com/nodebalancers). In order to use Linode's NodeBalancers, you can use the [LoadBalancer service type](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) instead of NodePort in your Hugo site's service manifest file. For more details, see the [Kubernetes Cloud Controller Manager for Linode](https://github.com/linode/linode-cloud-controller-manager) GitHub repository.
{{</ note >}}

1. Create the manifest file for your service with the following content.
- The deployment's object `spec` states that the deployment should have 3 replica pods. This means at any given time the cluster will have 3 pods that run the Hugo site service.
- The `template` field provides all the information needed to create actual pods.
- The label `app: hugo-site` helps the deployment know which service pods to target.
- The `container` field states that any containers connected to this deployment should use the Hugo site image `mydockerhubusername/hugo-site:v1` that was created in the [Build the Docker Image](/docs/applications/containers/deploy-container-image-to-kubernetes/#build-the-docker-image) section of this guide.
- `imagePullPolicy: Always` means that the container image will be pulled every time the pod is started.
- `containerPort: 80` states the port number to expose on the pod's IP address. The system does not rely on this field to expose the container port; instead, it provides information about the network connections a container uses. A consolidated sketch of a manifest using these fields follows this list.
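
Taken together, the fields described above might be assembled into a deployment manifest along the lines of the following sketch. The exact file used in this guide may differ; `mydockerhubusername` is the placeholder Docker Hub username used earlier.

```yaml
# Sketch of a deployment matching the fields discussed above.
# mydockerhubusername is the placeholder Docker Hub username from this guide.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hugo-site
  labels:
    app: hugo-site
spec:
  replicas: 3                      # three replica pods at any given time
  selector:
    matchLabels:
      app: hugo-site
  template:
    metadata:
      labels:
        app: hugo-site             # lets the deployment and service target these pods
    spec:
      containers:
      - name: hugo-site
        image: mydockerhubusername/hugo-site:v1
        imagePullPolicy: Always    # pull the image every time a pod starts
        ports:
        - containerPort: 80        # documents the port the container listens on
```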

To avoid being further billed for your Kubernetes cluster, tear down your cluster:

linode-cli k8s-alpha delete example-cluster

## Next Steps

Now that you are familiar with basic Kubernetes concepts, like configuring pods, grouping resources, and deploying services, you can deploy a Kubernetes cluster on Linode for production use by using the steps in the following guides:

- [How to Deploy Kubernetes on Linode with the k8s-alpha CLI](/docs/applications/containers/how-to-deploy-kubernetes-on-linode-with-k8s-alpha-cli/)
- [How to Deploy Kubernetes on Linode with Rancher 2.2](/docs/applications/containers/how-to-deploy-kubernetes-on-linode-with-rancher-2-2/)

1. Deploy a cluster using Terraform and the [Linode Kubernetes Terraform installer](https://registry.terraform.io/modules/linode/k8s/linode/0.1.1).

1. Use kubeadm to manually deploy a Kubernetes cluster on Linode. You can follow the [Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode](/docs/applications/containers/getting-started-with-kubernetes/) guide to do this.

{{< note >}}
- If using the k8s-alpha CLI or the Linode Kubernetes Terraform installer methods to deploy a cluster, you can skip the [Installing the CSI Driver](#installing-the-csi-driver) section of this guide, since it will be automatically installed when you deploy a cluster.
Complete the steps outlined in this section on all three Linodes.

After installing the Kubernetes-related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the [Beginner's Guide to Kubernetes](/docs/applications/containers/beginners-guide-to-kubernetes/).

In addition to the baseline control plane components, there are several *addons* that can be installed on the master node to access additional cluster features. You will need to install a networking and network policy provider add-on that implements [Kubernetes' network model](https://kubernetes.io/docs/concepts/cluster-administration/networking/) on the cluster's pod network.
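
kubeadm's behavior can also be driven by a configuration file instead of command-line flags. The following is a minimal sketch only; the Kubernetes version and pod subnet shown are assumed values (the CIDR is one commonly paired with Flannel) and must match your cluster and chosen network add-on.

```yaml
# Sketch of a kubeadm configuration file; values are assumptions.
# Setting a pod subnet here serves the same purpose as the
# --pod-network-cidr flag and must match the network add-on you install.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  podSubnet: "10.244.0.0/16"       # CIDR commonly used with Flannel
```

A file like this would be passed to `kubeadm init` through its `--config` flag.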


## Next Steps

Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started, follow along with the [Deploy a Static Site on Linode using Kubernetes](/docs/applications/containers/deploy-container-image-to-kubernetes/) guide.

## Tear Down Your Cluster
