Kubernetes Tutorials

Notes from reading https://kubernetes.io/docs/tutorials/

Kubernetes Basics

This is an interactive tutorial, using Katacoda to run a virtual terminal that runs Minikube.

Why Kubernetes

Containers make it easy for developers to deploy frequently with minimal downtime.

Kubernetes helps you run your containers anywhere and makes sure they have sufficient resources.

We're going to:

  • Create a cluster
  • Deploy an app
  • Explore it?
  • Expose it publicly
  • Scale it up
  • Deploy a new version

Using Minikube

Kubernetes clusters many computers together to act as a single pool of compute resources. To do this, applications must be containerized.

There are two cluster resource types:

  • Master coordinates the cluster
  • Nodes run applications

Masters do all of the scheduling work.

Nodes are workers. They can be VMs or physical machines. Nodes run Kubelet, which manages the node and talks to the master.

When you deploy an application, the master schedules the container to run on the nodes. The nodes use the API to talk to the master. We'll test this out with Minikube.

Bootcamp module 1

minikube lets us create a k8s cluster on our local desktop.

kubectl lets us interact with the cluster.

kubectl cluster-info shows we have a master and a dashboard. The dashboard is a UI for showing your applications.

kubectl get nodes shows we have only one node. With Minikube, the single VM acts as both the master and a node.
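
For reference, a minimal sketch of the setup, assuming minikube and kubectl are already installed:

minikube start        # boots a single-node cluster in a local VM
kubectl cluster-info  # prints the master and dashboard URLs
kubectl get nodes     # one node, acting as both master and worker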

Deployments

A deployment is responsible for creating and updating instances of your application container. Once deployed, the Deployment Controller monitors these instances and replaces them when they fail.

We can manage deployments with kubectl. This module will demonstrate common kubectl commands. A deployment specifies a container image and a number of desired replicas. We're going to test with a node.js app packaged in a docker container.

Deploying an app

kubectl version again to make sure things are working. kubectl version --help provides useful help about checking the version!

kubectl get nodes again shows one node.

Here's a bit better:

kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080

I want to go back and find the Dockerfile and source code that go with this.

This finds a suitable node, schedules the application to run on that node, and maintains one instance.

kubectl get deployments shows what we're running.

By default, deployed applications are visible only inside the cluster. Exposing them externally is covered later. For now we will use kubectl proxy to create a route from our terminal into the cluster-private network.

So kubectl proxy basically lets you talk to the API on the master, which in turn proxies requests to the pods.

kubectl get pods spits out my list of pod names.

With the proxy running in the background on 8001, this lets me hit my pod.

curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/
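
For that curl to work, the proxy has to be running and $POD_NAME has to be set. Roughly the full sequence from the tutorial (the go-template is just one way to grab the pod name):

kubectl proxy &   # serves the cluster API on localhost:8001
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/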

So, I have an app inside the pod and I think I told it to run on 8080 maybe? Can there be two containers in the pod which both expose a port, and if so, how does the above command know which port to use?

Pods

Our deployment created a pod. A pod represents a group of containers and shared resources.

Resources shared by all containers in a pod:

  • Shared storage (volumes)
  • Shared networking (single IP address)
  • Container metadata (exposed ports and other configuration)

Pods contain tightly coupled sets of containers.

Let's translate units from VMware world:

  • vCenter -> Master
  • ESX host -> Node
  • Virtual machine -> Pod
  • Service -> Container

A pod contains one or more containers, volumes, and a single shared IP address. If two containers are not tightly coupled, don't put them in the same pod.

Pods run on nodes. Nodes are your actual compute resources, either physical or virtual. Each node runs at least:

  • kubelet, which communicates between the master and the nodes, and
  • a container runtime like docker or rkt, which pulls the image from a registry, unpacks and runs it

Common commands:

  • kubectl get - list resources
  • kubectl describe - show info about a resource
  • kubectl logs - print logs from a container in a pod
  • kubectl exec - run a command in a container in a pod

Tutorial - Exploring your app

kubectl describe pods spits out all kinds of pod metadata. You can use describe against most objects. It produces human readable output. Don't use it for scripting.

exec is pretty cool. kubectl exec -it $POD_NAME bash gets a shell.
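
Rounding out the list above, assuming $POD_NAME is still set from earlier:

kubectl logs $POD_NAME        # stdout of the container (only one container, so no need to name it)
kubectl exec $POD_NAME env    # run a single command inside the container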

https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/

Services

Pods have an IP that is routable only within the K8s cluster, not to the outside world. A service defines a logical group of pods, exposes them to external traffic (from outside the cluster), and enables load balancing and service discovery. It's an entry point for something else to talk to them.

You can either use LoadBalancer mode (expose a public IP) or NodePort (expose on the same port on every node). I have some questions about how both work.

So a service load balances traffic across a set of pods. If your service maps to a set of pods from a deployment, that's useful.

Services are also responsible for service discovery? OK. Services are matched to pods using label selectors.

Labels are key value pairs. The idea here is that hierarchical organization of pods is flawed, as people will come up with different hierarchies. Labels allow you to select for any set of labels and group in ad-hoc ways.

Tutorial - Exposing your app

We got a pod. We got the default service. We expose kubernetes-bootcamp with NodePort and now we have two services.

So I guess we just took our single instance and exposed it outside of the cluster-private network on the host's IP.
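
Roughly the commands involved; the NODE_PORT extraction is one way to find the assigned port, and with Minikube, minikube ip gives the node's address:

kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services   # now two: kubernetes (default) and kubernetes-bootcamp
export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
curl $(minikube ip):$NODE_PORT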

Labels

So our deployment added a default label, run=kubernetes-bootcamp, and we can query that to see the set of pods associated with the deployment.

Yup, we can add a label and verify that it shows up.

We can query for the new label.

We can delete the service and confirm that it's gone, and that we can still enter the pod and see it running locally.
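
Sketch of that sequence, assuming $POD_NAME is set as before:

kubectl get pods -l run=kubernetes-bootcamp        # select pods by the default label
kubectl label pod $POD_NAME app=v1                 # add a new label
kubectl get pods -l app=v1                         # query for it
kubectl delete service -l run=kubernetes-bootcamp  # delete the service by label
kubectl exec -ti $POD_NAME curl localhost:8080     # app still responds inside the pod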

Scaling up

We can scale up by increasing the size of the replica set for our deployment.

K8s can autoscale based on CPU consumption; we'll cover that later.

If you have multiple instances, a service can load balance them automatically and monitor availability.

Demo

AVAILABLE defaults to zero. That isn't about exposure: AVAILABLE counts replicas that are actually ready to serve traffic, so it reads 0 until the container finishes starting.

So we scaled to 4 and, once the new pods were running, there were 4 available.

That also explains why AVAILABLE flipped between 0 and 1 earlier: the instance was still spinning up.

Yup, we can scale deployments up and down. Yup, I can see that service requests are now load balanced.

Yup, we can scale down and confirm that it works.
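
For reference, a sketch of the scaling commands:

kubectl scale deployments/kubernetes-bootcamp --replicas=4
kubectl get deployments    # DESIRED, CURRENT and AVAILABLE should all reach 4
kubectl get pods -o wide   # four pods, with their IPs
kubectl scale deployments/kubernetes-bootcamp --replicas=2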

Updating your app

Alright, we're going to do a rolling update. We'd better have a multi-replica deployment and a service for this.

Cool. kubectl set image to update the version; kubectl rollout undo to roll back to the last working version.
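
A sketch of the update flow; v2 here is the tutorial's next image tag:

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
kubectl rollout status deployments/kubernetes-bootcamp   # wait for the rollout to finish
kubectl rollout undo deployments/kubernetes-bootcamp     # back to the previous revision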

All done with Kubernetes Basics! On to another tutorial.
