diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index c7f020a5fc1a2..9366d76a7d7a6 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -385,29 +385,6 @@ The output is: Beware. {{< /warning >}} -### Katacoda Embedded Live Environment - -This button lets users run Minikube in their browser using the Katacoda Terminal. -It lowers the barrier of entry by allowing users to use Minikube with one click instead of going through the complete -Minikube and Kubectl installation process locally. - -The Embedded Live Environment is configured to run `minikube start` and lets users complete tutorials in the same window -as the documentation. - -{{< caution >}} -The session is limited to 15 minutes. -{{< /caution >}} - -For example: - -``` -{{}} -``` - -The output is: - -{{< kat-button >}} - ## Common Shortcode Issues ### Ordered Lists diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index 13ec7995341fb..9a471e815a9af 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -98,9 +98,10 @@ Pod and restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. 1. Use the `kubectl create` command to create a Deployment that manages a Pod. The -Pod runs a Container based on the provided Docker image. + Pod runs a Container based on the provided Docker image. ```shell + # Run a test container image that includes a webserver kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080 ``` @@ -162,7 +163,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). The `--type=LoadBalancer` flag indicates that you want to expose your Service outside of the cluster. - The application code inside the image `registry.k8s.io/echoserver` only listens on TCP port 8080. If you used + The application code inside the test image only listens on TCP port 8080. If you used `kubectl expose` to expose a different port, clients could not connect to that other port. 2. View the Service you created: @@ -236,7 +237,7 @@ The minikube tool includes a set of built-in {{< glossary_tooltip text="addons" The 'metrics-server' addon is enabled ``` -3. View the Pod and Service you created: +3. View the Pod and Service you created by installing that addon: ```shell kubectl get pod,svc -n kube-system @@ -286,7 +287,7 @@ kubectl delete service hello-node kubectl delete deployment hello-node ``` -Stop the minikube cluster: +Stop the Minikube cluster ```shell minikube stop diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index 7a0a679b53081..122ed7346ec08 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -1,6 +1,10 @@ --- title: Using Minikube to Create a Cluster weight: 10 +description: |- + Learn what a Kubernetes cluster is. + Learn what Minikube is. + Start a Kubernetes cluster. --- @@ -20,7 +24,7 @@

Objectives

@@ -84,18 +88,16 @@

Cluster Diagram

When you deploy applications on Kubernetes, you tell the control plane to start the application containers. The control plane schedules the containers to run on the cluster's nodes. The nodes communicate with the control plane using the Kubernetes API, which the control plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.

-

A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.

+

A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
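For reference, the basic lifecycle operations mentioned above look like this in a local shell (assuming Minikube is already installed):

```shell
# create or resume a local single-node cluster
minikube start

# check the state of the cluster
minikube status

# stop the cluster without deleting it
minikube stop

# delete the local cluster entirely
minikube delete
```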

+ +

Now that you know more about what Kubernetes is, visit Hello Minikube to try this out on your computer.

-

Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!

-
-
-
- Start Interactive Tutorial
+

Once you've done that, move on to Using kubectl to create a Deployment.

diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
index 9bd834a55b292..f2e0b47180891 100644
--- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -1,6 +1,9 @@
 ---
 title: Using kubectl to Create a Deployment
 weight: 10
+description: |-
+  Learn about application Deployments.
+  Deploy your first app on Kubernetes with kubectl.
 ---
@@ -14,26 +17,25 @@
-

Objectives

-
  • Learn about application Deployments.
  • Deploy your first app on Kubernetes with kubectl.
+ +

Kubernetes Deployments

- Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it.
- To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes
+ Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it.
+ To do so, you create a Kubernetes Deployment. The Deployment instructs Kubernetes
how to create and update instances of your application. Once you've created a Deployment, the Kubernetes control plane schedules the application instances included in that Deployment to run on individual Nodes in the cluster.

-

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.

+

Once the application instances are created, a Kubernetes Deployment controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.

In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure. By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.

@@ -72,7 +74,7 @@

Deploying your first app on Kubernetes

-

You can create and manage a Deployment by using the Kubernetes command line interface, Kubectl. Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most common Kubectl commands needed to create Deployments that run your applications on a Kubernetes cluster.

+

You can create and manage a Deployment by using the Kubernetes command line interface, kubectl. Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most common kubectl commands needed to create Deployments that run your applications on a Kubernetes cluster.

When you create a Deployment, you'll need to specify the container image for your application and the number of replicas that you want to run. You can change that information later by updating your Deployment; Modules 5 and 6 of the bootcamp discuss how you can scale and update your Deployments.
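As a rough sketch of that workflow (the deployment name, image, and replica counts below are illustrative placeholders, not part of this tutorial):

```shell
# create a Deployment, specifying the container image and an initial replica count
kubectl create deployment my-app --image=nginx --replicas=3

# later, update the Deployment to change the number of replicas
kubectl scale deployment/my-app --replicas=5
```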

@@ -91,18 +93,69 @@

Deploying your first app on Kubernetes

For your first Deployment, you'll use a hello-node application packaged in a Docker container that uses NGINX to echo back all the requests. (If you didn't already try creating a hello-node application and deploying it using a container, you can do that first by following the instructions from the Hello Minikube tutorial).

- -

Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!

+

You will also need to have kubectl installed. If you need to install it, visit install tools.

+

Now that you know what Deployments are, let's deploy our first app!


+
+
+

kubectl basics

+

The common format of a kubectl command is: kubectl action resource

+

This performs the specified action (like create, describe or delete) on the specified resource (like node or deployment). You can use --help after the subcommand to get additional info about possible parameters (for example: kubectl get nodes --help).

+

Check that kubectl is configured to talk to your cluster by running the kubectl version command.
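For example, in your terminal run:

```shell
kubectl version
```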

+

Check that kubectl is installed and you can see both the client and the server versions.

+

To view the nodes in the cluster, run the kubectl get nodes command.
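For example:

```shell
kubectl get nodes
```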

+

You see the available nodes. Later, Kubernetes will choose where to deploy our application based on the resources available on each Node.

+
+
- Start Interactive Tutorial

Deploy an app

+

Let’s deploy our first app on Kubernetes with the kubectl create deployment command. We need to provide the deployment name and app image location (include the full repository URL for images hosted outside Docker Hub).

+

kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

+

Great! You just deployed your first application by creating a deployment. This performed a few things for you:

+
  • searched for a suitable node where an instance of the application could be run (we have only 1 available node)
  • scheduled the application to run on that Node
  • configured the cluster to reschedule the instance on a new Node when needed

To list your deployments, use the kubectl get deployments command:

+

kubectl get deployments

+

We see that there is 1 deployment running a single instance of your app. The instance is running inside a container on your node.
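If you want to look one level deeper, you can also list the Pod that the Deployment created for that instance (its name will include a generated suffix):

```shell
kubectl get pods
```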

+
+
+

View the app

+

Pods that are running inside Kubernetes are running on a private, isolated network. By default, they are visible from other Pods and Services within the same Kubernetes cluster, but not outside that network. When we use kubectl, we're interacting through an API endpoint to communicate with our application.

+

We will cover other options on how to expose your application outside the Kubernetes cluster in Module 4.

+

The kubectl command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminated by pressing Control-C and won't show any output while it's running.

+

You need to open a second terminal window to run the proxy.

+

kubectl proxy

We now have a connection between our host (the terminal) and the Kubernetes cluster. The proxy enables direct access to the API from these terminals.

+

You can see all those APIs hosted through the proxy endpoint. For example, we can query the version directly through the API using the curl command:

+

curl http://localhost:8001/version

+ +

The API server will automatically create an endpoint for each pod, based on the pod name, that is also accessible through the proxy.

+

First we need to get the Pod name, and we'll store it in the environment variable POD_NAME:

+

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

+

You can access the Pod through the proxied API by running:

+

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/

+

In order for the new Deployment to be accessible without using the proxy, a Service is required. Services will be explained in the next modules.
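As a preview of what those modules cover, exposing this Deployment with a Service could look roughly like the following (the Service type and port shown here are only an example):

```shell
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
```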

+
+ +
+
+

Once you're ready, move on to Viewing Pods and Nodes.

+

+
+
diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html
index af4b95004d7d8..956252b7d2c97 100644
--- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html
@@ -1,6 +1,10 @@
 ---
 title: Viewing Pods and Nodes
 weight: 10
+description: |-
+  Learn how to troubleshoot Kubernetes applications using
+  kubectl get, kubectl describe, kubectl logs and
+  kubectl exec.
 ---
@@ -105,12 +109,12 @@

Node overview

Troubleshooting with kubectl

-

In Module 2, you used Kubectl command-line interface. You'll continue to use it in Module 3 to get information about deployed applications and their environments. The most common operations can be done with the following kubectl commands:

+

In Module 2, you used the kubectl command-line interface. You'll continue to use it in Module 3 to get information about deployed applications and their environments. The most common operations can be done with the following kubectl subcommands:

You can use these commands to see when applications were deployed, what their current statuses are, where they are running and what their configurations are.

@@ -124,14 +128,72 @@

Troubleshooting with kubectl

-
- Start Interactive Tutorial

Check application configuration

+

Let's verify that the application we deployed in the previous module is running. We'll use the kubectl get command and look for existing Pods:

+

kubectl get pods

+

If no pods are running, please wait a couple of seconds and list the Pods again. You can continue once you see one Pod running.

+

Next, to view what containers are inside that Pod and what images are used to build those containers, we run the kubectl describe pods command:

+

kubectl describe pods

+

We see here details about the Pod’s container: IP address, the ports used and a list of events related to the lifecycle of the Pod.

+

The output of the describe subcommand is extensive and covers some concepts that we didn’t explain yet, but don’t worry, they will become familiar by the end of this bootcamp.

+

Note: the describe subcommand can be used to get detailed information about most of the Kubernetes primitives, including Nodes, Pods, and Deployments. The describe output is designed to be human readable, not to be scripted against.
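For example, the same subcommand works for other resource types (shown here only as an illustration, not as a required step in this module):

```shell
kubectl describe nodes
kubectl describe deployments
```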

+
+
+

Show the app in the terminal

+

Recall that Pods are running in an isolated, private network, so we need to proxy access to them in order to debug and interact with them. To do this, we'll use the kubectl proxy command to run a proxy in a second terminal. Open a new terminal window, and in that new terminal, run:

+

kubectl proxy

+

Now again, we'll get the Pod name and query that Pod directly through the proxy. To get the Pod name and store it in the POD_NAME environment variable:

+

export POD_NAME="$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')"
echo Name of the Pod: $POD_NAME

+

To see the output of our application, run a curl request:

+

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

+

The URL is the route to the API of the Pod.

+
+
+ +
+
+

View the container logs

+

Anything that the application would normally send to standard output becomes logs for the container within the Pod. We can retrieve these logs using the kubectl logs command:

+

kubectl logs "$POD_NAME"

+

Note: We don't need to specify the container name, because we only have one container inside the pod.
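If the Pod did contain several containers, you would select one with the -c flag. A Deployment created by kubectl create deployment normally names the container after its image, so this sketch assumes a container named kubernetes-bootcamp:

```shell
kubectl logs "$POD_NAME" -c kubernetes-bootcamp
```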

+
+
+ +
+
+

Executing command on the container

+

We can execute commands directly on the container once the Pod is up and running. For this, we use the exec subcommand and pass the name of the Pod as a parameter. Let’s list the environment variables:

+

kubectl exec "$POD_NAME" -- env

+

Again, it's worth mentioning that the name of the container itself can be omitted since we only have a single container in the Pod.

+

Next let’s start a bash session in the Pod’s container:

+

kubectl exec -ti "$POD_NAME" -- bash

+

We now have an open console on the container where we run our NodeJS application. The source code of the app is in the server.js file:

+

cat server.js

+

You can check that the application is up by running a curl command:

+

curl http://localhost:8080

+

Note: here we used localhost because we executed the command inside the NodeJS Pod. If you cannot connect to localhost:8080, check to make sure you have run the kubectl exec command and are launching the command from within the Pod.

+

To close your container connection, type exit.

+
+
+ + +
+

Once you're ready, move on to Using A Service To Expose Your App.

+

+
diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
index 680c45bdad788..9b4e3abd4a455 100644
--- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -1,6 +1,10 @@
 ---
 title: Using a Service to Expose Your App
 weight: 10
+description: |-
+  Learn about a Service in Kubernetes.
+  Understand how labels and selectors relate to a Service.
+  Expose an application outside a Kubernetes cluster.
 ---
@@ -18,7 +22,7 @@

Objectives

@@ -28,9 +32,9 @@

Overview of Kubernetes Services

Kubernetes Pods are mortal. Pods have a lifecycle. When a worker node dies, the Pods running on the Node are also lost. A ReplicaSet might then dynamically drive the cluster back to the desired state via the creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are exchangeable; the front-end system should not care about backend replicas or even if a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.

-

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a LabelSelector (see below for why you might want a Service without including a selector in the spec).

+

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML or JSON, like all Kubernetes object manifests. The set of Pods targeted by a Service is usually determined by a label selector (see below for why you might want a Service without including a selector in the spec).

-

Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:

+

Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the spec of the Service: