
Consolidate YAML files [part-12] (#9364)

* Consolidate YAML files [part-12]

Relocate the YAML files referenced by the access-application-cluster topics
and by the rest of the cluster administration topics.

* Adjust json shortcodes.
tengqm authored and k8s-ci-robot committed Jul 4, 2018
1 parent aed6732 commit ea6004bd4f0eea57f40a7c1790d9252d6c80e5d0
Showing with 120 additions and 253 deletions.
  1. +2 −2 content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md
  2. +8 −8 content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
  3. +0 −4 content/en/docs/tasks/access-application-cluster/hello/Dockerfile
  4. +0 −7 content/en/docs/tasks/access-application-cluster/hello/README
  5. +0 −74 content/en/docs/tasks/access-application-cluster/hello/main.go
  6. +0 −33 content/en/docs/tasks/access-application-cluster/redis-master.yaml
  7. +5 −5 content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md
  8. +2 −2 content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
  9. +1 −1 content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
  10. +6 −6 content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
  11. +4 −4 content/en/docs/tasks/administer-cluster/namespaces.md
  12. +2 −2 content/en/docs/tasks/administer-cluster/running-cloud-controller.md
  13. 0 ...-cluster/cloud-controller-manager-daemonset-example.yaml → examples/admin/cloud/ccm-example.yaml}
  14. 0 ...rsistent-volume-label-initializer-config.yaml → examples/admin/cloud/pvl-initializer-config.yaml}
  15. 0 content/en/{docs/tasks/administer-cluster → examples/admin/dns}/busybox.yaml
  16. 0 content/en/{docs/tasks/administer-cluster → examples/admin/dns}/dns-horizontal-autoscaler.yaml
  17. 0 content/en/{docs/tasks/administer-cluster → examples/admin}/namespace-dev.json
  18. 0 content/en/{docs/tasks/administer-cluster → examples/admin}/namespace-prod.json
  19. 0 content/en/{docs/tasks/administer-cluster → examples/admin/sched}/my-scheduler.yaml
  20. 0 content/en/{docs/tasks/administer-cluster → examples/admin/sched}/pod1.yaml
  21. 0 content/en/{docs/tasks/administer-cluster → examples/admin/sched}/pod2.yaml
  22. 0 content/en/{docs/tasks/administer-cluster → examples/admin/sched}/pod3.yaml
  23. 0 content/en/{docs/tasks/access-application-cluster → examples/pods}/two-container-pod.yaml
  24. 0 content/en/{docs/tasks/access-application-cluster/frontend → examples/service/access}/Dockerfile
  25. 0 content/en/{docs/tasks/access-application-cluster/frontend → examples/service/access}/frontend.conf
  26. 0 content/en/{docs/tasks/access-application-cluster → examples/service/access}/frontend.yaml
  27. 0 content/en/{docs/tasks/access-application-cluster → examples/service/access}/hello-service.yaml
  28. 0 content/en/{docs/tasks/access-application-cluster → examples/service/access}/hello.yaml
  29. +90 −105 test/examples_test.go
@@ -27,7 +27,7 @@ In this exercise, you create a Pod that runs two Containers. The two containers
share a Volume that they can use to communicate. Here is the configuration file
for the Pod:
{{< code file="two-container-pod.yaml" >}}
{{< codenew file="pods/two-container-pod.yaml" >}}
In the configuration file, you can see that the Pod has a Volume named
`shared-data`.
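For reference, a minimal sketch of the kind of manifest the relocated `pods/two-container-pod.yaml` holds; the images, mount paths, and the written message are illustrative assumptions, not a verbatim copy of the file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  # One emptyDir volume shared by both containers.
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    # nginx serves whatever lands in its html root.
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    # Write a file into the shared volume, then exit.
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
```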
@@ -44,7 +44,7 @@ directory of the nginx server.
Create the Pod and the two Containers:
-kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/two-container-pod.yaml
+kubectl create -f https://k8s.io/examples/pods/two-container-pod.yaml
View information about the Pod and the Containers:
@@ -43,12 +43,12 @@ frontend and backend are connected using a Kubernetes Service object.
The backend is a simple hello greeter microservice. Here is the configuration
file for the backend Deployment:
{{< code file="hello.yaml" >}}
{{< codenew file="service/access/hello.yaml" >}}
Create the backend Deployment:
```
-kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello.yaml
+kubectl create -f https://k8s.io/examples/service/access/hello.yaml
```
View information about the backend Deployment:
@@ -103,15 +103,15 @@ selector labels to find the Pods that it routes traffic to.
First, explore the Service configuration file:
{{< code file="hello-service.yaml" >}}
{{< codenew file="service/access/hello-service.yaml" >}}
In the configuration file, you can see that the Service routes traffic to Pods
that have the labels `app: hello` and `tier: backend`.
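The selector described above would look roughly like this in `service/access/hello-service.yaml` (a sketch, not the verbatim file; the named target port is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  # Route traffic to backend Pods carrying both labels.
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http  # assumes the container port is named "http"
```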
Create the `hello` Service:
```
-kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello-service.yaml
+kubectl create -f https://k8s.io/examples/service/access/hello-service.yaml
```
At this point, you have a backend Deployment running, and you have a
@@ -127,18 +127,18 @@ of the `name` field in the preceding Service configuration file.
The Pods in the frontend Deployment run an nginx image that is configured
to find the hello backend Service. Here is the nginx configuration file:
{{< code file="frontend/frontend.conf" >}}
{{< codenew file="service/access/frontend.conf" >}}
Similar to the backend, the frontend has a Deployment and a Service. The
configuration for the Service has `type: LoadBalancer`, which means that
the Service uses the default load balancer of your cloud provider.
{{< code file="frontend.yaml" >}}
{{< codenew file="service/access/frontend.yaml" >}}
Create the frontend Deployment and Service:
```
-kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/frontend.yaml
+kubectl create -f https://k8s.io/examples/service/access/frontend.yaml
```
The output verifies that both resources were created:
@@ -149,7 +149,7 @@ service "frontend" created
```
**Note**: The nginx configuration is baked into the
-[container image](/docs/tasks/access-application-cluster/frontend/Dockerfile).
+[container image](/examples/service/access/Dockerfile).
A better way to do this would be to use a
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), so
that you can change the configuration more easily.

This file was deleted: content/en/docs/tasks/access-application-cluster/hello/Dockerfile

This file was deleted: content/en/docs/tasks/access-application-cluster/hello/README

This file was deleted: content/en/docs/tasks/access-application-cluster/hello/main.go

This file was deleted: content/en/docs/tasks/access-application-cluster/redis-master.yaml
@@ -73,7 +73,7 @@ for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment
thereby making the scheduler resilient to failures. Here is the deployment
config. Save it as `my-scheduler.yaml`:
{{< code file="my-scheduler.yaml" >}}
{{< codenew file="admin/sched/my-scheduler.yaml" >}}
An important thing to note here is that the name of the scheduler specified as an
argument to the scheduler command in the container spec should be unique. This is the name that is matched against the value of the optional `spec.schedulerName` on pods, to determine whether this scheduler is responsible for scheduling a particular pod.
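The uniqueness requirement boils down to one flag in the container spec; a hypothetical sketch of the shape of `admin/sched/my-scheduler.yaml` (image name and the other flags are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      component: my-scheduler
  template:
    metadata:
      labels:
        component: my-scheduler
    spec:
      containers:
      - name: kube-second-scheduler
        image: gcr.io/my-gcp-project/my-kube-scheduler:1.0  # assumed custom build
        command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --leader-elect=false
        # This name must be unique; pods opt in via spec.schedulerName.
        - --scheduler-name=my-scheduler
```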
@@ -149,7 +149,7 @@ scheduler in that pod spec. Let's look at three examples.
- Pod spec without any scheduler name
{{< code file="pod1.yaml" >}}
{{< codenew file="admin/sched/pod1.yaml" >}}
When no scheduler name is supplied, the pod is automatically scheduled using the
default-scheduler.
@@ -162,7 +162,7 @@ kubectl create -f pod1.yaml
- Pod spec with `default-scheduler`
{{< code file="pod2.yaml" >}}
{{< codenew file="admin/sched/pod2.yaml" >}}
A scheduler is specified by supplying the scheduler name as a value to `spec.schedulerName`. In this case, we supply the name of the
default scheduler which is `default-scheduler`.
@@ -175,7 +175,7 @@ kubectl create -f pod2.yaml
- Pod spec with `my-scheduler`
{{< code file="pod3.yaml" >}}
{{< codenew file="admin/sched/pod3.yaml" >}}
In this case, we specify that this pod should be scheduled using the scheduler that we
deployed - `my-scheduler`. Note that the value of `spec.schedulerName` should match the name supplied to the scheduler
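A sketch of such a pod spec, with an assumed pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-second-scheduler  # hypothetical name
spec:
  # Must match the --scheduler-name given to the custom scheduler.
  schedulerName: my-scheduler
  containers:
  - name: pause
    image: k8s.gcr.io/pause:2.0
```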
@@ -215,4 +215,4 @@ verify that the pods were scheduled by the desired schedulers.
kubectl get events
```
-{{% /capture %}}
+{{% /capture %}}
@@ -22,12 +22,12 @@ This page provides hints on diagnosing DNS problems.
Create a file named busybox.yaml with the following contents:
{{< code file="busybox.yaml" >}}
{{< codenew file="admin/dns/busybox.yaml" >}}
Then create a pod using this file and verify its status:
```shell
-$ kubectl create -f busybox.yaml
+$ kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
pod "busybox" created
$ kubectl get pods busybox
@@ -94,7 +94,7 @@ container based on the `cluster-proportional-autoscaler-amd64` image.
Create a file named `dns-horizontal-autoscaler.yaml` with this content:
{{< code file="dns-horizontal-autoscaler.yaml" >}}
{{< codenew file="admin/dns/dns-horizontal-autoscaler.yaml" >}}
In the file, replace `<SCALE_TARGET>` with your scale target.
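The `<SCALE_TARGET>` placeholder sits in the autoscaler's command line; a hypothetical sketch of the Deployment (image tag and default parameters are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-autoscaler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: dns-autoscaler
  template:
    metadata:
      labels:
        k8s-app: dns-autoscaler
    spec:
      containers:
      - name: autoscaler
        image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.2  # assumed tag
        command:
        - /cluster-proportional-autoscaler
        - --namespace=kube-system
        - --configmap=dns-autoscaler
        # e.g. Deployment/coredns or Deployment/kube-dns
        - --target=<SCALE_TARGET>
        - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}}
        - --logtostderr=true
        - --v=2
```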
@@ -67,24 +67,24 @@ One pattern this organization could follow is to partition the Kubernetes cluste
Let's create two new namespaces to hold our work.
-Use the file [`namespace-dev.json`](/docs/tasks/administer-cluster/namespace-dev.json) which describes a development namespace:
+Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace:
{{< code language="json" file="namespace-dev.json" >}}
{{< codenew language="json" file="admin/namespace-dev.json" >}}
Create the development namespace using kubectl.
```shell
-$ kubectl create -f https://k8s.io/docs/tasks/administer-cluster/namespace-dev.json
+$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```
-Save the following contents into file [`namespace-prod.json`](/docs/tasks/administer-cluster/namespace-prod.json) which describes a production namespace:
+Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a production namespace:
{{< code language="json" file="namespace-prod.json" >}}
{{< codenew language="json" file="admin/namespace-prod.json" >}}
And then let's create the production namespace using kubectl.
```shell
-$ kubectl create -f https://k8s.io/docs/tasks/administer-cluster/namespace-prod.json
+$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
@@ -140,20 +140,20 @@ One pattern this organization could follow is to partition the Kubernetes cluste
Let's create two new namespaces to hold our work.
-Use the file [`namespace-dev.json`](/docs/tasks/administer-cluster/namespace-dev.json) which describes a development namespace:
+Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace:
{{< code language="json" file="namespace-dev.json" >}}
{{< codenew language="json" file="admin/namespace-dev.json" >}}
Create the development namespace using kubectl.
```shell
-$ kubectl create -f docs/tasks/administer-cluster/namespace-dev.json
+$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```
And then let's create the production namespace using kubectl.
```shell
-$ kubectl create -f docs/tasks/administer-cluster/namespace-prod.json
+$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
```
To be sure things are right, list all of the namespaces in our cluster.
@@ -41,7 +41,7 @@ Successfully running cloud-controller-manager requires some changes to your clus
since the cloud controller manager takes over labeling persistent volumes.
* For the `cloud-controller-manager` to label persistent volumes, initializers must be enabled and an InitializerConfiguration must be added to the system. Follow [these instructions](/docs/admin/extensible-admission-controllers.md#enable-initializers-alpha-feature) to enable initializers. Use the following YAML to create the InitializerConfiguration:
{{< code file="persistent-volume-label-initializer-config.yaml" >}}
{{< codenew file="admin/cloud/pvl-initializer-config.yaml" >}}
Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:
@@ -71,7 +71,7 @@ For cloud controller managers not in Kubernetes core, you can find the respectiv
For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a DaemonSet in your cluster. Use the following as a guideline:
{{< code file="cloud-controller-manager-daemonset-example.yaml" >}}
{{< codenew file="admin/cloud/ccm-example.yaml" >}}
## Limitations
