update kubectl "$" format (#13256)
Liujingfang1 authored and k8s-ci-robot committed Jun 11, 2019
1 parent 0c03137 commit b3e3332
Showing 9 changed files with 135 additions and 36 deletions.
5 changes: 4 additions & 1 deletion content/en/docs/concepts/configuration/assign-pod-node.md
@@ -340,7 +340,10 @@ If we create the above two deployments, our three node cluster should look like
As you can see, all three replicas of the `web-server` are automatically co-located with the cache, as expected.

```
$ kubectl get pods -o wide
kubectl get pods -o wide
```
The output is similar to this:
```
NAME READY STATUS RESTARTS AGE IP NODE
redis-cache-1450370735-6dzlj 1/1 Running 0 8m 10.192.4.2 kube-node-3
redis-cache-1450370735-j2j96 1/1 Running 0 8m 10.192.2.2 kube-node-1
@@ -240,8 +240,11 @@ options ndots:2 edns0

For IPv6 setup, the search path and name server should be set up like this:

```shell
kubectl exec -it dns-example -- cat /etc/resolv.conf
```
$ kubectl exec -it dns-example -- cat /etc/resolv.conf
The output is similar to this:
```shell
nameserver fd00:79:30::a
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
@@ -100,7 +100,10 @@ that just gets the name from each Pod in the returned list.
View the standard output of one of the pods:

```shell
$ kubectl logs $pods
kubectl logs $pods
```
The output is similar to this:
```shell
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
```
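
For context, a minimal sketch of how `$pods` can be populated beforehand, assuming the Job is named `pi` so its Pods carry the automatically added `job-name=pi` label:
```shell
# Capture the Pod name(s) created by the Job (sketch; the Job name "pi" is assumed).
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```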

18 changes: 17 additions & 1 deletion content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
@@ -181,20 +181,33 @@ for a kubelet when a Bootstrap Token was used when authenticating. If you don't
automatically approve kubelet client certs, you can turn it off by executing this command:
```shell
$ kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap
kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap
```
After that, `kubeadm join` will block until the admin has manually approved the CSR in flight:
```shell
kubectl get csr
```
The output is similar to this:
```
NAME AGE REQUESTOR CONDITION
node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 18s system:bootstrap:878f07 Pending
```
```shell
kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ
```
The output is similar to this:
```
certificatesigningrequest "node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ" approved
```
```shell
kubectl get csr
```
The output is similar to this:
```
NAME AGE REQUESTOR CONDITION
node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 1m system:bootstrap:878f07 Approved,Issued
```
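
If several CSRs are pending, a convenience sketch like the following approves everything returned by `kubectl get csr` in one pass; review the list before running it:
```shell
# Approve every CSR returned by "kubectl get csr" in one go (sketch; inspect the list first).
kubectl get csr -o name | xargs kubectl certificate approve
```
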
@@ -213,6 +226,9 @@ it off regardless. Doing so will disable the ability to use the `--discovery-tok

```shell
kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml
```
The output is similar to this:
```
apiVersion: v1
kind: Config
clusters:
5 changes: 4 additions & 1 deletion content/en/docs/setup/independent/create-cluster-kubeadm.md
@@ -326,7 +326,10 @@ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.5/examples/
Once all Cilium pods are marked as `READY`, you can start using your cluster.

```shell
$ kubectl get pods -n kube-system --selector=k8s-app=cilium
kubectl get pods -n kube-system --selector=k8s-app=cilium
```
The output is similar to this:
```
NAME READY STATUS RESTARTS AGE
cilium-drxkl 1/1 Running 0 18m
```
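
Alternatively, a sketch that blocks until the Cilium Pods report `Ready`, assuming your kubectl version includes `kubectl wait`:
```shell
# Wait until every Cilium pod reports Ready, or give up after 5 minutes (sketch).
kubectl -n kube-system wait --for=condition=Ready pod --selector=k8s-app=cilium --timeout=300s
```
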
@@ -123,10 +123,10 @@ The output is similar to this:

Using `jsonpath` approach:

```
$ APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
$ TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
```shell
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
"kind": "APIVersions",
"versions": [
6 changes: 5 additions & 1 deletion content/en/docs/tasks/manage-gpus/scheduling-gpus.md
@@ -174,7 +174,11 @@ For AMD GPUs, you can deploy [Node Labeller](https://github.com/RadeonOpenComput

Example result:

$ kubectl describe node cluster-node-23
```console
kubectl describe node cluster-node-23
```
The output is similar to:

Name: cluster-node-23
Roles: <none>
Labels: beta.amd.com/gpu.cu-count.64=1
4 changes: 2 additions & 2 deletions content/en/docs/test.md
@@ -121,7 +121,7 @@ mind:
`Numbered` above.**

```bash
$ ls -l
ls -l
```

- And a sub-list after some block-level content. This is at the same
@@ -146,7 +146,7 @@ Tabs may also nest formatting styles.
1. Lists

```bash
$ echo 'Tab lists may contain code blocks!'
echo 'Tab lists may contain code blocks!'
```

{{% /tab %}}
115 changes: 91 additions & 24 deletions content/en/docs/tutorials/services/source-ip.md
@@ -34,7 +34,10 @@ document. The examples use a small nginx webserver that echoes back the source
IP of requests it receives through an HTTP header. You can create it as follows:

```console
$ kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
```
The output is:
```
deployment.apps/source-ip-app created
```
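
To confirm the Deployment came up, a quick check (a sketch that assumes `kubectl run` applied its default `run=source-ip-app` label, which the cleanup steps later on this page also rely on):
```console
kubectl get pods -l run=source-ip-app -o wide
```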

@@ -59,31 +62,49 @@ which is the default since Kubernetes 1.2. Kube-proxy exposes its mode through
a `proxyMode` endpoint:

```console
$ kubectl get nodes
kubectl get nodes
```
The output is similar to this:
```
NAME STATUS ROLES AGE VERSION
kubernetes-minion-group-6jst Ready <none> 2h v1.13.0
kubernetes-minion-group-cx31 Ready <none> 2h v1.13.0
kubernetes-minion-group-jj1t Ready <none> 2h v1.13.0

```
Get the proxy mode on one of the nodes:
```console
kubernetes-minion-group-6jst $ curl localhost:10249/proxyMode
```
The output is:
```
iptables
```
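
If you cannot run commands on the node itself, the mode is often also recorded in the kube-proxy ConfigMap on kubeadm-based clusters (an assumption; the ConfigMap name and layout vary between setups):
```console
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i mode
```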

You can test source IP preservation by creating a Service over the source IP app:

```console
$ kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
```
The output is:
```
service/clusterip exposed

$ kubectl get svc clusterip
```
```console
kubectl get svc clusterip
```
The output is similar to:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.0.170.92 <none> 80/TCP 51s
```

And hitting the `ClusterIP` from a pod in the same cluster:

```console
$ kubectl run busybox -it --image=busybox --restart=Never --rm
kubectl run busybox -it --image=busybox --restart=Never --rm
```
The output is similar to this:
```
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
@@ -115,11 +136,16 @@ As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/concep
are source NAT'd by default. You can test this by creating a `NodePort` Service:

```console
$ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
```
The output is:
```
service/nodeport exposed
```

$ NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
$ NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="IPAddress")].address }')
```console
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
```
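
As a quick sanity check (a sketch), confirm both variables were populated before curling the nodes:
```console
echo "NODEPORT=$NODEPORT"
echo "NODES=$NODES"
```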

If you're running on a cloudprovider, you may need to open up a firewall-rule
@@ -128,7 +154,10 @@ Now you can try reaching the Service from outside the cluster through the node
port allocated above.

```console
$ for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
```
The output is similar to:
```
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
@@ -170,14 +199,20 @@ packet that make it through to the endpoint.
Set the `service.spec.externalTrafficPolicy` field as follows:

```console
$ kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```
The output is:
```
service/nodeport patched
```
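
You can confirm the patch took effect with a jsonpath query against the same field (a sketch):
```console
kubectl get svc nodeport -o jsonpath='{.spec.externalTrafficPolicy}'
```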

Now, re-run the test:

```console
$ for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
```
The output is:
```
client_address=104.132.1.79
```

@@ -219,14 +254,28 @@ described in the previous section).
You can test this by exposing the source-ip-app through a load balancer:

```console
$ kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
```
The output is:
```
service/loadbalancer exposed
```

$ kubectl get svc loadbalancer
Print the IPs of the Service:
```console
kubectl get svc loadbalancer
```
The output is similar to this:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
loadbalancer LoadBalancer 10.0.65.118 104.198.149.140 80/TCP 5m
```

$ curl 104.198.149.140
```console
curl 104.198.149.140
```
The output is similar to this:
```
CLIENT VALUES:
client_address=10.240.0.5
...
@@ -254,29 +303,44 @@ health check ---> node 1 node 2 <--- health check
You can test this by setting the `externalTrafficPolicy` field:

```console
$ kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

You should immediately see the `service.spec.healthCheckNodePort` field allocated
by Kubernetes:

```console
$ kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
```
The output is similar to this:
```
healthCheckNodePort: 32122
```
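
If you want to reuse that port in later commands, a sketch that captures it into a shell variable using the same jsonpath approach as elsewhere on this page:
```console
HEALTH_PORT=$(kubectl get svc loadbalancer -o jsonpath='{.spec.healthCheckNodePort}')
echo $HEALTH_PORT
```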

The `service.spec.healthCheckNodePort` field points to a port on every node
serving the health check at `/healthz`. You can test this:

```console
kubectl get pod -o wide -l run=source-ip-app
```
The output is similar to this:
```
$ kubectl get pod -o wide -l run=source-ip-app
NAME READY STATUS RESTARTS AGE IP NODE
source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-minion-group-6jst
```
Curl the `/healthz` endpoint on different nodes:
```console
kubernetes-minion-group-6jst $ curl localhost:32122/healthz
```
The output is similar to this:
```
1 Service Endpoints found
```
```console
kubernetes-minion-group-jj1t $ curl localhost:32122/healthz
```
The output is similar to this:
```
No Service Endpoints Found
```

@@ -286,7 +350,10 @@ pointing to this port/path on each node. Wait about 10 seconds for the 2 nodes
without endpoints to fail health checks, then curl the lb ip:

```console
$ curl 104.198.149.140
curl 104.198.149.140
```
The output is similar to this:
```
CLIENT VALUES:
client_address=104.132.1.79
...
@@ -322,13 +389,13 @@ the `service.spec.healthCheckNodePort` field on the Service.
Delete the Services:

```console
$ kubectl delete svc -l run=source-ip-app
kubectl delete svc -l run=source-ip-app
```

Delete the Deployment, ReplicaSet and Pod:

```console
$ kubectl delete deployment source-ip-app
kubectl delete deployment source-ip-app
```
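
To verify the cleanup (a sketch), check that nothing labelled `run=source-ip-app` is left behind:
```console
kubectl get svc,deployment,pods -l run=source-ip-app
```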

{{% /capture %}}
