
Commit

clean up use of word: simply
kbhawkey committed Feb 7, 2021
1 parent 39edec0 commit 3fd6548
Showing 38 changed files with 302 additions and 237 deletions.
@@ -45,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/

`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.

-It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
+It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.

A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
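
For instance — a sketch; the local directory layout here is illustrative, not part of this commit:

```shell
# Apply a single file fetched over HTTPS:
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml

# Or apply every manifest in a local directory, including subdirectories:
kubectl apply -f ./nginx/ --recursive
```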

@@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
## Updating labels

Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
-For example, if you want to label all your nginx pods as frontend tier, simply run:
+For example, if you want to label all your nginx pods as frontend tier, run:

```shell
kubectl label pods -l app=nginx tier=fe
```
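
To confirm the relabeling, a quick check along these lines works (`-L` adds a column per label key; the pod names shown will be whatever your Deployment created):

```shell
# List the nginx pods with their "tier" label shown as a column.
kubectl get pods -l app=nginx -L tier
```
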
@@ -411,7 +411,7 @@ and

## Disruptive updates

-In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
+In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:

```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```
@@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
```
deployment.apps/my-nginx scaled
```

-To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
+To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.

```shell
kubectl edit deployment/my-nginx
```
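
If you prefer not to open an editor, an equivalent one-liner is a `kubectl set image` call like this sketch (it assumes the container inside the Deployment is named `nginx`):

```shell
# Update the container image in place; this triggers the same rollout as editing.
kubectl set image deployment/my-nginx nginx=nginx:1.16.1
```
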
2 changes: 1 addition & 1 deletion content/en/docs/concepts/configuration/configmap.md
@@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting
+A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
2 changes: 1 addition & 1 deletion content/en/docs/concepts/configuration/secret.md
@@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-A Secret can be either propagated by watch (default), ttl-based, or simply redirecting
+A Secret can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
@@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi

## Custom controllers

-On their own, custom resources simply let you store and retrieve structured data.
+On their own, custom resources let you store and retrieve structured data.
When you combine a custom resource with a *custom controller*, custom resources
provide a true _declarative API_.
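
As a minimal sketch of that store-and-retrieve behavior, a CRD like the following (the `stable.example.com` group and `CronTab` kind are illustrative) defines a new type whose objects the API server will store and serve, but on which nothing acts until a controller is added:

```shell
# Register a hypothetical CronTab custom resource type.
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
EOF
```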

@@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:

Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.

-Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended.
+Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.

CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.

@@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two kubelet command-line parameters to keep in mind when using plugins:

* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
-* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
+* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.
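
Put together, a kubelet started with these flags would look roughly like the sketch below (the paths are the usual defaults; every other required kubelet flag is omitted for brevity):

```shell
# Select the CNI plugin machinery and tell the kubelet where to probe for binaries.
kubelet --network-plugin=cni --cni-bin-dir=/opt/cni/bin
```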

## Network Plugin Requirements

@@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
-The application can simply use it as a service.
+The application can access the message queue as a service.

## Architecture

@@ -98,7 +98,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
### _Equality-based_ requirement

_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
-Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
+Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:

```
environment = production
```
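
Requirements like these are what `kubectl` accepts with `-l`; for example (the label values here are hypothetical):

```shell
# Select pods in production that are not in the frontend tier.
kubectl get pods -l 'environment=production,tier!=frontend'
```
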
2 changes: 1 addition & 1 deletion content/en/docs/concepts/policy/pod-security-policy.md
@@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod

Define the example PodSecurityPolicy object in a file. This is a policy that
-simply prevents the creation of privileged pods.
+prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
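
A minimal policy along these lines (a sketch, not necessarily the exact example file) sets `privileged: false` and leaves everything else unrestricted:

```shell
kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # the one restriction: no privileged pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
EOF
```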

@@ -261,7 +261,7 @@ for performance and security reasons, there are some constraints on topologyKey:
and `preferredDuringSchedulingIgnoredDuringExecution`.
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
-3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
+3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
4. Except for the above cases, the `topologyKey` can be any legal label-key.
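
For illustration, a hedged sketch of the usual case from rule 3 — anti-affinity keyed on `kubernetes.io/hostname`, so that replicas of a hypothetical `web-store` app land on different nodes:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web-store
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["web-store"]
          topologyKey: kubernetes.io/hostname  # one web-store pod per node
  containers:
    - name: web
      image: nginx:1.14.2
EOF
```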

In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
@@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.

{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
-checks all the nodes, simply because there are not enough feasible nodes to stop
+checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.

In a small cluster, if you set a low value for `percentageOfNodesToScore`, your
@@ -25,9 +25,9 @@ assigned a DNS name. By default, a client Pod's DNS search list will
include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:

-Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
-in namespace `bar` can look up this service by simply doing a DNS query for
-`foo`. A Pod running in namespace `quux` can look up this service by doing a
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
+in namespace `bar` can look up this service by querying a DNS service for
+`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
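
In concrete terms, run from inside a Pod (a sketch; output depends on your cluster's DNS setup):

```shell
# From a Pod in namespace "bar" — the short name resolves:
nslookup foo

# From a Pod in namespace "quux" — qualify with the namespace:
nslookup foo.bar
```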

The following sections detail the supported record types and layout that is
4 changes: 2 additions & 2 deletions content/en/docs/concepts/services-networking/service.md
@@ -430,7 +430,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
-should be able to find it by simply doing a name lookup for `my-service`
+should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).

Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
@@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.

This means that Service owners can choose any port they want without risk of
-collision. Clients can simply connect to an IP and port, without being aware
+collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.

#### iptables
2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/persistent-volumes.md
@@ -487,7 +487,7 @@ The following volume types support mount options:
* VsphereVolume
* iSCSI

-Mount options are not validated, so mount will simply fail if one is invalid.
+Mount options are not validated. If a mount option is invalid, the mount fails.
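
For example, a PersistentVolume carrying mount options might look like this sketch (the NFS server address and export path are placeholders; whether the options are valid only surfaces at mount time):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-with-mount-options
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard          # not validated here; a typo shows up as a mount failure
    - nfsvers=4.1
  nfs:
    server: 192.0.2.10
    path: /exports
EOF
```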

In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation is still working; however,
2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/storage-classes.md
@@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.

If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either
-the class or PV, so mount of the PV will simply fail if one is invalid.
+the class or PV. If a mount option is invalid, the PV mount fails.
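
The class-level equivalent is a sketch like the following (the provisioner name is hypothetical); the options are copied onto PVs the class provisions:

```shell
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs
mountOptions:
  - hard
  - nfsvers=4.1
EOF
```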

### Volume Binding Mode

2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/volume-pvc-datasource.md
@@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add

A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.

-The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
+The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
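
In practice that looks like an ordinary PVC with a `dataSource` pointing at the source claim — a sketch, with placeholder names and storage class; source and clone typically must use the same CSI driver:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-1
spec:
  storageClassName: csi-storageclass
  dataSource:
    name: pvc-1                      # an existing, bound, not-in-use PVC
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```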

Users need to be aware of the following when using this feature:

25 changes: 16 additions & 9 deletions content/en/docs/concepts/workloads/controllers/deployment.md
@@ -47,7 +47,7 @@ In this example:
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.
-In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
+In this case, you select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule.
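
The pairing described in that bullet looks like this in full — a sketch of the Deployment under discussion, where the selector's `matchLabels` must match the template's labels:

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx        # how the Deployment finds its Pods...
  template:
    metadata:
      labels:
        app: nginx      # ...which the Pod template must carry
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
EOF
```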

@@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:
```shell
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
```
-or simply use the following command:

+or use the following command:

```shell
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
```

-The output is similar to this:
+The output is similar to:

```
deployment.apps/nginx-deployment image updated
```
@@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
```shell
kubectl edit deployment.v1.apps/nginx-deployment
```

-The output is similar to this:
+The output is similar to:

```
deployment.apps/nginx-deployment edited
```
@@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
```shell
kubectl rollout status deployment/nginx-deployment
```

The output is similar to this:

```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
```

or

```
deployment "nginx-deployment" successfully rolled out
```
@@ -212,10 +218,11 @@ Get more details on your updated Deployment:

* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
The output is similar to this:
-```
-NAME READY UP-TO-DATE AVAILABLE AGE
-nginx-deployment 3/3 3 3 36s
-```

+```ini
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx-deployment 3/3 3 3 36s
+```

* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
@@ -180,16 +180,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl will wait
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.

-When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
+When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).
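
With kubectl the same three steps can be spelled out by hand — a sketch, with hypothetical names:

```shell
# 1. Scale the ReplicationController down to zero replicas.
kubectl scale rc my-rc --replicas=0

# 2. Wait for the managed pods to disappear (the label is whatever your RC selects on).
kubectl wait --for=delete pod -l app=my-app --timeout=60s

# 3. Delete the now-empty ReplicationController.
kubectl delete rc my-rc
```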

-### Deleting just a ReplicationController
+### Deleting only a ReplicationController

You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).

-When using the REST API or go client library, simply delete the ReplicationController object.
+When using the REST API or Go client library, you can delete the ReplicationController object.
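
In kubectl terms, the orphaning delete described above is one command (the name is hypothetical):

```shell
# Delete the ReplicationController but leave its pods running.
kubectl delete rc my-rc --cascade=false
```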

Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, the new one will adopt the old pods.
@@ -240,7 +240,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic

## Responsibilities of the ReplicationController

-The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).

@@ -52,7 +52,7 @@ Members can:

{{< note >}}
Using `/lgtm` triggers automation. If you want to provide non-binding
-approval, simply commenting "LGTM" works too!
+approval, commenting "LGTM" works too!
{{< /note >}}

- Use the `/hold` comment to block merging for a pull request
