Merge remote-tracking branch 'upstream/main' into dev-1.24
nate-double-u committed Feb 15, 2022
2 parents c3c3b1a + a04f1c9 commit 8b9e77d
Showing 82 changed files with 4,931 additions and 1,701 deletions.
5 changes: 5 additions & 0 deletions assets/scss/_custom.scss
@@ -402,6 +402,11 @@ body {
color: #000;
}

+.deprecation-warning.outdated-blog, .pageinfo.deprecation-warning.outdated-blog {
+  background-color: $blue;
+  color: $white;
+}

body.td-home .deprecation-warning, body.td-blog .deprecation-warning, body.td-documentation .deprecation-warning {
border-radius: 3px;
}
4 changes: 3 additions & 1 deletion content/en/docs/concepts/cluster-administration/logging.md
@@ -12,7 +12,9 @@ weight: 60
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.

However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.

For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.

In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.

<!-- body -->
@@ -141,7 +143,7 @@ as a `DaemonSet`.

Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.

-Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
+Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
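
To make this concrete, here is a minimal sketch modeled on the basic logging example in these docs: a Pod that writes one line per second to stdout, which a node-level agent can then collect from the container log files. The `counter` name and the busybox loop follow that example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Emit an incrementing counter plus a timestamp to stdout every second.
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```

You can read what the container wrote with `kubectl logs counter`.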

### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}

@@ -801,6 +801,6 @@ memory limit (and possibly request) for that container.
* Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
and its [resource requirements](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
-* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
+* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
* Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)

26 changes: 13 additions & 13 deletions content/en/docs/concepts/containers/container-lifecycle-hooks.md
@@ -105,22 +105,22 @@ The logs for a Hook handler are not exposed in Pod events.
If a handler fails for some reason, it broadcasts an event.
For `PostStart`, this is the `FailedPostStartHook` event,
and for `PreStop`, this is the `FailedPreStopHook` event.
-You can see these events by running `kubectl describe pod <pod_name>`.
-Here is some example output of events from running this command:
+To generate a `FailedPostStartHook` event yourself, modify the [lifecycle-events.yaml](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/lifecycle-events.yaml) file to change the postStart command to "badcommand" and apply it.
+Here is some example output of the resulting events you see from running `kubectl describe pod lifecycle-demo`:

```
Events:
-FirstSeen LastSeen Count From SubObjectPath Type Reason Message
---------- -------- ----- ---- ------------- -------- ------ -------
-1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
-1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
-1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
-1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
-1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
-38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
-37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
-38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
-1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
+Type Reason Age From Message
+---- ------ ---- ---- -------
+Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...
+Normal Pulled 6s kubelet Successfully pulled image "nginx" in 229.604315ms
+Normal Pulling 4s (x2 over 6s) kubelet Pulling image "nginx"
+Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container
+Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container
+Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container "lifecycle-demo-container" in Pod "lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)" failed - error: command 'badcommand' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \"badcommand\": executable file not found in $PATH: unknown\r\n"
+Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook
+Normal Pulled 4s kubelet Successfully pulled image "nginx" in 215.66395ms
+Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container
```
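
For reference, here is a sketch of the manifest change that produces the failure above. Everything except the postStart command follows the upstream lifecycle-events.yaml example, so check it against the current file before relying on it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Deliberately broken: "badcommand" is not in the image's PATH, so the
          # kubelet records a FailedPostStartHook event and kills the container.
          command: ["badcommand"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```

Apply it with `kubectl apply -f lifecycle-events.yaml`, then watch the events with `kubectl describe pod lifecycle-demo`.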


@@ -21,18 +21,21 @@ your own.

When you create a resource using a manifest file, you can specify finalizers in
the `metadata.finalizers` field. When you attempt to delete the resource, the
-controller that manages it notices the values in the `finalizers` field and does
-the following:
+API server handling the delete request notices the values in the `finalizers` field
+and does the following:

* Modifies the object to add a `metadata.deletionTimestamp` field with the
time you started the deletion.
-* Marks the object as read-only until its `metadata.finalizers` field is empty.
+* Prevents the object from being removed until its `metadata.finalizers` field is empty.
+* Returns a `202` status code (HTTP "Accepted")

+The controller managing that finalizer notices the update to the object setting the
+`metadata.deletionTimestamp`, indicating deletion of the object has been requested.
The controller then attempts to satisfy the requirements of the finalizers
specified for that resource. Each time a finalizer condition is satisfied, the
controller removes that key from the resource's `finalizers` field. When the
-field is empty, garbage collection continues. You can also use finalizers to
-prevent deletion of unmanaged resources.
+`finalizers` field is emptied, an object with a `deletionTimestamp` field set
+is automatically deleted. You can also use finalizers to prevent deletion of unmanaged resources.
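
As an illustration, here is a minimal sketch of an object carrying a finalizer. The finalizer name `example.com/my-finalizer` is hypothetical; a real one would be serviced by a controller that performs some cleanup and then removes the entry:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap
  finalizers:
  # Hypothetical finalizer name, for illustration only.
  - example.com/my-finalizer
```

If you run `kubectl delete configmap mymap`, the API server sets `metadata.deletionTimestamp` and returns 202, but the ConfigMap remains visible until something removes the entry from `metadata.finalizers`.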

A common example of a finalizer is `kubernetes.io/pv-protection`, which prevents
accidental deletion of `PersistentVolume` objects. When a `PersistentVolume`
@@ -63,16 +66,18 @@ Kubernetes also processes finalizers when it identifies owner references on a
resource targeted for deletion.

In some situations, finalizers can block the deletion of dependent objects,
-which can cause the targeted owner object to remain in a read-only state for
+which can cause the targeted owner object to remain for
longer than expected without being fully deleted. In these situations, you
should check finalizers and owner references on the target owner and dependent
objects to troubleshoot the cause.

{{<note>}}
-In cases where objects are stuck in a deleting state, try to avoid manually
+In cases where objects are stuck in a deleting state, avoid manually
removing finalizers to allow deletion to continue. Finalizers are usually added
to resources for a reason, so forcefully removing them can lead to issues in
-your cluster.
+your cluster. This should only be done when the purpose of the finalizer is
+understood and is accomplished in another way (for example, manually cleaning
+up some dependent object).
{{</note>}}
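
If you have confirmed that the finalizer's purpose is understood and its cleanup has been handled some other way, a last-resort sketch for clearing the field (shown here against the hypothetical `mymap` ConfigMap above) is:

```shell
# Removes the entire metadata.finalizers field, allowing deletion to complete.
kubectl patch configmap mymap --type=json \
  --patch='[{"op": "remove", "path": "/metadata/finalizers"}]'
```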

## {{% heading "whatsnext" %}}
@@ -48,6 +48,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
+* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
@@ -30,7 +30,7 @@ Routing". When calculating the endpoints for a {{< glossary_tooltip term_id="Ser
the EndpointSlice controller considers the topology (region and zone) of each endpoint
and populates the hints field to allocate it to a zone.
Cluster components such as the {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
-can then consume those hints, and use them to influence how traffic to is routed
+can then consume those hints, and use them to influence how the traffic is routed
(favoring topologically closer endpoints).
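
As a sketch of how the feature is switched on in this release, you set the `service.kubernetes.io/topology-aware-hints` annotation to `auto` on a Service; the name, selector, and ports below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                # placeholder
  annotations:
    # Asks the EndpointSlice controller to populate per-zone hints
    # when it judges it safe to do so.
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: my-app                   # placeholder
  ports:
  - port: 80
    targetPort: 8080
```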

## Using Topology Aware Hints
2 changes: 1 addition & 1 deletion content/en/docs/concepts/workloads/controllers/job.md
@@ -308,7 +308,7 @@ cleaned up by CronJobs based on the specified capacity-based cleanup policy.

### TTL mechanism for finished Jobs

{{< feature-state for_k8s_version="v1.21" state="beta" >}}
{{< feature-state for_k8s_version="v1.23" state="stable" >}}

Another way to clean up finished Jobs (either `Complete` or `Failed`)
automatically is to use a TTL mechanism provided by a
4 changes: 2 additions & 2 deletions content/en/docs/contribute/review/reviewing-prs.md
@@ -36,7 +36,7 @@ Before you start a review:

## Review process

-In general, review pull requests for content and style in English. The figure below outlines the steps for the review process. The details for each step follow.
+In general, review pull requests for content and style in English. Figure 1 outlines the steps for the review process. The details for each step follow.

<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
@@ -67,7 +67,7 @@ class S,T spacewhite
class third,fourth white
{{</ mermaid >}}

-***Figure - Review process steps***
+Figure 1. Review process steps.

1. Go to
[https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls).
