tweak line wrappings in storage/
windsonsea committed Jun 1, 2023
1 parent 903aca3 commit 2886944
Showing 3 changed files with 71 additions and 36 deletions.
42 changes: 32 additions & 10 deletions content/en/docs/concepts/storage/volume-health-monitoring.md

{{< feature-state for_k8s_version="v1.21" state="alpha" >}}

{{< glossary_tooltip text="CSI" term_id="csi" >}} volume health monitoring allows
CSI Drivers to detect abnormal volume conditions from the underlying storage systems
and report them as events on {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}
or {{< glossary_tooltip text="Pods" term_id="pod" >}}.

<!-- body -->

## Volume health monitoring

Kubernetes _volume health monitoring_ is part of how Kubernetes implements the
Container Storage Interface (CSI). The volume health monitoring feature is implemented
in two components: an External Health Monitor controller, and the
{{< glossary_tooltip term_id="kubelet" text="kubelet" >}}.

If a CSI Driver supports Volume Health Monitoring feature from the controller side,
an event will be reported on the related
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC)
when an abnormal volume condition is detected on a CSI volume.

The External Health Monitor {{< glossary_tooltip text="controller" term_id="controller" >}}
also watches for node failure events. You can enable node failure monitoring by setting
the `enable-node-watcher` flag to true. When the external health monitor detects a node
failure event, the controller reports an Event on the PVC to indicate that pods
using this PVC are on a failed node.
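As a sketch, enabling the node watcher typically means passing the flag to the
external health monitor sidecar container in the CSI driver's controller Deployment.
The container name, image tag, socket path, and companion args below are illustrative
assumptions; only the `--enable-node-watcher` flag comes from the text above.

```yaml
# Illustrative sidecar container spec for the CSI external health monitor
# controller; adapt names and paths to your driver's actual deployment.
- name: csi-external-health-monitor-controller
  image: registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.9.0  # example tag
  args:
    - "--csi-address=$(ADDRESS)"
    - "--enable-node-watcher=true"   # turn on node failure monitoring
  env:
    - name: ADDRESS
      value: /csi/csi-controller.sock
  volumeMounts:
    - name: socket-dir
      mountPath: /csi
```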

If a CSI Driver supports Volume Health Monitoring feature from the node side,
an Event will be reported on every Pod using the PVC when an abnormal volume
condition is detected on a CSI volume. In addition, Volume Health information
is exposed as kubelet VolumeStats metrics. A new metric,
`kubelet_volume_stats_health_status_abnormal`, is added. This metric has two
labels: `namespace` and `persistentvolumeclaim`. Its value is either 1 or 0:
1 indicates the volume is unhealthy, 0 indicates the volume is healthy.
For more information, see the
[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes).
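For illustration, a scrape of the kubelet metrics endpoint might include a sample
like the following for an unhealthy volume; the namespace and claim name here are
hypothetical:

```
kubelet_volume_stats_health_status_abnormal{namespace="default",persistentvolumeclaim="my-pvc"} 1
```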

{{< note >}}
You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to use this feature from the node side.
{{< /note >}}
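As a minimal sketch, one way to enable the gate is through the kubelet configuration
file. This fragment assumes the `kubelet.config.k8s.io/v1beta1` API and shows only
the relevant field; how kubelet configuration is managed depends on your distribution.

```yaml
# Fragment of a KubeletConfiguration file; all fields other than
# featureGates are omitted here. This is a sketch, not a complete config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIVolumeHealth: true
```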

## {{% heading "whatsnext" %}}

See the [CSI driver documentation](https://kubernetes-csi.github.io/docs/drivers.html)
to find out which CSI drivers have implemented this feature.
48 changes: 30 additions & 18 deletions content/en/docs/concepts/storage/volume-pvc-datasource.md

<!-- overview -->

This document describes the concept of cloning existing CSI Volumes in Kubernetes.
Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.

<!-- body -->

## Introduction

The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds
support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s
in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.

A Clone is defined as a duplicate of an existing Kubernetes Volume that can be
consumed as any standard Volume would be. The only difference is that upon
provisioning, rather than creating a "new" empty Volume, the back end device
creates an exact duplicate of the specified Volume.

The implementation of cloning, from the perspective of the Kubernetes API, adds
the ability to specify an existing PVC as a dataSource during new PVC creation.
The source PVC must be bound and available (not in use).

Users need to be aware of the following when using this feature:

* Cloning support (`VolumePVCDataSource`) is only available for CSI drivers.
* Cloning support is only available for dynamic provisioners.
* CSI drivers may or may not have implemented the volume cloning functionality.
* You can only clone a PVC when it exists in the same namespace as the destination PVC
(source and destination must be in the same namespace).
* Cloning is supported with a different Storage Class.
- Destination volume can be the same or a different storage class as the source.
- Default storage class can be used and storageClassName omitted in the spec.
* Cloning can only be performed between two volumes that use the same VolumeMode setting
(if you request a block mode volume, the source MUST also be block mode)

## Provisioning

Clones are provisioned like any other PVC with the exception of adding a dataSource
that references an existing PVC in the same namespace.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-1
  namespace: myns
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: cloning
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1
```

{{< note >}}
You must specify a capacity value for `spec.resources.requests.storage`, and the
value you specify must be the same or larger than the capacity of the source volume.
{{< /note >}}

The result is a new PVC with the name `clone-of-pvc-1` that has the exact same
content as the specified source `pvc-1`.

## Usage

Upon availability of the new PVC, the cloned PVC is consumed the same as any other PVC.
It's also expected at this point that the newly created PVC is an independent object.
It can be consumed, cloned, snapshotted, or deleted independently and without
consideration for its original dataSource PVC. This also implies that the source
is not linked in any way to the newly created clone; it may be modified or
deleted without affecting the newly created clone.
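To make "consumed the same as any other PVC" concrete, here is a minimal sketch of
a Pod mounting the cloned claim. The Pod name and image are illustrative assumptions;
only the claim name `clone-of-pvc-1` comes from the example above.

```yaml
# Hypothetical Pod that mounts the cloned PVC just like any other claim.
apiVersion: v1
kind: Pod
metadata:
  name: app-using-clone    # illustrative name
  namespace: myns
spec:
  containers:
  - name: app
    image: registry.k8s.io/e2e-test-images/busybox:1.29   # example image
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: cloned-volume
  volumes:
  - name: cloned-volume
    persistentVolumeClaim:
      claimName: clone-of-pvc-1
```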
17 changes: 9 additions & 8 deletions content/en/docs/concepts/storage/volume-snapshot-classes.md
This document describes the concept of VolumeSnapshotClass in Kubernetes. Familiarity
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.




<!-- body -->

## Introduction
Administrators set the name and other parameters
of a class when first creating VolumeSnapshotClass objects, and the objects cannot
be updated once they are created.

{{< note >}}
Installation of the CRDs is the responsibility of the Kubernetes distribution.
Without the required CRDs present, the creation of a VolumeSnapshotClass fails.
{{< /note >}}

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io
deletionPolicy: Delete
parameters:
```

### Driver

Volume snapshot classes have a driver that determines what CSI volume snapshot plugin is
used for provisioning VolumeSnapshots. This field must be specified.

### DeletionPolicy

Volume snapshot classes have a deletionPolicy. It enables you to configure what
happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to
is to be deleted. The deletionPolicy of a volume snapshot class can either be
`Retain` or `Delete`. This field must be specified.

If the deletionPolicy is `Delete`, then the underlying storage snapshot will be
deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`,
then both the underlying snapshot and VolumeSnapshotContent remain.
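As a sketch of the `Retain` choice, a class might look like the following; the class
name and driver here are illustrative assumptions, not values from this document:

```yaml
# Hypothetical VolumeSnapshotClass whose snapshots outlive their
# VolumeSnapshot objects: deleting a bound VolumeSnapshot leaves the
# underlying storage snapshot and the VolumeSnapshotContent in place.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: retained-snapclass   # illustrative name
driver: hostpath.csi.k8s.io  # illustrative driver
deletionPolicy: Retain
```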

## Parameters

Volume snapshot classes have parameters that describe volume snapshots belonging to
the volume snapshot class. Different parameters may be accepted depending on the
`driver`.

