Fix #1322, update information about changing Fluentd persistence
kkujawa-sumo committed Jan 13, 2021
1 parent bb91143 commit a0d98c0
Showing 4 changed files with 363 additions and 9 deletions.
12 changes: 5 additions & 7 deletions deploy/docs/Best_Practices.md
@@ -108,11 +108,13 @@ The buffer configuration can be set in the `values.yaml` file under the `fluentd
fluentd:
  ## Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer.
  persistence:
    ## After setting the value to true, run the helm upgrade command with the --force flag.
    ## After changing this value please follow steps described in:
    ## https://github.com/SumoLogic/sumologic-kubernetes-collection/blob/release-v1.3/deploy/docs/FluentdPersistence.md
    enabled: true
```

Additional buffering and flushing parameters can be added in the `extraConf`, in the `fluentd` buffer section.

```yaml
fluentd:
  ## Option to specify the Fluentd buffer as file/memory.
```

@@ -124,13 +126,9 @@ fluentd:

We have defined several file paths where the buffer chunks are stored.

Once the config has been modified in the `values.yaml` file you need to run the `helm upgrade` command to apply the changes.

```bash
$ helm upgrade collection sumologic/sumologic --reuse-values -f values.yaml --force
```
After changing the Fluentd persistence setting (enable or disable), follow the steps described in [Fluentd Persistence](FluentdPersistence.md).

See the following links to official Fluentd buffer documentation:
- https://docs.fluentd.org/configuration/buffer-section
- https://docs.fluentd.org/buffer/file
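
For example, additional flushing parameters could be set through `extraConf` — a hypothetical sketch (check the chart's `values.yaml` for the exact key layout; the parameter names come from the Fluentd buffer documentation linked above):

```yaml
fluentd:
  buffer:
    ## Hypothetical illustration of raw Fluentd buffer parameters appended via extraConf
    extraConf: |-
      flush_interval 5s
      chunk_limit_size 8m
      total_limit_size 10g
      retry_max_interval 10m
```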

355 changes: 355 additions & 0 deletions deploy/docs/FluentdPersistence.md
@@ -0,0 +1,355 @@
# Fluentd persistence

Changing the Fluentd persistence setting (enabling or disabling it) requires recreating or deleting
the existing Fluentd StatefulSets, because it is not possible to add or remove a `volumeClaimTemplate`
on an existing StatefulSet.

**Note:** The commands below use `yq` in version `3.4.0` <= `x` < `4.0.0`.
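
If you are unsure which version you have installed, check it first:

```bash
# Should print a version in the 3.x line, e.g. "yq version 3.4.1"
yq --version
```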

## Enabling Fluentd persistence

To enable Fluentd persistence, modify the `values.yaml` file under the `fluentd` key as follows:

```yaml
fluentd:
  persistence:
    enabled: true
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ## set, choosing the default provisioner (gp2 on AWS, standard on
    ## GKE, Azure & OpenStack)
    ##
    # storageClass: "-"
    # annotations: {}
    accessMode: ReadWriteOnce
    size: 10Gi
```
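
Before touching the cluster, you can preview the resulting StatefulSet spec by rendering the chart locally — a sketch assuming Helm 3 and an illustrative values file name:

```bash
# Render the chart offline and inspect the generated claim template
helm template collection sumologic/sumologic -f values.yaml \
  | grep -A 8 "volumeClaimTemplates"
```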

Use one of the following two strategies to prepare an existing collection for enabling Fluentd persistence:

- ### Enabling Fluentd persistence by recreating Fluentd StatefulSet

In heavily used clusters with a high volume of logs and metrics, recreating the Fluentd StatefulSets
with a new `volumeClaimTemplate` may make logs and metrics unavailable while the recreation is in progress.
It usually shouldn't take more than several seconds.

To recreate the Fluentd StatefulSets with a new `volumeClaimTemplate`, run
the following commands for all Fluentd StatefulSets.

Remember to adjust the `volumeClaimTemplate` (the `VOLUME_CLAIM_TEMPLATE` variable in the commands below),
which will be added to `volumeClaimTemplates` in the StatefulSet `spec`, according to your needs.
For details, please check `PersistentVolumeClaim` in the Kubernetes API specification.
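
For example, a claim template pinned to a specific storage class might look like this (the `gp2` class name and `20Gi` size are illustrative):

```yaml
metadata:
  name: buffer
spec:
  storageClassName: gp2  # illustrative; use a storage class available in your cluster
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi  # size the buffer according to your ingest volume
```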

Also remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```
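
Since the three commands differ only in the StatefulSet name suffix, the same patch can also be applied in a loop — a minimal sketch, assuming the `VOLUME_CLAIM_TEMPLATE` and `BUFFER_VOLUME` variables are defined as above:

```bash
NAMESPACE=sumologic
RELEASE_NAME=collection
# Apply the same claim template and volume mount to all three Fluentd StatefulSets
for SUFFIX in logs metrics events; do
  kubectl --namespace "${NAMESPACE}" get statefulset "${RELEASE_NAME}-sumologic-fluentd-${SUFFIX}" --output yaml | \
    yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
    yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
    kubectl apply --namespace "${NAMESPACE}" --force --filename -
done
```
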
**Notice:** When StatefulSets managed by Helm are modified by the commands above,
you may see a warning similar to this one:

`Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`

Upgrade the collection with Fluentd persistence enabled, e.g.:

```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```

- ### Enabling Fluentd persistence by preparing temporary instances of Fluentd and removing the earlier created ones

To create temporary instances of the Fluentd StatefulSets and avoid losing logs or metrics, run the following commands.

Remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.

```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-logs | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-metrics | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-events | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
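
Equivalently, the three temporary StatefulSets can be created in one loop — a sketch using the same names and labels as above:

```bash
NAMESPACE=sumologic
RELEASE_NAME=collection
# Clone each StatefulSet under a tmp- name with heritage=tmp labels
for SUFFIX in logs metrics events; do
  kubectl get statefulset --namespace "${NAMESPACE}" "${RELEASE_NAME}-sumologic-fluentd-${SUFFIX}" --output yaml | \
    yq w - "metadata.name" "tmp-${RELEASE_NAME}-sumologic-fluentd-${SUFFIX}" | \
    yq w - "metadata.labels[heritage]" "tmp" | \
    yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
    yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
    kubectl create --filename -
done
```
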
Delete old instances of Fluentd StatefulSets:
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp" && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics
```
Upgrade the collection with Fluentd persistence enabled, e.g.:
```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```
**Notice:** After the Helm chart upgrade is done, run the following command to remove the temporary Fluentd StatefulSets:
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=Helm" && \
kubectl delete statefulset \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp"
```
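
You can verify that the buffer volumes were provisioned by listing the claims — the `buffer-` prefix comes from the claim template name used above:

```bash
# Each Fluentd pod should now own one claim named buffer-<pod-name>
kubectl get pvc --namespace ${NAMESPACE} | grep buffer
```
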
## Disabling Fluentd persistence
To disable Fluentd persistence in an existing collection, modify the `values.yaml` file under the `fluentd`
key as follows:
```yaml
fluentd:
  persistence:
    enabled: false
```
Use one of the following two strategies to prepare an existing collection for disabling Fluentd persistence:
- ### Disabling Fluentd persistence by recreating Fluentd StatefulSet
In heavily used clusters with a high volume of logs and metrics, recreating the Fluentd StatefulSets
without the `volumeClaimTemplate` may make logs and metrics unavailable while the recreation is in progress.
It usually shouldn't take more than several seconds.

To recreate the Fluentd StatefulSets without the `volumeClaimTemplate`, run
the following commands for all Fluentd StatefulSets.

Remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl --namespace ${NAMESPACE} get statefulset ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace ${NAMESPACE} --force --filename -
```
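
As with enabling, the three commands can be collapsed into one loop — a minimal sketch:

```bash
NAMESPACE=sumologic
RELEASE_NAME=collection
# Strip the buffer volume mount and claim template from all three StatefulSets
for SUFFIX in logs metrics events; do
  kubectl --namespace "${NAMESPACE}" get statefulset "${RELEASE_NAME}-sumologic-fluentd-${SUFFIX}" --output yaml | \
    yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
    yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
    kubectl apply --namespace "${NAMESPACE}" --force --filename -
done
```
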
**Notice:** When StatefulSets managed by Helm are modified by the commands above,
you may see a warning similar to this one:

`Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`

Upgrade the collection with Fluentd persistence disabled, e.g.:

```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```

**Notice:** After the Helm chart upgrade is done, you need to remove the remaining `PersistentVolumeClaims`,
which are no longer used by the Fluentd StatefulSets.

To remove the remaining `PersistentVolumeClaims`:
```bash
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-logs
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-metrics
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-events
```
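
If you want to double-check which claims a selector matches before deleting anything, list them first, e.g. for the logs StatefulSet:

```bash
kubectl get pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-logs
```
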
- ### Disabling Fluentd persistence by preparing temporary instances of Fluentd and removing the earlier created ones

To create temporary instances of the Fluentd StatefulSets and avoid losing logs or metrics, run the following commands.

Remember to replace the `NAMESPACE` and `RELEASE_NAME` variables with proper values.
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-logs | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-metrics | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl get statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events --output yaml | \
yq w - "metadata.name" tmp-${RELEASE_NAME}-sumologic-fluentd-events | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
Delete old instances of Fluentd StatefulSets:
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp" && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-events && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-logs && \
kubectl delete statefulset --namespace ${NAMESPACE} ${RELEASE_NAME}-sumologic-fluentd-metrics
```
Upgrade the collection with Fluentd persistence disabled, e.g.:
```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```
**Notice:** After the Helm chart upgrade is done, you need to remove the temporary Fluentd StatefulSets
and the remaining `PersistentVolumeClaims`, which are no longer used by the Fluentd StatefulSets.

To remove the temporary Fluentd StatefulSets:
```bash
NAMESPACE=sumologic && \
RELEASE_NAME=collection && \
kubectl wait --for=condition=ready pod \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=Helm" && \
kubectl delete statefulset \
--namespace ${NAMESPACE} \
--selector "release==${RELEASE_NAME},heritage=tmp"
```
To remove remaining `PersistentVolumeClaims`:
```bash
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-logs
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-metrics
kubectl delete pvc --namespace ${NAMESPACE} --selector app=${RELEASE_NAME}-sumologic-fluentd-events
```
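
As a final sanity check, the following should produce no output once all buffer claims are gone:

```bash
# No remaining buffer-<pod-name> claims means the cleanup is complete
kubectl get pvc --namespace ${NAMESPACE} | grep buffer
```
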
2 changes: 1 addition & 1 deletion deploy/helm/sumologic/README.md
@@ -45,7 +45,7 @@ Parameter | Description | Default
`fluentd.podLabels` | Additional labels for all fluentd pods | `{}`
`fluentd.podAnnotations` | Additional annotations for all fluentd pods | `{}`
`fluentd.podSecurityPolicy.create` | If true, create & use `podSecurityPolicy` for fluentd resources | `false`
`fluentd.persistence.enabled` | Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer. After setting the value to true, run the helm upgrade command with the --force flag. | `false`
`fluentd.persistence.enabled` | Persist data to a persistent volume; when enabled, fluentd uses the file buffer instead of the memory buffer. After changing this value, follow the steps described in [Fluentd Persistence](FluentdPersistence.md). | `false`
`fluentd.persistence.storageClass` | If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner. (gp2 on AWS, standard on GKE, Azure & OpenStack) | `Nil`
`fluentd.persistence.annotations` | Annotations for the persistence. | `Nil`
`fluentd.persistence.accessMode` | The accessMode for persistence. | `ReadWriteOnce`
