Update information about changing Fluentd persistence
kkujawa-sumo committed Jan 11, 2021
1 parent e6f56c9 commit aae5cfd
Showing 5 changed files with 327 additions and 4 deletions.
5 changes: 4 additions & 1 deletion deploy/docs/Best_Practices.md
@@ -146,10 +146,13 @@ key as follows:
  fluentd:
    ## Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer.
    persistence:
-     ## After setting the value to true, run the helm upgrade command with the --force flag.
+     ## After changing this value please follow steps described in:
+     ## https://github.com/SumoLogic/sumologic-kubernetes-collection/tree/main/deploy/docs/FluentdPersistence.md
      enabled: true
```

After changing the Fluentd persistence setting (enabling or disabling it), follow the steps described in [Fluentd Persistence](FluentdPersistence.md).

Additional buffering and flushing parameters can be added in the `extraConf`,
in the `fluentd` buffer section.
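
As a hypothetical illustration of such tuning (the parameter names are standard Fluentd buffer options, but the exact `extraConf` location under the `fluentd` key is an assumption about this chart's values layout):

```yaml
fluentd:
  buffer:
    ## Hypothetical values; see Fluentd's buffer plugin documentation
    ## for the full parameter list and defaults.
    extraConf: |-
      chunk_limit_size 8m
      flush_interval 5s
      retry_max_times 10
```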

292 changes: 292 additions & 0 deletions deploy/docs/FluentdPersistence.md
@@ -0,0 +1,292 @@
# Fluentd persistence

Starting with `v2.0.0`, Fluentd uses a file-based buffer instead of the less
reliable in-memory buffer.

The buffer configuration can be set in the `values.yaml` file under the `fluentd`
key as follows:

```yaml
fluentd:
  persistence:
    enabled: true
```

When the Fluentd persistence setting is changed (persistence is enabled or disabled), the existing Fluentd StatefulSets must be recreated or deleted,
because a StatefulSet's `volumeClaimTemplates` cannot be modified in place.

**Note:** The commands below use `yq` in version `3.4.0` <= `x` < `4.0.0`.
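
Since the `yq w`/`yq d` syntax used below was removed in `yq` 4.x, it may be worth verifying the installed version first. A minimal sketch (the `yq version 3.4.1` output format is an assumption about how `yq` 3.x reports its version):

```shell
# Extract the major version number from the output of `yq --version`,
# so the caller can abort if it is not a 3.x release.
yq_major_version() {
  # $1: output of `yq --version`; prints the first number found, i.e. the major version
  printf '%s\n' "$1" | grep -oE '[0-9]+' | head -n 1
}

yq_major_version "yq version 3.4.1"   # prints 3
```

On a workstation you would call `yq_major_version "$(yq --version 2>&1)"` and stop if the result is not `3`.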

## Enabling Fluentd persistence

To enable Fluentd persistence in an existing collection, modify the `values.yaml` file under the `fluentd`
key as follows:

```yaml
fluentd:
  persistence:
    enabled: true
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
    ##   GKE, Azure & OpenStack)
    ##
    # storageClass: "-"
    # annotations: {}
    accessMode: ReadWriteOnce
    size: 10Gi
```

Use one of the following two strategies to prepare the existing collection for enabling Fluentd persistence:

- ### Enabling Fluentd persistence by recreating Fluentd StatefulSet

Recreating the Fluentd StatefulSets with a new `volumeClaimTemplate` may cause logs and metrics
to be unavailable during the recreation. It usually takes no more than several seconds.

To recreate the Fluentd StatefulSets with the new `volumeClaimTemplate`, run
the following commands for each Fluentd StatefulSet:

```bash
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace <NAMESPACE-NAME> get statefulset <RELEASE-NAME>-sumologic-fluentd-logs --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace <NAMESPACE-NAME> --force --filename -
```
```bash
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace <NAMESPACE-NAME> get statefulset <RELEASE-NAME>-sumologic-fluentd-metrics --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace <NAMESPACE-NAME> --force --filename -
```
```bash
VOLUME_CLAIM_TEMPLATE=$(cat <<-"EOF"
metadata:
  name: buffer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
) && \
BUFFER_VOLUME=$(cat <<-"EOF"
mountPath: /fluentd/buffer
name: buffer
EOF
) && \
kubectl --namespace <NAMESPACE-NAME> get statefulset <RELEASE-NAME>-sumologic-fluentd-events --output yaml | \
yq w - "spec.volumeClaimTemplates[+]" --from <(echo "${VOLUME_CLAIM_TEMPLATE}") | \
yq w - "spec.template.spec.containers[0].volumeMounts[+]" --from <(echo "${BUFFER_VOLUME}") | \
kubectl apply --namespace <NAMESPACE-NAME> --force --filename -
```
Remember to adjust the `volumeClaimTemplate` (the `VOLUME_CLAIM_TEMPLATE` variable in the commands above),
which will be added to `volumeClaimTemplates` in the StatefulSet `spec`, according to your needs.
For details, please check `PersistentVolumeClaim` in the Kubernetes API specification.

**Notice:** When StatefulSets managed by Helm are modified by the commands above,
you may see a warning similar to the one below:

`Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`
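
As a concrete illustration, the claim template could be adjusted to request a larger volume from a specific storage class (both values below are hypothetical):

```yaml
metadata:
  name: buffer
spec:
  ## Assumption: a "gp2" StorageClass exists in the cluster.
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```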
- ### Enabling Fluentd persistence by preparing temporary instances of Fluentd and removing the old ones

To create temporary instances of the Fluentd StatefulSets and avoid an interruption in logs and metrics collection, run:
```bash
kubectl get statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-logs --output yaml | \
yq w - "metadata.name" tmp-<RELEASE-NAME>-sumologic-fluentd-logs | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
kubectl get statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-metrics --output yaml | \
yq w - "metadata.name" tmp-<RELEASE-NAME>-sumologic-fluentd-metrics | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
kubectl get statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-events --output yaml | \
yq w - "metadata.name" tmp-<RELEASE-NAME>-sumologic-fluentd-events | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
Delete old instances of Fluentd StatefulSets:
```bash
kubectl wait --for=condition=ready pod \
--namespace <NAMESPACE-NAME> \
--selector "release==<RELEASE-NAME>,heritage=tmp" && \
kubectl delete statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-events && \
kubectl delete statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-logs && \
kubectl delete statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-metrics
```
**Notice:** After the collection upgrade is done, remove the temporary Fluentd
StatefulSets by running the following command:
```bash
kubectl wait --for=condition=ready pod \
--namespace <NAMESPACE-NAME> \
--selector "release==<RELEASE-NAME>,heritage=Helm" && \
kubectl delete statefulset \
--namespace <NAMESPACE-NAME> \
--selector "release==<RELEASE-NAME>,heritage=tmp"
```
Upgrade the collection with Fluentd persistence enabled, e.g.:
```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```
## Disabling Fluentd persistence

To disable Fluentd persistence in an existing collection, modify the `values.yaml` file under the `fluentd`
key as follows:
```yaml
fluentd:
  persistence:
    enabled: false
```
Use one of the following two strategies to prepare the existing collection for disabling Fluentd persistence:

- ### Disabling Fluentd persistence by recreating Fluentd StatefulSet

Recreating the Fluentd StatefulSets without a `volumeClaimTemplate` may cause logs and metrics
to be unavailable during the recreation. It usually takes no more than several seconds.

To recreate the Fluentd StatefulSets without a `volumeClaimTemplate`, run
the following commands for each Fluentd StatefulSet:
```bash
kubectl --namespace <NAMESPACE-NAME> get statefulset <RELEASE-NAME>-sumologic-fluentd-logs --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace <NAMESPACE-NAME> --force --filename -
```
```bash
kubectl --namespace <NAMESPACE-NAME> get statefulset <RELEASE-NAME>-sumologic-fluentd-metrics --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace <NAMESPACE-NAME> --force --filename -
```
```bash
kubectl --namespace <NAMESPACE-NAME> get statefulset <RELEASE-NAME>-sumologic-fluentd-events --output yaml | \
yq d - "spec.template.spec.containers[*].volumeMounts(name==buffer)" | \
yq d - "spec.volumeClaimTemplates(metadata.name==buffer)" | \
kubectl apply --namespace <NAMESPACE-NAME> --force --filename -
```
**Notice:** When StatefulSets managed by Helm are modified by the commands above,
you may see a warning similar to the one below:

`Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`

- ### Disabling Fluentd persistence by preparing temporary instances of Fluentd and removing the old ones

To create temporary instances of the Fluentd StatefulSets and avoid an interruption in logs and metrics collection, run:
```bash
kubectl get statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-logs --output yaml | \
yq w - "metadata.name" tmp-<RELEASE-NAME>-sumologic-fluentd-logs | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
kubectl get statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-metrics --output yaml | \
yq w - "metadata.name" tmp-<RELEASE-NAME>-sumologic-fluentd-metrics | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
```bash
kubectl get statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-events --output yaml | \
yq w - "metadata.name" tmp-<RELEASE-NAME>-sumologic-fluentd-events | \
yq w - "metadata.labels[heritage]" "tmp" | \
yq w - "spec.template.metadata.labels[heritage]" "tmp" | \
yq w - "spec.selector.matchLabels[heritage]" "tmp" | \
kubectl create --filename -
```
Delete old instances of Fluentd StatefulSets:
```bash
kubectl wait --for=condition=ready pod \
--namespace <NAMESPACE-NAME> \
--selector "release==<RELEASE-NAME>,heritage=tmp" && \
kubectl delete statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-events && \
kubectl delete statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-logs && \
kubectl delete statefulset --namespace <NAMESPACE-NAME> <RELEASE-NAME>-sumologic-fluentd-metrics
```
**Notice:** After the collection upgrade is done, remove the temporary Fluentd
StatefulSets by running the following command:
```bash
kubectl wait --for=condition=ready pod \
--namespace <NAMESPACE-NAME> \
--selector "release==<RELEASE-NAME>,heritage=Helm" && \
kubectl delete statefulset \
--namespace <NAMESPACE-NAME> \
--selector "release==<RELEASE-NAME>,heritage=tmp"
```
Upgrade the collection with Fluentd persistence disabled, e.g.:
```bash
helm upgrade <RELEASE-NAME> sumologic/sumologic --version=<VERSION> -f <VALUES>
```
29 changes: 28 additions & 1 deletion deploy/docs/v2_migration_doc.md
@@ -93,6 +93,8 @@ as well as the exact steps for migration.
app.kubernetes.io/instance: <RELEASE-NAME>
```

- Persistence for Fluentd is enabled by default.

## How to upgrade

### Requirements
@@ -260,7 +262,32 @@ One of the following two strategies can be used:
--selector "app=fluent-bit,release=<RELEASE-NAME>,heritage=tmp"
```
-### 4. Run upgrade script
+### 4. Configure Fluentd persistence
Starting with `v2.0.0`, Fluentd uses a file-based buffer instead of the less
reliable in-memory buffer (`fluentd.persistence.enabled=true`).

If Fluentd persistence is already enabled in the collection being upgraded, no action is required.

If Fluentd persistence is disabled (the default setting in the `1.3.5` release),
either go through the persistence enabling procedure before the upgrade (recommended)
or preserve the existing setting and override the new default for Fluentd persistence in the `2.0.0` release.

**In order to enable persistence in an existing collection**, please follow one of the persistence enabling procedures described in the
[Enabling Fluentd Persistence](FluentdPersistence.md#enabling-fluentd-persistence) guide before the upgrade.

If Fluentd persistence is disabled in the existing collection and you want to preserve this setting,
override the new default and disable persistence either by adding `--set fluentd.persistence.enabled=false` to the `helm upgrade` command or
in the `values.yaml` file under the `fluentd` key as follows:

```yaml
fluentd:
  persistence:
    enabled: false
```
### 5. Run upgrade script
For Helm users, the only breaking changes are the renamed config parameters.
For users who use a `values.yaml` file, we provide a script that users can run
Expand Down
2 changes: 1 addition & 1 deletion deploy/helm/sumologic/README.md
@@ -53,7 +53,7 @@ Parameter | Description | Default
`fluentd.podLabels` | Additional labels for all fluentd pods | `{}`
`fluentd.podAnnotations` | Additional annotations for all fluentd pods | `{}`
`fluentd.podSecurityPolicy.create` | If true, create & use `podSecurityPolicy` for fluentd resources | `false`
-`fluentd.persistence.enabled` | Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer. After setting the value to true, run the helm upgrade command with the --force flag. | `false`
+`fluentd.persistence.enabled` | Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer. After changing this value follow steps described in [Fluentd Persistence](FluentdPersistence.md). | `true`
`fluentd.persistence.storageClass` | If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner. (gp2 on AWS, standard on GKE, Azure & OpenStack) | `Nil`
`fluentd.persistence.annotations` | Annotations for the persistence. | `Nil`
`fluentd.persistence.accessMode` | The accessMode for persistence. | `ReadWriteOnce`
3 changes: 2 additions & 1 deletion deploy/helm/sumologic/values.yaml
@@ -233,7 +233,8 @@ fluentd:

  ## Persist data to a persistent volume; When enabled, fluentd uses the file buffer instead of memory buffer.
  persistence:
-   ## After setting the value to true, run the helm upgrade command with the --force flag.
+   ## After changing this value please follow steps described in:
+   ## https://github.com/SumoLogic/sumologic-kubernetes-collection/tree/main/deploy/docs/FluentdPersistence.md
    enabled: true

## If defined, storageClassName: <storageClass>
