From 4a060b63ff1a5ad1ee85e192be367877d5a185b9 Mon Sep 17 00:00:00 2001 From: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com> Date: Mon, 23 May 2022 07:13:01 +0200 Subject: [PATCH] [a11y] Fix "above" occurrences (#5672) * Fix above occurrences * edits * edits * Update docs/orchestrating-elastic-stack-applications/elasticsearch/remote-clusters.asciidoc Co-authored-by: Thibault Richard * Update docs/advanced-topics/network-policies.asciidoc Co-authored-by: Thibault Richard Co-authored-by: Thibault Richard --- docs/advanced-topics/network-policies.asciidoc | 2 +- docs/advanced-topics/openshift.asciidoc | 2 +- .../0002-global-operator/0002-global-operator.md | 2 +- docs/design/0003-associations/0003-associations.md | 2 +- docs/design/0007-local-volume-total-capacity.md | 4 ++-- docs/design/0009-pod-reuse-es-restart.md | 14 +++++++------- docs/design/0010-license-checks.md | 4 ++-- docs/design/0011-process-manager.md | 4 ++-- docs/operating-eck/installing-eck.asciidoc | 2 +- docs/operating-eck/operator-config.asciidoc | 4 ++-- .../restrict-cross-namespace-associations.asciidoc | 2 +- .../troubleshooting/common-problems.asciidoc | 2 +- .../troubleshooting-methods.asciidoc | 2 +- docs/operating-eck/webhook.asciidoc | 8 ++++---- .../elasticsearch/orchestration.asciidoc | 2 +- .../elasticsearch/remote-clusters.asciidoc | 6 +++--- .../elasticsearch/snapshots.asciidoc | 2 +- .../elasticsearch/update-strategy.asciidoc | 2 +- .../managing-compute-resources.asciidoc | 2 +- .../security/rotate-credentials.asciidoc | 4 ++-- docs/release-notes/highlights-1.9.0.asciidoc | 2 +- docs/release-notes/highlights-1.9.1.asciidoc | 2 +- 22 files changed, 38 insertions(+), 38 deletions(-) diff --git a/docs/advanced-topics/network-policies.asciidoc b/docs/advanced-topics/network-policies.asciidoc index a544887aa8..7fd2e7a27a 100644 --- a/docs/advanced-topics/network-policies.asciidoc +++ b/docs/advanced-topics/network-policies.asciidoc @@ -55,7 +55,7 @@ The minimal set of permissions required are as follows: |=== -Assuming that the Kubernetes API server IP address is `10.0.0.1`, the following network policy implements the rules above. +Assuming that the Kubernetes API server IP address is `10.0.0.1`, the following network policy implements this minimal set of permissions. NOTE: Run `kubectl cluster-info | grep master` to obtain the API server IP address for your cluster. diff --git a/docs/advanced-topics/openshift.asciidoc b/docs/advanced-topics/openshift.asciidoc index 120d45d7e0..cc0a87dc1c 100644 --- a/docs/advanced-topics/openshift.asciidoc +++ b/docs/advanced-topics/openshift.asciidoc @@ -64,7 +64,7 @@ oc new-project elastic # creates the elastic project oc adm policy add-role-to-user elastic-operator developer -n elastic ---- + -In the example above the user `developer` is allowed to manage Elastic resources in the namespace `elastic`. +In this example the user `developer` is allowed to manage Elastic resources in the namespace `elastic`. 
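A quick way to verify that the role binding works could be an access review, for example (a sketch; the CRD resource name assumes a standard ECK installation):

[source,sh]
----
# Illustrative check: can user "developer" create Elasticsearch resources in the "elastic" namespace?
oc auth can-i create elasticsearches.elasticsearch.k8s.elastic.co --as developer -n elastic
----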
[id="{p}-openshift-deploy-elasticsearch"] == Deploy an Elasticsearch instance with a route diff --git a/docs/design/0002-global-operator/0002-global-operator.md b/docs/design/0002-global-operator/0002-global-operator.md index 54f19fb811..1166359b81 100644 --- a/docs/design/0002-global-operator/0002-global-operator.md +++ b/docs/design/0002-global-operator/0002-global-operator.md @@ -54,7 +54,7 @@ Examples of the "placeholder" controllers (additional controllers) that would be ### Hybrid approach -Allow for a hybrid approach where it is possible to enable the components of both operators (global and namespaced) in a single operator in order to simplify small-scale deployments, or vice-versa, where the global operator takes on all responsibilities of the namespaced operator in addition to the installation wide ones. This addresses the main concern above with the drawback that it might not be identical to a production-style deployment. +Allow for a hybrid approach where it is possible to enable the components of both operators (global and namespaced) in a single operator in order to simplify small-scale deployments, or vice-versa, where the global operator takes on all responsibilities of the namespaced operator in addition to the installation wide ones. This addresses the main concern with the drawback that it might not be identical to a production-style deployment. ## Decision Outcome diff --git a/docs/design/0003-associations/0003-associations.md b/docs/design/0003-associations/0003-associations.md index 17a6b000b7..98c1fce026 100644 --- a/docs/design/0003-associations/0003-associations.md +++ b/docs/design/0003-associations/0003-associations.md @@ -233,7 +233,7 @@ spec: snapshot-provider: my-provider ``` - Using the above provider we can stamp out instances of the following resource for each matched cluster, and we can potentially keep it up to date with changes to the provider config as well. + Using the `SnapshotRepositoryProvider` we can stamp out instances of the following resource for each matched cluster, and we can potentially keep it up to date with changes to the provider config as well. ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1alpha1 diff --git a/docs/design/0007-local-volume-total-capacity.md b/docs/design/0007-local-volume-total-capacity.md index f14ba9abb8..fb58bb5f40 100644 --- a/docs/design/0007-local-volume-total-capacity.md +++ b/docs/design/0007-local-volume-total-capacity.md @@ -51,11 +51,11 @@ On startup, the node volume provisioner inspects the available disk space (for e #### PVC/PV binding -When a PVC with storage class `elastic-local` is created, kubernetes PersistentVolume controller will automatically bind the PersistentVolume created above to this PVC (or the PVC created by another node). Storage capacity is taken into consideration here: if the PV spec specifies a 10TB capacity, it will not be bound to a PVC claiming 20TB. However, a 1GB PVC can still be bound to our 10TB PV. Effectively wasting our disk space here. +When a PVC with storage class `elastic-local` is created, the Kubernetes PersistentVolume controller will automatically bind the PersistentVolume created by the node volume provisioner to this PVC (or the PVC created by another node). Storage capacity is taken into consideration here: if the PV spec specifies a 10TB capacity, it will not be bound to a PVC claiming 20TB. However, a 1GB PVC can still be bound to our 10TB PV. Effectively wasting our disk space here. So how do we avoid wasting disk space in this scenario? 
As soon as the PV is bound to a PVC, our node volume provisioner gets notified (it's watching PVs it created). By retrieving both PV and matching PVC, it notices the PVC requests only 1GB out of the 10TB available. As a result, it updates the PersistentVolume spec to match those 1GB. The PV stays bound to the same PVC, even though its capacity was changed. The actual volume corresponding to this PV can then be created by the driver running on the node, as done in the current implementation. -We are left with 9.999TB available on the node: the node volume provisioner creates a new PersistentVolume with capacity 9.999TB, that can be bound to any PVC by the kubernetes PersistentVolume controller. If any PV gets deleted, the node volume provisioner reclaims the disk space freed by updating the PV capacity. For instance if the 1GB pod from the example above is deleted, the 9.999TB PV can be updated to 10TB. +We are left with 9.999TB available on the node: the node volume provisioner creates a new PersistentVolume with capacity 9.999TB, that can be bound to any PVC by the kubernetes PersistentVolume controller. If any PV gets deleted, the node volume provisioner reclaims the disk space freed by updating the PV capacity. For example, if the 1GB pod is deleted, the 9.999TB PV can be updated to 10TB. To summarize: diff --git a/docs/design/0009-pod-reuse-es-restart.md b/docs/design/0009-pod-reuse-es-restart.md index 40be6915a7..0ea00ea2d1 100644 --- a/docs/design/0009-pod-reuse-es-restart.md +++ b/docs/design/0009-pod-reuse-es-restart.md @@ -141,7 +141,7 @@ Note: this does not represent the _entire_ reconciliation loop, it focuses on th * Check if ES is stopped (`GET /es/status`), or re-queue * Annotate pod with `restart-phase: start` * If annotated with `stop-coordinated` - * Apply the same steps as above, but: + * Apply the same steps as for the `stop` annotation, but: * wait for all ES processes to be stopped instead of only the current pod one * annotate pod with `restart-phase: start-coordinated` * Handle pods in a `start` phase @@ -154,16 +154,16 @@ Note: this does not represent the _entire_ reconciliation loop, it focuses on th * Enable shards allocations * Remove the `restart-phase` annotation from the pod * If annotated with `start-coordinated`: - * Perform the same steps as above, but wait until *all* ES processes are started before enabling shards allocations + * Perform the same steps as for the `start` annotation, but wait until *all* ES processes are started before enabling shards allocations * Garbage-collect useless resources * Configuration secrets that do not match any pod and existed for more than for ex. 15min are safe to delete #### State machine specifics -It is important in the algorithm outlined above that: +In the reconciliation loop algorithm, it is important to note that: -* any step in a given phase is idempotent. For instance, it should be ok to run steps of the `stop` phase over and over again. -* transition to the next step is resilient to stale cache. If a pod is annotated with the `start` phase, it should be ok to perform all steps of the `stop` phase again (no-op). However the cache cannot go back in time: once we reach the `start` phase we must not perform the `stop` phase at the next iteration. Our apiserver and cache implementation consistency model guarantee this behaviour. +* Any step in a given phase is idempotent. For instance, it should be OK to run steps of the `stop` phase over and over again. +* Transition to the next step is resilient to stale cache. 
If a pod is annotated with the `start` phase, it should be OK to perform all steps of the `stop` phase again (no-op). However, the cache cannot go back in time: once we reach the `start` phase we must not perform the `stop` phase at the next iteration. The consistency model of our apiserver and cache implementation guarantees this behaviour. * the operator can restart at any point: on restart it should get back to the current phase. * a pod that should be reused will be reflected in the results of the comparison algorithm. However, once its configuration has been updated (but before it is actually restarted), it might not be reflected anymore. The comparison would then be based on the "new" configuration (not yet applied to the ES process), and the pod would require no change. That's OK: the ES process will still eventually be restarted with this correct new configuration, since annotated in the `start` phase. * if a pod is no longer requested for reuse (for ex. user changed their mind and reverted ES spec to the previous version) but is in the middle of a restart process, it will still go through that restart process. Depending on when the user reverted back the ES spec, compared to the pod current phase in the state machine: @@ -181,10 +181,10 @@ It is important in the algorithm outlined above that: #### Extensions to other use cases (TBD if worth implementing) -* We **don't** need rolling restarts in the context of TLS and license switch, but it seems easy to implement in the algorithm outlined above, to cover other use cases. +* We **don't** need rolling restarts in the context of TLS and license switch, but it seems easy to implement in the reconciliation loop algorithm, to cover other use cases. * We could catch any `restart-phase: schedule-rolling` set by the user on the Elasticsearch resource, and apply it to all pods of this cluster. This would allow the user to request a cluster restart himself. The user can also apply the annotation to the pods directly: this is the operator "restart API" (a sketch follows this list). * Applying the same annotations mechanism with something such as `stop: true` could allow us (or the user) to stop a particular node that misbehaves. -* Out of scope: it should be possible to adapt the algorithm above to replace a "pod reuse" by a "persistent volume reuse". Pods eligible for reuse at the end of the comparison are also eligible for persistent volume reuse. In such case, we'd need to stop the entire pod instead of stopping the ES process. The new pod would be created at the next reconciliation iteration, with a new config, but would reuse one of the available persistent volumes out there. The choice between pod reuse or PV reuse could be specified in the ES resource spec? +* Out of scope: it should be possible to adapt the reconciliation loop algorithm to replace a "pod reuse" with a "persistent volume reuse". Pods eligible for reuse at the end of the comparison are also eligible for persistent volume reuse. In such a case, we'd need to stop the entire pod instead of stopping the ES process. The new pod would be created at the next reconciliation iteration, with a new config, but would reuse one of the available persistent volumes out there. The choice between pod reuse or PV reuse could be specified in the ES resource spec?
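A sketch of how the annotation-based "restart API" mentioned in this list could be exercised, using the annotation key proposed in this document (illustrative only, not a committed interface; the cluster and Pod names are made up):

```sh
# Ask the operator to schedule a coordinated rolling restart of the whole cluster
kubectl annotate elasticsearch my-cluster restart-phase=schedule-rolling

# Or annotate a single Pod directly to restart only that node
kubectl annotate pod my-cluster-es-0 restart-phase=schedule-rolling
```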
## Decision Outcome diff --git a/docs/design/0010-license-checks.md b/docs/design/0010-license-checks.md index 1fd197d0e9..adee34070a 100644 --- a/docs/design/0010-license-checks.md +++ b/docs/design/0010-license-checks.md @@ -47,7 +47,7 @@ The license controller MUST create controller licenses only when either a valid Enterprise license or a valid Enterprise trial license is present in the system. It CAN issue controller licenses with shorter lifetimes than the Enterprise license and auto-extend them as needed to limit the impact of accidental license leaks. But license leaks -are currently understood to be much less a concern than cluster licenses leaks as controller licenses have no validity +are currently understood to be much less a concern than cluster licenses leaks as controller licenses have no validity outside of the operator installation that has created them. @@ -86,7 +86,7 @@ secret would need to be deployed into the managed namespace not into the control plane namespace. Unless of course we run everything in one namespace anyway or we implement a custom client that has access to the control plane namespace of the namespace -operator (the latter is the underlying assumption for the graph above). +operator (the latter is the underlying assumption for the license controller graph). ### Positive Consequences diff --git a/docs/design/0011-process-manager.md b/docs/design/0011-process-manager.md index 4cb7f0a58c..d392561c8f 100644 --- a/docs/design/0011-process-manager.md +++ b/docs/design/0011-process-manager.md @@ -81,10 +81,10 @@ Where to run the keystore updater? How to perform cluster restart? * Destroy the pod and recreate it * -- depending on storage class we might not be able to recreate the pod where the volume resides. Only recovery at this point is manual restore from snapshot. - Considering volumes local to a node: during the interval between the pod being delete and a new pod being scheduled to reuse the same volume, there is no guarantee + Considering volumes local to a node: during the interval between the pod being deleted and a new pod being scheduled to reuse the same volume, there is no guarantee that no other pod will be scheduled on that node, taking up all resources available on the node, preventing the replacing pod to be scheduled. * Inject a process manager into the standard Elasticsearch container/image - * ++ would allow us to restart without recreating the pod, unless we need to change pod resources or environment variables, in which case the above applies + * ++ would allow us to restart without recreating the pod, unless we need to change pod resources or environment variables, in which case you have to destroy the pod and recreate it * ~ has the disadvantage of being fairly intrusive and complex (copying binaries through initcontainers, overriding command etc) * Use a liveness probe to make Kubernetes restart the container * -- hard to coordinate across an ES cluster diff --git a/docs/operating-eck/installing-eck.asciidoc b/docs/operating-eck/installing-eck.asciidoc index 6f6fa5125c..9eaa058ce6 100644 --- a/docs/operating-eck/installing-eck.asciidoc +++ b/docs/operating-eck/installing-eck.asciidoc @@ -81,7 +81,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na [NOTE] ==== -The `eck-operator` chart contains several pre-defined profiles to help you install the operator in different configurations. These profiles can be found in the root of the chart directory, prefixed with `profile-`. 
For example, the restricted configuration shown above is defined in the `profile-restricted.yaml` file, and can be used as follows: +The `eck-operator` chart contains several pre-defined profiles to help you install the operator in different configurations. These profiles can be found in the root of the chart directory, prefixed with `profile-`. For example, the restricted configuration illustrated in the previous code extract is defined in the `profile-restricted.yaml` file, and can be used as follows: [source,sh] ---- diff --git a/docs/operating-eck/operator-config.asciidoc b/docs/operating-eck/operator-config.asciidoc index fa73f0e67d..280be8c32f 100644 --- a/docs/operating-eck/operator-config.asciidoc +++ b/docs/operating-eck/operator-config.asciidoc @@ -36,7 +36,7 @@ ECK can be configured using either command line flags or environment variables. |metrics-port |0 |Prometheus metrics port. Set to 0 to disable the metrics endpoint. |namespaces |"" |Namespaces in which this operator should manage resources. Accepts multiple comma-separated values. Defaults to all namespaces if empty or unspecified. |operator-namespace |"" |Namespace the operator runs in. Required. -|set-default-security-context |true | Enables adding a default Pod Security Context to Elasticsearch Pods in Elasticsearch `8.0.0` and above. `fsGroup` is set to `1000` by default to match Elasticsearch container default UID. This behavior might not be appropriate for OpenShift and PSP-secured Kubernetes clusters, so it can be disabled. +|set-default-security-context |true | Enables adding a default Pod Security Context to Elasticsearch Pods in Elasticsearch `8.0.0` and later. `fsGroup` is set to `1000` by default to match Elasticsearch container default UID. This behavior might not be appropriate for OpenShift and PSP-secured Kubernetes clusters, so it can be disabled. |ubi-only | false | Use only UBI container images to deploy Elastic Stack applications. UBI images are only available from 7.10.0 onward. |validate-storage-class | true | Specifies whether the operator should retrieve storage classes to verify volume expansion support. Can be disabled if cluster-wide storage class RBAC access is not available. |webhook-cert-dir |"{TempDir}/k8s-webhook-server/serving-certs" |Path to the directory that contains the webhook server key and certificate. @@ -89,7 +89,7 @@ The operator can be started using any of the following methods to achieve the sa LOG_VERBOSITY=2 METRICS_PORT=6060 NAMESPACES="ns1,ns2,ns3" ./elastic-operator manager ---- -If you use a combination of all or some of the methods listed above, the descending order of precedence in case of a conflict is as follows: +If you use a combination of all or some of these methods, the descending order of precedence in case of a conflict is as follows: - Flag - Environment variable diff --git a/docs/operating-eck/restrict-cross-namespace-associations.asciidoc b/docs/operating-eck/restrict-cross-namespace-associations.asciidoc index fda4314eb9..14f4baceb9 100644 --- a/docs/operating-eck/restrict-cross-namespace-associations.asciidoc +++ b/docs/operating-eck/restrict-cross-namespace-associations.asciidoc @@ -78,7 +78,7 @@ spec: serviceAccountName: associated-resource-sa ---- -In the above example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`. +In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`.
You can find link:{eck_github}/blob/{eck_release_branch}/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml[a complete example in the ECK GitHub repository]. NOTE: If the `serviceAccountName` is not set, ECK uses the default service account assigned to the pod by the link:https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller[Service Account Admission Controller]. diff --git a/docs/operating-eck/troubleshooting/common-problems.asciidoc b/docs/operating-eck/troubleshooting/common-problems.asciidoc index 4611d5dee9..32f4b7d231 100644 --- a/docs/operating-eck/troubleshooting/common-problems.asciidoc +++ b/docs/operating-eck/troubleshooting/common-problems.asciidoc @@ -220,4 +220,4 @@ On OpenShift the same workaround can be performed in the UI by clicking on "Unin If you accidentally upgrade one of your Elasticsearch clusters to a version that does not exist or a version to which a direct upgrade is not possible from your currently deployed version, a validation will prevent you from going back to the previous version. The reason for this validation is that ECK will not allow downgrades as this is not supported by Elasticsearch and once the data directory of Elasticsearch has been upgraded there is no way back to the old version without a link:https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html[snapshot restore]. -The two scenarios described above however are exceptions because Elasticsearch never started up successfully. If you annotate the Elasticsearch resource with `eck.k8s.elastic.co/disable-downgrade-validation=true` ECK will allow you to go back to the old version at your own risk. Please remove the annotation afterwards to prevent accidental downgrades and reduced availability. +These two upgrading scenarios, however, are exceptions because Elasticsearch never started up successfully. If you annotate the Elasticsearch resource with `eck.k8s.elastic.co/disable-downgrade-validation=true` ECK allows you to go back to the old version at your own risk. Remove the annotation afterwards to prevent accidental downgrades and reduced availability. 
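For example, assuming a cluster named `quickstart` (illustrative), the annotation can be applied and later removed as follows:

[source,sh]
----
kubectl annotate elasticsearch quickstart eck.k8s.elastic.co/disable-downgrade-validation=true
# After reverting to the previous version, remove the annotation again:
kubectl annotate elasticsearch quickstart eck.k8s.elastic.co/disable-downgrade-validation-
----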
diff --git a/docs/operating-eck/troubleshooting/troubleshooting-methods.asciidoc b/docs/operating-eck/troubleshooting/troubleshooting-methods.asciidoc index 520c5c0350..1d3fcbabcb 100644 --- a/docs/operating-eck/troubleshooting/troubleshooting-methods.asciidoc +++ b/docs/operating-eck/troubleshooting/troubleshooting-methods.asciidoc @@ -20,7 +20,7 @@ Most common issues can be identified and resolved by following these instruction - <<{p}-suspend-elasticsearch>> - <<{p}-capture-jvm-heap-dumps>> -If you are still unable to find a solution to your problem after following the above instructions, ask for help: +If you are still unable to find a solution to your problem, ask for help: include::../../help.asciidoc[] diff --git a/docs/operating-eck/webhook.asciidoc b/docs/operating-eck/webhook.asciidoc index f65620bedf..3443175421 100644 --- a/docs/operating-eck/webhook.asciidoc +++ b/docs/operating-eck/webhook.asciidoc @@ -90,12 +90,12 @@ kubectl create secret -n elastic-system generic elastic-webhook-server-custom-ce - Install the operator with the following options: + * Set `manage-webhook-certs` to `false` -* Set `webhook-secret` to the name of the secret created above (`elastic-webhook-server-custom-cert`) +* Set `webhook-secret` to the name of the secret you have just created (`elastic-webhook-server-custom-cert`) [NOTE] ==== -If you are using the <<{p}-install-helm,Helm chart installation method>>, the above can be accomplished by the following command: +If you are using the <<{p}-install-helm,Helm chart installation method>>, you can install the operator by running this command: [source, sh] ---- @@ -157,7 +157,7 @@ webhooks: [NOTE] ==== -If you are using the <<{p}-install-helm,Helm chart installation method>>, the above can be accomplished by the following command: +If you are using the <<{p}-install-helm,Helm chart installation method>>, you can install the operator by running the following command: [source, sh] ---- @@ -203,7 +203,7 @@ If you get this error, try re-running the command with a higher request timeout kubectl --request-timeout=1m apply -f elasticsearch.yaml ---- -As the default link:https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy[`failurePolicy`] of the webhook is `Ignore`, the above command should succeed after about 30 seconds. This is an indication that the API server cannot contact the webhook server and has foregone validation when creating the resource. +As the default link:https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy[`failurePolicy`] of the webhook is `Ignore`, this command should succeed after about 30 seconds. This is an indication that the API server cannot contact the webhook server and has foregone validation when creating the resource. On link:https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept[GKE private clusters], you may have to add a firewall rule allowing access to port 9443 from the API server so that it can contact the webhook. Check the link:https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules[GKE documentation on firewall rules] and the link:https://github.com/kubernetes/kubernetes/issues/79739[Kubernetes issue] for more details. 
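Such a firewall rule might look like the following sketch, where the network name, master CIDR, and node tag are placeholders for your cluster's values:

[source,sh]
----
gcloud compute firewall-rules create allow-eck-webhook \
  --network=<cluster-network> \
  --source-ranges=<master-ipv4-cidr> \
  --target-tags=<gke-node-tag> \
  --allow=tcp:9443
----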
diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc index 9166920e3c..f1a117940d 100644 --- a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc @@ -157,7 +157,7 @@ If an Elasticsearch node holds the only copy of a shard, this shard becomes unav ** A cluster version upgrade is in progress and some Pods are not up to date. ** There are no initializing or relocating shards. -If the above conditions are met, then ECK can delete a Pod for upgrade even if the cluster health is yellow, as long as the Pod is not holding the last available replica of a shard. +If these conditions are met, then ECK can delete a Pod for upgrade even if the cluster health is yellow, as long as the Pod is not holding the last available replica of a shard. The health of the cluster is deliberately ignored in the following cases: diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/remote-clusters.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/remote-clusters.asciidoc index 7c95b86b19..a375abda47 100644 --- a/docs/orchestrating-elastic-stack-applications/elasticsearch/remote-clusters.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/remote-clusters.asciidoc @@ -71,7 +71,7 @@ kubectl get secret cluster-one-es-transport-certs-public \ -o go-template='{{index .data "ca.crt" | base64decode}}' > remote.ca.crt ---- -You then need to configure the CA as one of the trusted CAs in `cluster-two`. If that cluster is hosted outside of Kubernetes, simply add the CA certificate extracted in the above step to the list of CAs in link:https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#_pem_encoded_files_3[`xpack.security.transport.ssl.certificate_authorities`]. +You then need to configure the CA as one of the trusted CAs in `cluster-two`. If that cluster is hosted outside of Kubernetes, take the CA certificate that you have just extracted and add it to the list of CAs in link:https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#_pem_encoded_files_3[`xpack.security.transport.ssl.certificate_authorities`]. NOTE: Beware of copying the source Secret as-is into a different namespace. Check <<{p}-common-problems-owner-refs, Common Problems: Owner References>> for more information. @@ -115,7 +115,7 @@ spec: version: {version} ---- -. Repeat the above steps to add the CA of `cluster-two` to `cluster-one` as well. +. Repeat steps 1 and 2 to add the CA of `cluster-two` to `cluster-one` as well. === Configure the remote cluster connection through the Elasticsearch REST API @@ -154,4 +154,4 @@ PUT _cluster/settings } ---- <1> Use "proxy" mode as `cluster-two` will be connecting to `cluster-one` through the Kubernetes service abstraction. -<2> Replace `${LOADBALANCER_IP}` with the IP address assigned to the `LoadBalancer` configured above. If you have configured a DNS entry for the service, you can use the DNS name instead of the IP address as well. +<2> Replace `${LOADBALANCER_IP}` with the IP address assigned to the `LoadBalancer` configured in the previous code sample. If you have configured a DNS entry for the service, you can use the DNS name instead of the IP address as well. 
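After the settings are applied, one way to check that the remote connection is established is to query the remote info API on `cluster-two`:

[source,sh]
----
GET _remote/info
----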
diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/snapshots.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/snapshots.asciidoc index 2c270c1541..8f7f7e3a04 100644 --- a/docs/orchestrating-elastic-stack-applications/elasticsearch/snapshots.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/snapshots.asciidoc @@ -118,7 +118,7 @@ spec: secureSettings: - secretName: gcs-credentials ---- -If you did not follow the instructions above and named your GCS credentials file differently, you can still map it to the expected name now. Check <<{p}-es-secure-settings,Secure Settings>> for details. +If you haven't followed these instructions and named your GCS credentials file differently, you can still map it to the expected name now. Check <<{p}-es-secure-settings,Secure Settings>> for details. . Apply the modifications: + [source,bash] diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/update-strategy.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/update-strategy.asciidoc index d109e97529..cbfdf0cbe2 100644 --- a/docs/orchestrating-elastic-stack-applications/elasticsearch/update-strategy.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/update-strategy.asciidoc @@ -58,4 +58,4 @@ spec: * Due to the safety measures employed by the operator, certain `changeBudget` might prevent the operator from making any progress . For example, with `maxSurge` set to 0, you cannot remove the last data node from one `nodeSet` and add a data node to a different `nodeSet`. In this case, the operator cannot create the new node because `maxSurge` is 0, and it cannot remove the old node because there are no other data nodes to migrate the data to. * For certain complex configurations, the operator might not be able to deduce the optimal order of operations necessary to achieve the desired outcome. If progress is blocked, you may need to update the `maxSurge` setting to a higher value than the theoretical best to help the operator make progress in that case. -If any of the above occurs, the operator generates logs to indicate that upscaling or downscaling are limited by `maxSurge` or `maxUnavailable` settings. +In these three cases, the operator generates logs to indicate that upscaling or downscaling are limited by `maxSurge` or `maxUnavailable` settings. diff --git a/docs/orchestrating-elastic-stack-applications/managing-compute-resources.asciidoc b/docs/orchestrating-elastic-stack-applications/managing-compute-resources.asciidoc index c71f7ceb94..1b7d8c4672 100644 --- a/docs/orchestrating-elastic-stack-applications/managing-compute-resources.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/managing-compute-resources.asciidoc @@ -303,7 +303,7 @@ spec: type: Container ---- -With the above restriction in place, if you create an Elasticsearch object without defining the `resources` section, you will get the following error: +With this limit range in place, if you create an Elasticsearch object without defining the `resources` section, you will get the following error: ................................... 
Cannot create pod elasticsearch-sample-es-ldbgj48c7r: pods "elasticsearch-sample-es-ldbgj48c7r" is forbidden: minimum memory usage per Container is 3Gi, but request is 2Gi diff --git a/docs/orchestrating-elastic-stack-applications/security/rotate-credentials.asciidoc b/docs/orchestrating-elastic-stack-applications/security/rotate-credentials.asciidoc index 9c88c173bf..5e6f1f2dd3 100644 --- a/docs/orchestrating-elastic-stack-applications/security/rotate-credentials.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/security/rotate-credentials.asciidoc @@ -23,7 +23,7 @@ You can force the auto-generated credentials to be regenerated with new values b kubectl delete secret quickstart-es-elastic-user ---- -CAUTION: If you are using the `elastic` user credentials in your own applications, they will fail to connect to Elasticsearch and Kibana after the above step. It is not recommended to use `elastic` user credentials for production use cases. Always <<{p}-users-and-roles,create your own users with restricted roles>> to access Elasticsearch. +CAUTION: If you are using the `elastic` user credentials in your own applications, they will fail to connect to Elasticsearch and Kibana after you run this command. It is not recommended to use `elastic` user credentials for production use cases. Always <<{p}-users-and-roles,create your own users with restricted roles>> to access Elasticsearch. To regenerate all auto-generated credentials in a namespace, run the following command: @@ -32,4 +32,4 @@ To regenerate all auto-generated credentials in a namespace, run the following c kubectl delete secret -l eck.k8s.elastic.co/credentials=true ---- -CAUTION: The above command regenerates auto-generated credentials of *all* Elastic Stack applications in the namespace. +CAUTION: This command regenerates auto-generated credentials of *all* Elastic Stack applications in the namespace. diff --git a/docs/release-notes/highlights-1.9.0.asciidoc b/docs/release-notes/highlights-1.9.0.asciidoc index 6cefef7707..169f8089c5 100644 --- a/docs/release-notes/highlights-1.9.0.asciidoc +++ b/docs/release-notes/highlights-1.9.0.asciidoc @@ -35,4 +35,4 @@ Following the Elastic Stack licensing changes in `7.11.0`, ECK `1.9.0` moves to === Known issues - On Openshift versions 4.6 and below, when installing or upgrading to 1.9.[0,1], the operator will be stuck in a state of `Installing` within the Openshift UI, and found in a `CrashLoopBackoff` within Kubernetes because of Webhook certificate location mismatches. More information and work-around can be found in link:https://github.com/elastic/cloud-on-k8s/issues/5191[this issue]. -- When using the `elasticsearchRef` mechanism with Elastic Agent in version 7.17 and later its Pods will enter a `CrashLoopBackoff`. The issue will be fixed in ECK 2.0 for Elasticsearch versions 8.0 and above. A workaround is described in link:https://github.com/elastic/cloud-on-k8s/issues/5323#issuecomment-1028954034[this issue]. +- When using the `elasticsearchRef` mechanism with Elastic Agent in version 7.17 and later its Pods will enter a `CrashLoopBackoff`. The issue will be fixed in ECK 2.0 for Elasticsearch versions 8.0 and later. A workaround is described in link:https://github.com/elastic/cloud-on-k8s/issues/5323#issuecomment-1028954034[this issue]. 
diff --git a/docs/release-notes/highlights-1.9.1.asciidoc b/docs/release-notes/highlights-1.9.1.asciidoc index 370364bd60..d34967c880 100644 --- a/docs/release-notes/highlights-1.9.1.asciidoc +++ b/docs/release-notes/highlights-1.9.1.asciidoc @@ -20,4 +20,4 @@ This release introduces a preemptive measure to mitigate link:https://github.com - When using the Red Hat certified version of the operator, automatic upgrades from previous versions of ECK do not work. To upgrade uninstall the old ECK operator and install the new version manually. Because CRDs remain in place after uninstalling, this operation should not negatively affect existing Elastic Stack deployments managed by ECK. - On Openshift versions 4.6 and below, when installing or upgrading to 1.9.[0,1], the operator will be stuck in a state of `Installing` within the Openshift UI, and seen in a `CrashLoopBackoff` within Kubernetes because of Webhook certificate location mismatches. More information and workaround can be found in link:https://github.com/elastic/cloud-on-k8s/issues/5191[this issue]. -- When using the `elasticsearchRef` mechanism with Elastic Agent in version 7.17 and later its Pods will enter a `CrashLoopBackoff`. The issue will be fixed in ECK 2.0 for Elasticsearch versions 8.0 and above. A workaround is described in link:https://github.com/elastic/cloud-on-k8s/issues/5323#issuecomment-1028954034[this issue]. +- When using the `elasticsearchRef` mechanism with Elastic Agent in version 7.17 and later its Pods will enter a `CrashLoopBackoff`. The issue will be fixed in ECK 2.0 for Elasticsearch versions 8.0 and later. A workaround is described in link:https://github.com/elastic/cloud-on-k8s/issues/5323#issuecomment-1028954034[this issue].