[a11y] Fix "above" occurrences (#5672)
* Fix above occurrences

* edits

* edits

* Update docs/orchestrating-elastic-stack-applications/elasticsearch/remote-clusters.asciidoc

Co-authored-by: Thibault Richard <thbkrkr@users.noreply.github.com>

* Update docs/advanced-topics/network-policies.asciidoc

Co-authored-by: Thibault Richard <thbkrkr@users.noreply.github.com>

Co-authored-by: Thibault Richard <thbkrkr@users.noreply.github.com>
alaudazzi and thbkrkr committed May 23, 2022
1 parent 7485024 commit 4a060b6
Showing 22 changed files with 38 additions and 38 deletions.
2 changes: 1 addition & 1 deletion docs/advanced-topics/network-policies.asciidoc
@@ -55,7 +55,7 @@ The minimal set of permissions required are as follows:
|===


Assuming that the Kubernetes API server IP address is `10.0.0.1`, the following network policy implements the rules above.
Assuming that the Kubernetes API server IP address is `10.0.0.1`, the following network policy implements this minimal set of permissions.

NOTE: Run `kubectl cluster-info | grep master` to obtain the API server IP address for your cluster.
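
The policy itself sits below this excerpt. As a rough illustration only, and not the exact policy from this page, an egress-only policy of that shape could look like the following sketch; the namespace, policy name, and Pod selector label are assumptions, and the API server port may differ in your cluster:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: eck-operator-egress        # illustrative name
  namespace: elastic-system        # assumed operator namespace
spec:
  podSelector:
    matchLabels:
      control-plane: elastic-operator   # assumed operator Pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.1/32      # the API server address from the example
      ports:
        - protocol: TCP
          port: 443                # adjust to your API server port
    - ports:                       # allow DNS resolution
        - protocol: UDP
          port: 53
----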

2 changes: 1 addition & 1 deletion docs/advanced-topics/openshift.asciidoc
@@ -64,7 +64,7 @@ oc new-project elastic # creates the elastic project
oc adm policy add-role-to-user elastic-operator developer -n elastic
----
+
In the example above the user `developer` is allowed to manage Elastic resources in the namespace `elastic`.
In this example the user `developer` is allowed to manage Elastic resources in the namespace `elastic`.

[id="{p}-openshift-deploy-elasticsearch"]
== Deploy an Elasticsearch instance with a route
2 changes: 1 addition & 1 deletion docs/design/0002-global-operator/0002-global-operator.md
@@ -54,7 +54,7 @@ Examples of the "placeholder" controllers (additional controllers) that would be

### Hybrid approach

Allow for a hybrid approach where it is possible to enable the components of both operators (global and namespaced) in a single operator in order to simplify small-scale deployments, or vice-versa, where the global operator takes on all responsibilities of the namespaced operator in addition to the installation wide ones. This addresses the main concern above with the drawback that it might not be identical to a production-style deployment.
Allow for a hybrid approach where it is possible to enable the components of both operators (global and namespaced) in a single operator in order to simplify small-scale deployments, or vice-versa, where the global operator takes on all responsibilities of the namespaced operator in addition to the installation-wide ones. This addresses the main concern, with the drawback that it might not be identical to a production-style deployment.


## Decision Outcome
2 changes: 1 addition & 1 deletion docs/design/0003-associations/0003-associations.md
@@ -233,7 +233,7 @@ spec:
snapshot-provider: my-provider
```

Using the above provider we can stamp out instances of the following resource for each matched cluster, and we can potentially keep it up to date with changes to the provider config as well.
Using the `SnapshotRepositoryProvider` we can stamp out instances of the following resource for each matched cluster, and we can potentially keep it up to date with changes to the provider config as well.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
4 changes: 2 additions & 2 deletions docs/design/0007-local-volume-total-capacity.md
@@ -51,11 +51,11 @@ On startup, the node volume provisioner inspects the available disk space (for e

#### PVC/PV binding

When a PVC with storage class `elastic-local` is created, kubernetes PersistentVolume controller will automatically bind the PersistentVolume created above to this PVC (or the PVC created by another node). Storage capacity is taken into consideration here: if the PV spec specifies a 10TB capacity, it will not be bound to a PVC claiming 20TB. However, a 1GB PVC can still be bound to our 10TB PV. Effectively wasting our disk space here.
When a PVC with storage class `elastic-local` is created, the Kubernetes PersistentVolume controller will automatically bind the PersistentVolume created by the node volume provisioner to this PVC (or the PVC created by another node). Storage capacity is taken into consideration here: if the PV spec specifies a 10TB capacity, it will not be bound to a PVC claiming 20TB. However, a 1GB PVC can still be bound to our 10TB PV, effectively wasting disk space.

So how do we avoid wasting disk space in this scenario? As soon as the PV is bound to a PVC, our node volume provisioner gets notified (it's watching PVs it created). By retrieving both PV and matching PVC, it notices the PVC requests only 1GB out of the 10TB available. As a result, it updates the PersistentVolume spec to match those 1GB. The PV stays bound to the same PVC, even though its capacity was changed. The actual volume corresponding to this PV can then be created by the driver running on the node, as done in the current implementation.

We are left with 9.999TB available on the node: the node volume provisioner creates a new PersistentVolume with capacity 9.999TB, that can be bound to any PVC by the kubernetes PersistentVolume controller. If any PV gets deleted, the node volume provisioner reclaims the disk space freed by updating the PV capacity. For instance if the 1GB pod from the example above is deleted, the 9.999TB PV can be updated to 10TB.
We are left with 9.999TB available on the node: the node volume provisioner creates a new PersistentVolume with a capacity of 9.999TB, which can be bound to any PVC by the Kubernetes PersistentVolume controller. If any PV gets deleted, the node volume provisioner reclaims the disk space freed by updating the PV capacity. For example, if the 1GB pod is deleted, the 9.999TB PV can be updated to 10TB.
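
As a hypothetical sketch of what the provisioner-managed PV from this example could look like right after creation (the name, path, and volume source are made up for illustration; the design does not prescribe them):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-local-node-1-pv-0           # made-up name
spec:
  storageClassName: elastic-local
  capacity:
    storage: 10Ti                            # full free space on the node; shrunk
                                             # to ~1Gi once a 1GB PVC binds to it
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  local:
    path: /mnt/elastic-local-volumes/pv-0    # made-up path; the actual volume
                                             # source depends on the driver
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
```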

To summarize:

14 changes: 7 additions & 7 deletions docs/design/0009-pod-reuse-es-restart.md
@@ -141,7 +141,7 @@ Note: this does not represent the _entire_ reconciliation loop, it focuses on th
* Check if ES is stopped (`GET /es/status`), or re-queue
* Annotate pod with `restart-phase: start`
* If annotated with `stop-coordinated`
* Apply the same steps as above, but:
* Apply the same steps as for the `stop` annotation, but:
* wait for all ES processes to be stopped instead of only the one in the current pod
* annotate pod with `restart-phase: start-coordinated`
* Handle pods in a `start` phase
@@ -154,16 +154,16 @@ Note: this does not represent the _entire_ reconciliation loop, it focuses on th
* Enable shards allocations
* Remove the `restart-phase` annotation from the pod
* If annotated with `start-coordinated`:
* Perform the same steps as above, but wait until *all* ES processes are started before enabling shards allocations
* Perform the same steps as for the `start` annotation, but wait until *all* ES processes are started before enabling shards allocations
* Garbage-collect useless resources
* Configuration secrets that do not match any pod and have existed for more than, for example, 15 minutes are safe to delete
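
To illustrate the proposed mechanism, the whole state machine is carried by a single annotation on the pod. A heavily trimmed, hypothetical pod (the name, image, and container spec are placeholders and not how the operator actually builds pods) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-cluster-es-node-0          # hypothetical pod name
  annotations:
    # moves through phases such as `stop` -> `start` (or their coordinated
    # variants) and is removed once the restart is complete
    restart-phase: stop
spec:
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0   # illustrative
```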

#### State machine specifics

It is important in the algorithm outlined above that:
In the reconciliation loop algorithm, it is important to note that:

* any step in a given phase is idempotent. For instance, it should be ok to run steps of the `stop` phase over and over again.
* transition to the next step is resilient to stale cache. If a pod is annotated with the `start` phase, it should be ok to perform all steps of the `stop` phase again (no-op). However the cache cannot go back in time: once we reach the `start` phase we must not perform the `stop` phase at the next iteration. Our apiserver and cache implementation consistency model guarantee this behaviour.
* Any step in a given phase is idempotent. For instance, it should be OK to run steps of the `stop` phase over and over again.
* Transition to the next step is resilient to stale cache. If a pod is annotated with the `start` phase, it should be OK to perform all steps of the `stop` phase again (no-op). However, the cache cannot go back in time: once we reach the `start` phase, we must not perform the `stop` phase at the next iteration. The consistency model of our apiserver and cache implementation guarantees this behaviour.
* The operator can restart at any point: on restart it should get back to the current phase.
* A pod that should be reused will be reflected in the results of the comparison algorithm. However, once its configuration has been updated (but before it is actually restarted), it might not be reflected anymore. The comparison would then be based on the "new" configuration (not yet applied to the ES process), and the pod would require no change. That's OK: the ES process will still eventually be restarted with this correct new configuration, since it is annotated in the `start` phase.
* If a pod is no longer requested for reuse (for example, the user changed their mind and reverted the ES spec to the previous version) but is in the middle of a restart process, it will still go through that restart process. Depending on when the user reverted the ES spec, compared to the pod's current phase in the state machine:
@@ -181,10 +181,10 @@ It is important in the algorithm outlined above that:

#### Extensions to other use cases (TBD if worth implementing)

* We **don't** need rolling restarts in the context of TLS and license switch, but it seems easy to implement in the algorithm outlined above, to cover other use cases.
* We **don't** need rolling restarts in the context of TLS and license switch, but it seems easy to implement in the reconciliation loop algorithm, to cover other use cases.
* We could catch any `restart-phase: schedule-rolling` set by the user on the Elasticsearch resource, and apply it to all pods of this cluster. This would allow the user to request a cluster restart himself. The user can also apply the annotation to the pods directly: this is the operator "restart API".
* Applying the same annotations mechanism with something such as `stop: true` could allow us (or the user) to stop a particular node that misbehaves.
* Out of scope: it should be possible to adapt the algorithm above to replace a "pod reuse" by a "persistent volume reuse". Pods eligible for reuse at the end of the comparison are also eligible for persistent volume reuse. In such case, we'd need to stop the entire pod instead of stopping the ES process. The new pod would be created at the next reconciliation iteration, with a new config, but would reuse one of the available persistent volumes out there. The choice between pod reuse or PV reuse could be specified in the ES resource spec?
* Out of scope: it should be possible to adapt the reconciliation loop algorithm to replace "pod reuse" with "persistent volume reuse". Pods eligible for reuse at the end of the comparison are also eligible for persistent volume reuse. In that case, we'd need to stop the entire pod instead of stopping the ES process. The new pod would be created at the next reconciliation iteration, with a new config, but would reuse one of the available persistent volumes. The choice between pod reuse and PV reuse could be specified in the ES resource spec?

## Decision Outcome

4 changes: 2 additions & 2 deletions docs/design/0010-license-checks.md
@@ -47,7 +47,7 @@ The license controller MUST create controller licenses only when either a valid
Enterprise license or a valid Enterprise trial license is present in the system. It CAN
issue controller licenses with shorter lifetimes than the Enterprise license and
auto-extend them as needed to limit the impact of accidental license leaks. But license leaks
are currently understood to be much less a concern than cluster licenses leaks as controller licenses have no validity
are currently understood to be much less of a concern than cluster license leaks as controller licenses have no validity
outside of the operator installation that has created them.


@@ -86,7 +86,7 @@ secret would need to be deployed into the managed namespace not into
the control plane namespace. Unless of course we run everything in one
namespace anyway or we implement a custom client
that has access to the control plane namespace of the namespace
operator (the latter is the underlying assumption for the graph above).
operator (the latter is the underlying assumption for the license controller graph).

### Positive Consequences

4 changes: 2 additions & 2 deletions docs/design/0011-process-manager.md
@@ -81,10 +81,10 @@ Where to run the keystore updater?
How to perform cluster restart?
* Destroy the pod and recreate it
* -- depending on storage class we might not be able to recreate the pod where the volume resides. Only recovery at this point is manual restore from snapshot.
Considering volumes local to a node: during the interval between the pod being delete and a new pod being scheduled to reuse the same volume, there is no guarantee
Considering volumes local to a node: during the interval between the pod being deleted and a new pod being scheduled to reuse the same volume, there is no guarantee
that no other pod will be scheduled on that node, taking up all resources available on the node and preventing the replacement pod from being scheduled.
* Inject a process manager into the standard Elasticsearch container/image
* ++ would allow us to restart without recreating the pod, unless we need to change pod resources or environment variables, in which case the above applies
* ++ would allow us to restart without recreating the pod, unless we need to change pod resources or environment variables, in which case the pod has to be destroyed and recreated
* ~ has the disadvantage of being fairly intrusive and complex (copying binaries through initcontainers, overriding command etc)
* Use a liveness probe to make Kubernetes restart the container
* -- hard to coordinate across an ES cluster
2 changes: 1 addition & 1 deletion docs/operating-eck/installing-eck.asciidoc
@@ -81,7 +81,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na
[NOTE]
====
The `eck-operator` chart contains several pre-defined profiles to help you install the operator in different configurations. These profiles can be found in the root of the chart directory, prefixed with `profile-`. For example, the restricted configuration shown above is defined in the `profile-restricted.yaml` file, and can be used as follows:
The `eck-operator` chart contains several pre-defined profiles to help you install the operator in different configurations. These profiles can be found in the root of the chart directory, prefixed with `profile-`. For example, the restricted configuration illustrated in the previous code extract is defined in the `profile-restricted.yaml` file, and can be used as follows:
[source,sh]
----
4 changes: 2 additions & 2 deletions docs/operating-eck/operator-config.asciidoc
@@ -36,7 +36,7 @@ ECK can be configured using either command line flags or environment variables.
|metrics-port |0 |Prometheus metrics port. Set to 0 to disable the metrics endpoint.
|namespaces |"" |Namespaces in which this operator should manage resources. Accepts multiple comma-separated values. Defaults to all namespaces if empty or unspecified.
|operator-namespace |"" |Namespace the operator runs in. Required.
|set-default-security-context |true | Enables adding a default Pod Security Context to Elasticsearch Pods in Elasticsearch `8.0.0` and above. `fsGroup` is set to `1000` by default to match Elasticsearch container default UID. This behavior might not be appropriate for OpenShift and PSP-secured Kubernetes clusters, so it can be disabled.
|set-default-security-context |true | Enables adding a default Pod Security Context to Elasticsearch Pods in Elasticsearch `8.0.0` and later. `fsGroup` is set to `1000` by default to match Elasticsearch container default UID. This behavior might not be appropriate for OpenShift and PSP-secured Kubernetes clusters, so it can be disabled.
|ubi-only | false | Use only UBI container images to deploy Elastic Stack applications. UBI images are only available from 7.10.0 onward.
|validate-storage-class | true | Specifies whether the operator should retrieve storage classes to verify volume expansion support. Can be disabled if cluster-wide storage class RBAC access is not available.
|webhook-cert-dir |"{TempDir}/k8s-webhook-server/serving-certs" |Path to the directory that contains the webhook server key and certificate.
@@ -89,7 +89,7 @@ The operator can be started using any of the following methods to achieve the sa
LOG_VERBOSITY=2 METRICS_PORT=6060 NAMESPACES="ns1,ns2,ns3" ./elastic-operator manager
----

If you use a combination of all or some of the methods listed above, the descending order of precedence in case of a conflict is as follows:
If you use a combination of all or some of these methods, the descending order of precedence in case of a conflict is as follows:

- Flag
- Environment variable
@@ -78,7 +78,7 @@ spec:
serviceAccountName: associated-resource-sa
----

In the above example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`.
In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`.
You can find link:{eck_github}/blob/{eck_release_branch}/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml[a complete example in the ECK GitHub repository].

NOTE: If the `serviceAccountName` is not set, ECK uses the default service account assigned to the pod by the link:https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller[Service Account Admission Controller].
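
For instance, a Kibana resource associated with an Elasticsearch cluster in another namespace could reference that service account as in the following sketch; all names, namespaces, and the version are placeholders, and the complete example linked from this page remains the reference:

[source,yaml]
----
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: associated-resource
  namespace: associated-resource-ns
spec:
  version: 8.2.0
  count: 1
  elasticsearchRef:
    name: referenced-resource
    namespace: referenced-resource-ns
  serviceAccountName: associated-resource-sa
----
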
@@ -220,4 +220,4 @@ On OpenShift the same workaround can be performed in the UI by clicking on "Unin
If you accidentally upgrade one of your Elasticsearch clusters to a version that does not exist or a version to which a direct upgrade is not possible from your currently deployed version, a validation will prevent you from going back to the previous version.
The reason for this validation is that ECK will not allow downgrades, as this is not supported by Elasticsearch, and once the data directory of Elasticsearch has been upgraded there is no way back to the old version without a link:https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html[snapshot restore].

The two scenarios described above however are exceptions because Elasticsearch never started up successfully. If you annotate the Elasticsearch resource with `eck.k8s.elastic.co/disable-downgrade-validation=true` ECK will allow you to go back to the old version at your own risk. Please remove the annotation afterwards to prevent accidental downgrades and reduced availability.
These two upgrading scenarios, however, are exceptions because Elasticsearch never started up successfully. If you annotate the Elasticsearch resource with `eck.k8s.elastic.co/disable-downgrade-validation=true` ECK allows you to go back to the old version at your own risk. Remove the annotation afterwards to prevent accidental downgrades and reduced availability.
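
For example, the annotation can be added directly to the Elasticsearch manifest. In this sketch the cluster name `quickstart` and the version are placeholders for your own values:

[source,yaml]
----
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
  annotations:
    eck.k8s.elastic.co/disable-downgrade-validation: "true"
spec:
  version: 8.1.0   # the previously deployed version you are reverting to
  nodeSets:
  - name: default
    count: 3
----
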
@@ -20,7 +20,7 @@ Most common issues can be identified and resolved by following these instruction
- <<{p}-suspend-elasticsearch>>
- <<{p}-capture-jvm-heap-dumps>>
If you are still unable to find a solution to your problem after following the above instructions, ask for help:
If you are still unable to find a solution to your problem, ask for help:

include::../../help.asciidoc[]
