= Ephemeral storage

[role="_abstract"]
You can use ephemeral storage to run {Project} ({ProjectShort}) without persistently storing data in your {OpenShift} cluster.

[WARNING]
If you use ephemeral storage, you might experience data loss if a pod is restarted, updated, or rescheduled onto another node. Use ephemeral storage only for development or testing, not for production environments.
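
The following sketch shows one way that ephemeral storage might be selected in the `ServiceTelemetry` object. The `storage.strategy` parameter name is an illustrative assumption and is not confirmed by this section; check the `backends` parameter reference for the authoritative option names.

[source,yaml]
----
# Illustrative sketch only: the storage.strategy parameter name is assumed.
spec:
  backends:
    metrics:
      prometheus:
        enabled: true
        storage:
          strategy: ephemeral
----
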
= Installation size of {OpenShift}

[role="_abstract"]
The size of your {OpenShift} installation depends on the following factors:

* The number of nodes you want to monitor.
* The number of metrics you want to collect.
* The resolution of metrics.
* The length of time that you want to store the data.
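
For example, with purely illustrative numbers: monitoring 10 nodes that each expose 2,000 metrics at a 30-second resolution produces roughly 670 samples per second. At approximately 2 bytes per stored sample, that is about 115 MB of metrics storage per day, or roughly 3.5 GB for 30 days of retention, before any overhead or replication.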

Installation of {Project} ({ProjectShort}) depends on the existing {OpenShift} environment. Ensure that you install monitoring for {OpenStack} on a platform separate from your {OpenStack} environment. You can install {OpenShift} on bare metal or on other supported cloud platforms. For more information about installing {OpenShift}, see https://docs.openshift.com/container-platform/{SupportedOpenShiftVersion}/welcome/index.html#cluster-installer-activities[OpenShift Container Platform {SupportedOpenShiftVersion} Documentation].

The size of your {OpenShift} environment depends on the infrastructure that you select. For more information about the minimum resource requirements when you install {OpenShift} on bare metal, see https://docs.openshift.com/container-platform/{SupportedOpenShiftVersion}/installing/installing_bare_metal/installing-bare-metal.html#minimum-resource-requirements_installing-bare-metal[Minimum resource requirements] in the _Installing a cluster on bare metal_ guide. For the installation requirements of the public and private cloud platforms on which you can install {OpenShift}, see the corresponding installation documentation for your cloud platform of choice.
= Persistent volumes

[role="_abstract"]
{ProjectShort} uses persistent storage in {OpenShift} to instantiate the volumes dynamically so that Prometheus and ElasticSearch can store metrics and events.

When persistent storage is enabled through the Service Telemetry Operator, the Persistent Volume Claims requested in an {ProjectShort} deployment result in an access mode of RWO (ReadWriteOnce). If your environment contains pre-provisioned persistent volumes, ensure that volumes with the RWO access mode are available in the default configured `storageClass` of {OpenShift}.
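
As a quick check before you enable persistent storage, you can list the storage classes and any pre-provisioned persistent volumes in your cluster. The following commands are generic {OpenShift} client commands, shown here only as an example:

[source,bash]
----
# List the available storage classes and note which one is marked as the default
$ oc get storageclass

# List pre-provisioned persistent volumes and check their access modes for RWO
$ oc get pv
----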

.Additional resources
* For more information about configuring persistent storage for {OpenShift}, see https://docs.openshift.com/container-platform/{SupportedOpenShiftVersion}/storage/understanding-persistent-storage.html[Understanding persistent storage.]

* For more information about recommended configurable storage technology in {OpenShift}, see https://docs.openshift.com/container-platform/{SupportedOpenShiftVersion}/scalability_and_performance/optimizing-storage.html#recommended-configurable-storage-technology_persistent-storage[Recommended configurable storage technology].
= Primary parameters of the ServiceTelemetry object

[role="_abstract"]
The `ServiceTelemetry` object is made up of the following major configuration parameters:

* `alerting`
* `backends`

Use the `backends` parameter to control which storage backends are available for storage of metrics and events, and to control the enablement of Smart Gateways, as defined by the `clouds` parameter. For more information, see xref:clouds_assembly-installing-the-core-components-of-stf[].

Currently, you can use Prometheus as the metrics storage backend and ElasticSearch as the events storage backend.

[discrete]
=== Enabling Prometheus as a storage backend for metrics

.Procedure
. To enable Prometheus as the metrics storage backend, set the value of the `backends.metrics.prometheus.enabled` parameter to `true` in the `ServiceTelemetry` object:
+
[source,yaml]
----
spec:
  backends:
    metrics:
      prometheus:
        enabled: true
----

[discrete]
=== Enabling ElasticSearch as a storage backend for events

To enable events support in {ProjectShort}, you must enable the Elastic Cloud on Kubernetes Operator. For more information, see xref:subscribing-to-the-elastic-cloud-on-kubernetes-operator_assembly-installing-the-core-components-of-stf[].

By default, storage of events on ElasticSearch is disabled. For more information, see xref:deploying-stf-to-the-openshift-environment-with-elasticsearch_assembly-installing-the-core-components-of-stf[].

.Procedure

. To enable ElasticSearch as the events storage backend, set the value of the `backends.events.elasticsearch.enabled` parameter to `true` in the `ServiceTelemetry` object:
+
[source,yaml]
----
spec:
  backends:
    events:
      elasticsearch:
        enabled: true
----

[id="clouds_{context}"]
== clouds

Each item of the `clouds` parameter represents a cloud instance.

You can use the optional Boolean parameter `debugEnabled` within the `collectors` parameter to enable additional console debugging in the running Smart Gateway pod.
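
The following sketch shows where `debugEnabled` might sit within a cloud instance definition. The cloud name and the collector fields, such as `cloud1` and `collectorType`, are illustrative assumptions and are not confirmed by this section:

[source,yaml]
----
# Illustrative sketch only: the cloud name and collector fields are assumed.
clouds:
  - name: cloud1
    metrics:
      collectors:
        - collectorType: collectd
          debugEnabled: true
----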

.Additional resources

* For more information about deleting default Smart Gateways, see xref:deleting-the-default-smart-gateways_assembly-completing-the-stf-configuration[].

* For more information about how to configure multiple clouds, see xref:configuring-multiple-clouds_assembly-completing-the-stf-configuration[].

[id="alerting_{context}"]
== alerting

== highAvailability

Use the `highAvailability` parameter to control the instantiation of multiple copies of {ProjectShort} components.
[id="transports_{context}"]
== transports

Use the `transports` parameter to control the enablement of the message bus for a {ProjectShort} deployment. The only transport currently supported is {MessageBus}. By default, the `qdr` transport is enabled.
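
A minimal sketch of the corresponding configuration, assuming the same `enabled` flag pattern that the `backends` parameter uses, might look like the following:

[source,yaml]
----
# Illustrative sketch only: assumes the enabled flag pattern used elsewhere in the object.
spec:
  transports:
    qdr:
      enabled: true
----
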
// Module included in the following assemblies:
//
// <List assemblies here, each on a new line>

[id="resource-allocation_{context}"]
= Resource allocation

[role="_abstract"]
To enable the scheduling of pods within the {OpenShift} infrastructure, you need resources for the components that are running. If you do not allocate enough resources, pods remain in a `Pending` state because they cannot be scheduled.

The amount of resources that you require to run {ProjectShort} depends on your environment and the number of nodes and clouds that you want to monitor.
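
For example, if pods cannot be scheduled because of insufficient resources, you can list the pods that are stuck in the `Pending` state and inspect the scheduling events for one of them. The following are generic {OpenShift} client commands, shown here only as an example:

[source,bash]
----
# List pods that cannot be scheduled
$ oc get pods --field-selector=status.phase=Pending

# Show the scheduling events for a specific pending pod (replace <pod-name>)
$ oc describe pod <pod-name>
----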

.{ProjectShort} components
[cols="65,15,20"]
|===
|Component |Client ({OpenStack}) |Server ({OpenShift})

|An AMQP 1.x compatible messaging bus to shuttle the metrics to {ProjectShort} for storage in Prometheus
|yes
|yes

|===

[IMPORTANT]
To ensure that the monitoring platform can report operational problems with your cloud, do not install {ProjectShort} on the same infrastructure that you are monitoring.

[[osp-stf-overview]]
.Service Telemetry Framework architecture overview
image::OpenStack_STF_Overview_37_1019_arch.png[Service Telemetry Framework architecture overview]

ifeval::["{build}" == "downstream"]

[NOTE]
The {ProjectShort} data collection components, collectd and Ceilometer, and the transport components, {MessageBus} and Smart Gateway, are fully supported. The data storage components, Prometheus and ElasticSearch, including the Operator artifacts, and the visualization component, Grafana, are community-supported and are not officially supported.

endif::[]

For metrics, on the client side, collectd provides infrastructure metrics without project data, and Ceilometer provides {OpenStack} platform data based on projects or user workload. Both Ceilometer and collectd deliver data to Prometheus by using the {MessageBus} transport. On the server side, a Golang application called the Smart Gateway takes the data stream from the bus and exposes it as a local scrape endpoint for Prometheus.

If you plan to collect and store events, collectd and Ceilometer deliver event data to the server side by using the {MessageBus} transport. Another Smart Gateway writes the data to the ElasticSearch datastore.

Server-side {ProjectShort} monitoring infrastructure consists of the following layers:

image::STF_Overview_37_0819_deployment_prereq.png[Server-side STF monitoring infrastructure]


.Additional resources

* For more information about how to deploy {OpenShift}, see the https://access.redhat.com/documentation/en-us/openshift_container_platform/{SupportedOpenShiftVersion}/[{OpenShift} product documentation].


[id="creating-a-servicetelemetry-object-in-openshift_{context}"]
= Creating a ServiceTelemetry object in {OpenShift}

[role="_abstract"]
Create a `ServiceTelemetry` object in {OpenShift} to instantiate the supporting components for a {Project} ({ProjectShort}) deployment. For more information, see xref:primary-parameters-of-the-servicetelemetry-object[].

.Procedure

. To create a `ServiceTelemetry` object that results in an {ProjectShort} deployment that uses the default values, create a `ServiceTelemetry` object with an empty `spec` parameter:
+
[source,bash]
----
spec: {}
EOF
----
+
To override a default value, define only the parameter that you want to override. In this example, enable ElasticSearch by setting `enabled` to `true`:
+
[source,yaml]
----
spec:
  backends:
    events:
      elasticsearch:
        enabled: true
EOF
----
+
Creating a `ServiceTelemetry` object with an empty `spec` parameter results in an {ProjectShort} deployment with the following default settings. To override these defaults, add the configuration to the spec parameter:
+
[source,yaml]
----
PLAY RECAP *********************************************************************
localhost : ok=54 changed=0 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0
----

.Verification
* To determine that all workloads are operating nominally, view the pods and the status of each pod:
+
NOTE: If you set `backends.events.elasticsearch.enabled: true`, the notification Smart Gateways report `Error` and `CrashLoopBackOff` error messages for a period of time before ElasticSearch starts.

+
[source,bash,options="nowrap"]
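----
# The original pod listing is collapsed in this view. As an illustrative check,
# `oc get pods` shows the status of each pod in the current project.
$ oc get pods
----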