From 6dfcf7538fc90993f247be827e9baba0a1164b83 Mon Sep 17 00:00:00 2001 From: Apoorv Kudesia Date: Wed, 29 Jan 2025 15:44:48 +0530 Subject: [PATCH 01/10] SUMO-251067 | Apoorv | Add. ActiveMQ docs updation --- .../containers-orchestration/activemq.md | 279 ++---------------- 1 file changed, 18 insertions(+), 261 deletions(-) diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index 9e81898cc8..2b3b595a5f 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -53,10 +53,9 @@ This App has been tested with following ActiveMQ versions: Configuring log and metric collection for the ActiveMQ App includes the following tasks: -### Step 1: Configure Fields in Sumo Logic - -Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). +### Step 1: Fields in Sumo Logic +Following fields will be created as part of app installation process, For information on setting up fields, see [Sumo Logic Fields](https://help.sumologic.com/docs/manage/fields/) -If you're using ActiveMQ in a Kubernetes environment, create the fields: +If you're using ActiveMQ in a Kubernetes environment, then these fields will be created: * `pod_labels_component` * `pod_labels_environment` * `pod_labels_messaging_system` @@ -76,7 +75,7 @@ If you're using ActiveMQ in a Kubernetes environment, create the fields: -If you're using ActiveMQ in a non-Kubernetes environment, create the fields: +If you're using ActiveMQ in a non-Kubernetes environment, then these fields will be created: * `component` * `environment` * `messaging_system` @@ -270,26 +269,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir ``` 5. 
Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **Add an FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Messaging Application Components. To do so: - 1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**. - 2. Click the + Add button on the top right of the table. - 3. The **Add Field Extraction Rule** form will appear. Enter the following options: - * **Rule Name**. Enter the name as **App Observability - Messaging**. - * **Applied At.** Choose **Ingest Time** - * **Scope**. Select **Specific Data** - * **Scope**: Enter the following keyword search expression: - ```sql - pod_labels_environment=* pod_labels_component=messaging - pod_labels_messaging_system=* pod_labels_messaging_cluster=* - ``` - * **Parse Expression**. Enter the following parse expression: - ```sql - if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment - | pod_labels_component as component - | pod_labels_messaging_system as messaging_system - | pod_labels_messaging_cluster as messaging_cluster - ``` - +3. **FERs to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we will have a Field Extraction Rule automatically created for Messaging Application Components named as **App Observability - Messaging**
@@ -455,247 +435,24 @@ At this point, ActiveMQ logs should start flowing into Sumo Logic.
-## Installing ActiveMQ Monitors - -This section and below contain instructions for installing Sumo Logic Monitors for ActiveMQ, the app, and descriptions of each of the app dashboards. These instructions assume you have already set up the collection as described in [Collect Logs and Metrics for the ActiveMQ](#collecting-logs-and-metrics-for-activemq). - -* To install these alerts, you need to have the Manage Monitors role capability. -* Alerts can be installed by either importing a JSON file or a Terraform script. +## ActiveMQ Monitors -Sumo Logic provides out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you monitor your ActiveMQ clusters. These alerts are built based on metrics and logs datasets and include preset thresholds based on industry best practices and recommendations. For details, see [ActiveMQ Alerts](#activemq-alerts). - -:::note -There are limits to how many alerts can be enabled - please see the[ Alerts FAQ](/docs/alerts/monitors/monitor-faq) for details. -::: +import CreateMonitors from '../../reuse/apps/create-monitors.md'; -### Method 1: Install the monitors by importing a JSON file: - -1. Download the[ JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/ActiveMQ/activemq.json) that describes the monitors. -2. The[ JSON](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/ActiveMQ/activemq.json) contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all ActiveMQ clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text `messaging_system=activemq` with ``. 
Custom filter examples: - * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=activemq-prod.01` - * For alerts applicable to all clusters that start with `activemq-prod`: `messaging_cluster=activemq-prod*` - * For alerts applicable to a specific cluster within a production environment: `messaging_cluster=activemq-1` and `environment=prod`. This assumes you have set the optional environment tag while configuring collection. -3. Go to Manage Data > Alerts > Monitors. -4. Click **Add**. -5. Click Import and then copy-paste the above JSON to import monitors. - -The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the ActiveMQ folder under **Monitors** to configure them. See[ this](/docs/alerts/monitors) document to enable monitors to send notifications to teams or connections. Please see the instructions detailed in Step 4 of this [document](/docs/alerts/monitors/create-monitor). - - -### Method 2: Install the alerts using a Terraform script - -1. Generate an access key and access ID for a user that has the Manage Monitors role capability in Sumo Logic using instructions in [Access Keys](/docs/manage/security/access-keys). To find out which deployment your Sumo Logic account is in, see [Sumo Logic endpoints](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security). -2. [Download and install Terraform 0.13](https://www.terraform.io/downloads.html) or later. -3. Download the Sumo Logic Terraform package for ActiveMQ alerts: The alerts package is available in the Sumo Logic github[ repository](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/tree/main/monitor_packages/ActiveMQ). You can either download it through the “git clone” command or as a zip file. -4. Alert Configuration: After the package has been extracted, navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/ActiveMQ/`. - 1. 
Edit the `activemq.auto.tfvars` file and add the Sumo Logic Access Key, Access Id, and Deployment from Step 1. - ```bash - access_id = "" - access_key = "" - environment = "" - ``` - The Terraform script installs the alerts without any scope filters, if you would like to restrict the alerts to specific clusters or environments, update the variable `'activemq_data_source'`. Custom filter examples: - * A specific cluster `'messaging_cluster=activemq.prod.01'` - * All clusters in an environment `'environment=prod'` - * For alerts applicable to all clusters that start with activemq-prod, your custom filter would be: `'messaging_cluster=activemq-prod*'` - * For alerts applicable to a specific cluster within a production environment, your custom filter would be:`activemq_cluster=activemq-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection) - - All monitors are disabled by default on installation, if you would like to enable all the monitors, set the parameter monitors_disabled to false in this file. - - By default, the monitors are configured in a monitor **folder** called “**ActiveMQ**”, if you would like to change the name of the folder, update the monitor folder name in “folder” key at **activemq.auto.tfvars** file. - -5. If you would like the alerts to send email or connection notifications, modify the file **activemq_notifications.auto.tfvars** and populate `connection_notifications` and `email_notifications` as per below examples. 
-```bash title="Pagerduty Connection Example" -connection_notifications = [ - { - connection_type = "PagerDuty", - connection_id = "", - payload_override = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - }, - { - connection_type = "Webhook", - connection_id = "", - payload_override = "", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - } - ] -``` - -Replace `` with the connection id of the webhook connection. The webhook connection id can be retrieved by calling the[ Monitors API](https://api.sumologic.com/docs/#operation/listConnections). - -For overriding payload for different connection types, see [Set Up Webhook Connections](/docs/alerts/webhook-connections/set-up-webhook-connections). - -```bash title="Email Notifications Example" -email_notifications = [ - { - connection_type = "Email", - recipients = ["abc@example.com"], - subject = "Monitor Alert: {{TriggerType}} on {{Name}}", - time_zone = "PST", - message_body = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - } - ] -``` + -6. Install the Alerts: - 1. Navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/ActiveMQ/` and run `terraform init`. This will initialize Terraform and will download the required components. - 2. Run `terraform plan` to view the monitors which will be created/modified by Terraform. - 3. Run `terraform apply`. -7. Post Installation: If you haven’t enabled alerts and/or configured notifications through the Terraform procedure outlined above, we highly recommend enabling alerts of interest and configuring each enabled alert to send notifications to other users or services. 
This is detailed in Step 4 of [this document](/docs/alerts/monitors/create-monitor). - -There are limits to how many alerts can be enabled. See the [Alerts FAQ](/docs/alerts/monitors/monitor-faq). - - -## Installing the ActiveMQ App - -Locate and install the app you need from the **App Catalog**. If you want to see a preview of the dashboards included with the app before installing, click **Preview Dashboards**. - -1. From the **App Catalog**, search for and select the app. -2. Select the version of the service you're using and click **Add to Library**. -3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. - 2. **Data Source.** Choose **Enter a Custom Data Filter** and enter a custom ActiveMQ cluster filter. Examples: - * For all ActiveMQ clusters: `messaging_cluster=*` - * For a specific cluster: `messaging_cluster=activemq.dev.01`. - * Clusters within a specific environment: `messaging_cluster=activemq-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection). -4. **Advanced**. Select the **Location in Library** (the default is the Personal folder in the library), or click **New Folder** to add a new folder. -5. Click **Add to Library**. - -Once an app is installed, it will appear in your **Personal** folder, or another folder that you specified. From here, you can share it with your organization. - -Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. - - -## ActiveMQ Alerts - -Sumo Logic has provided out-of-the-box alerts available via[ Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the ActiveMQ database cluster is available and performing as expected. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Alert Type (Metrics/Logs) Alert Name Alert Description Trigger Type (Critical / Warning) Alert Condition Recover Condition
Metrics ActiveMQ - High CPU Usage This alert fires when CPU usage on a node in a ActiveMQ cluster is high. Critical > = 80 < 80
Metrics ActiveMQ - High Host Disk Usage This alert fires when there is high disk usage on a node in an ActiveMQ cluster. Critical > = 80 < 80
Metrics ActiveMQ - High Memory Usage This alert fires when memory usage on a node in an ActiveMQ cluster is high. Critical > = 80 < 80
Metrics ActiveMQ - High Number of File Descriptors in use. This alert fires when the percentage of file descriptors used by a node in an ActiveMQ cluster is high. Critical > = 80 < 80
Metrics ActiveMQ - High Storage Used This alert fires when there is storage usage on a node that is high in an ActiveMQ cluster. Critical > = 80 < 80
Metrics ActiveMQ - High Temp Usage This alert fires when there is high temp usage on a node in an ActiveMQ cluster. Critical > = 80 < 80
Logs ActiveMQ - Maximum Connection This alert fires when one node in ActiveMQ cluster exceeds the maximum allowed client connection limit. Critical > = 1 < 1
Metrics ActiveMQ - No Consumers on Queues This alert fires when an ActiveMQ queue has no consumers. Critical < 1 > = 1
Metrics ActiveMQ - No Consumers on Topics This alert fires when an ActiveMQ topic has no consumers. Critical < 1 > = 1
Logs ActiveMQ - Node Down This alert fires when a node in the ActiveMQ cluster is down. Critical > = 1 < 1
Metrics ActiveMQ - Too Many Connections This alert fires when there are too many connections to a node in an ActiveMQ cluster. Critical > = 1000 < 1000
Metrics ActiveMQ - Too Many Expired Messages on Queues This alert fires when there are too many expired messages on a queue in an ActiveMQ cluster. Critical > = 1000 < 1000
Metrics ActiveMQ - Too Many Expired Messages on Topics This alert fires when there are too many expired messages on a topic in an ActiveMQ cluster. Critical > = 1000 < 1000
Metrics ActiveMQ - Too Many Unacknowledged Messages This alert fires when there are too many unacknowledged messages on a node in an ActiveMQ cluster. Critical > = 1000 < 1000
+### ActiveMQ alerts

+| Alert Name | Alert Description and conditions | Alert Condition | Recover Condition |
+|:--|:--|:--|:--|
+| `ActiveMQ - High CPU Usage Alert` | This alert gets triggered when CPU usage on a node in an ActiveMQ cluster is high. | Count >= 80 | Count < 80 |
+| `ActiveMQ - High Memory Usage Alert` | This alert gets triggered when memory usage on a node in an ActiveMQ cluster is high. | Count >= 80 | Count < 80 |
+| `ActiveMQ - High Storage Used Alert` | This alert gets triggered when there is high store usage on a node in an ActiveMQ cluster. | Count >= 80 | Count < 80 |
+| `ActiveMQ - Maximum Connection Alert` | This alert gets triggered when a node in an ActiveMQ cluster exceeds the maximum allowed client connection limit. | Count >= 1 | Count < 1 |
+| `ActiveMQ - No Consumers on Queues Alert` | This alert gets triggered when an ActiveMQ queue has no consumers. | Count < 1 | Count >= 1 |
+| `ActiveMQ - Node Down Alert` | This alert gets triggered when a node in the ActiveMQ cluster is down. | Count >= 1 | Count < 1 |
+| `ActiveMQ - Too Many Connections Alert` | This alert gets triggered when there are too many connections to a node in an ActiveMQ cluster. | Count >= 1000 | Count < 1000 |

## Viewing the ActiveMQ Dashboards

From 6b5cc5efe44a2dd99c635d3e8c74300c9ab5cd86 Mon Sep 17 00:00:00 2001 From: Apoorv Kudesia Date: Wed, 29 Jan 2025 16:52:46 +0530 Subject: [PATCH 02/10] SUMO-251067 | Apoorv | Fix.
PR comments --- docs/integrations/containers-orchestration/activemq.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index 2b3b595a5f..a5e86ab4b6 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -55,7 +55,7 @@ Configuring log and metric collection for the ActiveMQ App includes the followin ### Step 1: Fields in Sumo Logic -Following fields will be created as part of app installation process, For information on setting up fields, see [Sumo Logic Fields](https://help.sumologic.com/docs/manage/fields/) +Following [fields](https://help.sumologic.com/docs/manage/fields/) will be created as part of app installation process. Date: Thu, 30 Jan 2025 14:53:00 +0530 Subject: [PATCH 03/10] SUMO-251067 | Apoorv | Add. Kafka docs updation --- .../containers-orchestration/activemq.md | 23 +-- .../containers-orchestration/kafka.md | 144 ++---------------- 2 files changed, 20 insertions(+), 147 deletions(-) diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index a5e86ab4b6..1f2fd76e21 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -55,35 +55,22 @@ Configuring log and metric collection for the ActiveMQ App includes the followin ### Step 1: Fields in Sumo Logic -Following [fields](https://help.sumologic.com/docs/manage/fields/) will be created as part of app installation process. 
-
+The following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as part of the app installation process:
-
* `pod_labels_component`
* `pod_labels_environment`
* `pod_labels_messaging_system`
* `pod_labels_messaging_cluster`

-
-
-If you're using ActiveMQ in a non-Kubernetes environment, then these fields will be created:
+If you're using ActiveMQ in a non-Kubernetes environment, these additional fields will also be created automatically as part of the app installation process:
* `component`
* `environment`
* `messaging_system`
* `messaging_cluster`
* `pod`

-
-
+For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields).

### Step 2: Configure ActiveMQ Logs and Metrics Collection

@@ -269,7 +256,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
-3. **FERs to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we will have a Field Extraction Rule automatically created for Messaging Application Components named as **App Observability - Messaging**
+3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityMessagingActiveMQFER** will be created automatically

@@ -440,7 +427,7 @@ At this point, ActiveMQ logs should start flowing into Sumo Logic.

import CreateMonitors from '../../reuse/apps/create-monitors.md';

-
+

10.
There are limits to how many alerts can be enabled ### ActiveMQ alerts diff --git a/docs/integrations/containers-orchestration/kafka.md b/docs/integrations/containers-orchestration/kafka.md index 49aa1b000c..ab0739ceea 100644 --- a/docs/integrations/containers-orchestration/kafka.md +++ b/docs/integrations/containers-orchestration/kafka.md @@ -67,38 +67,24 @@ messaging_cluster=* messaging_system="kafka" \ This section provides instructions for configuring log and metric collection for the Sumo Logic App for Kafka. -### Configure Fields in Sumo Logic +### Fields in Sumo Logic -Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). +Following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of app installation process: - - - - -If you're using Kafka in a Kubernetes environment, create the fields: * `pod_labels_component` * `pod_labels_environment` * `pod_labels_messaging_system` * `pod_labels_messaging_cluster` - - -If you're using Kafka in a non-Kubernetes environment, create the fields: +If you're using ActiveMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: * `component` * `environment` * `messaging_system` * `messaging_cluster` * `pod` - - +For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). ### Configure Collection for Kafka @@ -230,30 +216,7 @@ This section explains the steps to collect Kafka logs from a Kubernetes environm kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **Add an FER to normalize the fields in Kubernetes environments**. 
Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Messaging Application Components. To do so: - 1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**.
- 2. Click the **+ Add** button on the top right of the table.
- 3. The **Add Field Extraction Rule** form will appear. Enter the following options:
- * **Rule Name**. Enter the name as **App Component Observability - Messaging.**
- * **Applied At**. Choose Ingest Time
- * **Scope**. Select Specific Data
- * Scope: Enter the following keyword search expression:
- ```sql
- pod_labels_environment=* pod_labels_component=messaging
- pod_labels_messaging_system=kafka pod_labels_messaging_cluster=*
- ```
- * **Parse Expression**. Enter the following parse expression:
- ```sql
- if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment
- | pod_labels_component as component
- | pod_labels_messaging_system as messaging_system
- | pod_labels_messaging_cluster as messaging_cluster
- ```
- 4. Click **Save** to create the rule.
- 5. Verify logs are flowing into Sumo Logic by running the following logs query:
- ```sql
- component="messaging" and messaging_system="kafka"
- ```
+3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityMessagingKafkaFER** will be created automatically
@@ -390,93 +353,6 @@ At this point, Kafka metrics and logs should start flowing into Sumo Logic.
-## Installing Kafka Alerts - -This section and below provide instructions for installing the Sumo App and Alerts for Kafka and descriptions of each of the app dashboards. These instructions assume you have already set up the collection as described in [Collect Logs and Metrics for Kafka](#collecting-logs-and-metrics-for-kafka). - -#### Pre-Packaged Alerts - -Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Kafka cluster is available and performing as expected. These alerts are built based on metrics datasets and have preset thresholds based on industry best practices and recommendations. See [Kafka Alerts](#kafka-alerts) for more details. - -* To install these alerts, you need to have the Manage Monitors role capability. -* Alerts can be installed by either importing a JSON or a Terraform script. -* There are limits to how many alerts can be enabled - see the [Alerts FAQ](/docs/alerts/monitors/monitor-faq) for details. - - -### Method A: Importing a JSON file - -1. Download a[ JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/kubernetes/kubernetes.json) that describes the monitors. - 1. The [JSON](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/Kafka/Kafka_Alerts.json) contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all Kafka clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text `'messaging_system=kafka `with `'`. 
Custom filter examples: - * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=Kafka-prod.01` - * For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be: `messaging_cluster=Kafka-prod*` - * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `messaging_cluster=Kafka-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection) - 2. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. - 3. Click **Add** - 4. Click Import to import monitors from the JSON above. - -The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the Kafka folder under Monitors to configure them. See [this](/docs/alerts/monitors) document to enable monitors. To send notifications to teams or connections, see the instructions detailed in Step 4 of this [document](/docs/alerts/monitors/create-monitor). - -### Method B: Using a Terraform script - -1. Generate an access key and access ID for a user that has the Manage Monitors role capability in Sumo Logic using instructions in [Access Keys](/docs/manage/security/access-keys). Identify which deployment your Sumo Logic account is in using [this link](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security). -2. [Download and install Terraform 0.13](https://www.terraform.io/downloads.html) or later. -3. Download the Sumo Logic Terraform package for Kafka alerts. The alerts package is available in the Sumo Logic [GitHub repository](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/tree/main/monitor_packages/Kafka). You can either download it through the “git clone” command or as a zip file. -4. Alert Configuration. After the package has been extracted, navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/Kafka`. - 1. Edit the `monitor.auto.tfvars` file and add the Sumo Logic Access Key, Access Id and Deployment from Step 1. - ```bash - access_id = "" - access_key = "" - environment = "" - ``` - 2. The Terraform script installs the alerts without any scope filters, if you would like to restrict the alerts to specific clusters or environments, update the variable `’kafka_data_source’`. 
Custom filter examples: - * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=Kafka-prod.01` - * For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be: `messaging_cluster=Kafka-prod*` - * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `messaging_cluster=Kafka-1` and `environment=prod`. This assumes you have set the optional environment tag while configuring collection. - -All monitors are disabled by default on installation, if you would like to enable all the monitors, set the parameter `monitors_disabled` to `false` in this file. - -By default, the monitors are configured in a monitor folder called “Kafka”, if you would like to change the name of the folder, update the monitor folder name in this file. - -5. To send email or connection notifications, modify the file `notifications.auto.tfvars` file and fill in the `connection_notifications` and `email_notifications` sections. See the examples for PagerDuty and email notifications below. See [this document](/docs/alerts/webhook-connections/set-up-webhook-connections) for creating payloads with other connection types. - -```bash title="Pagerduty Connection Example" -connection_notifications = [ - { - connection_type = "PagerDuty", - connection_id = "", - payload_override = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - }, - { - connection_type = "Webhook", - connection_id = "", - payload_override = "", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - } - ] -``` - -Replace `` with the connection id of the webhook connection. 
The webhook connection id can be retrieved by calling the[ Monitors API](https://api.sumologic.com/docs/#operation/listConnections). - -```bash title="Email Notifications Example" -email_notifications = [ - { - connection_type = "Email", - recipients = ["abc@example.com"], - subject = "Monitor Alert: {{TriggerType}} on {{Name}}", - time_zone = "PST", - message_body = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - } - ] -``` - -6. Install the Alerts - 1. Navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/Kafka/` and run terraform init. This will initialize Terraform and will download the required components. - 2. Run `terraform plan` to view the monitors which will be created/modified by Terraform. - 3. Run `terraform apply`. -7. **Post Installation.** If you haven’t enabled alerts and/or configured notifications through the Terraform procedure outlined above, we highly recommend enabling alerts of interest and configuring each enabled alert to send notifications to other people or services. This is detailed in Step 4 of[ this document](/docs/alerts/monitors/create-monitor). - ## Installing the Kafka App @@ -726,6 +602,16 @@ Use this dashboard to: ## Kafka Alerts +#### Pre-Packaged Alerts + +Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Kafka cluster is available and performing as expected. These alerts are built based on metrics datasets and have preset thresholds based on industry best practices and recommendations. + + +* There are limits to how many alerts can be enabled - see the [Alerts FAQ](/docs/alerts/monitors/monitor-faq) for details. +:::note permissions required +To install these alerts, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). 
+::: + | Alert Name | Alert Description and conditions | Alert Condition | Recover Condition | |:---------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|:-------------------| | Kafka - High Broker Disk Utilization | This alert fires when we detect that a disk on a broker node is more than 85% full. | `>=`85 | < 85 | From c46c29f5d597087c5abf3bd209dd52e5c838ed5d Mon Sep 17 00:00:00 2001 From: Apoorv Kudesia Date: Thu, 30 Jan 2025 16:28:22 +0530 Subject: [PATCH 04/10] SUMO-251067 | Apoorv | Add. RabbitMQ docs updation --- .../containers-orchestration/activemq.md | 4 + .../containers-orchestration/kafka.md | 2 +- .../containers-orchestration/rabbitmq.md | 156 +++--------------- 3 files changed, 26 insertions(+), 136 deletions(-) diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index 1f2fd76e21..e5944f948d 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -429,6 +429,10 @@ import CreateMonitors from '../../reuse/apps/create-monitors.md'; 10. There are limits to how many alerts can be enabled +:::note permissions required +To install these monitors, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). 
+::: + ### ActiveMQ alerts | Alert Name | Alert Description and conditions | Alert Condition | Recover Condition | diff --git a/docs/integrations/containers-orchestration/kafka.md b/docs/integrations/containers-orchestration/kafka.md index ab0739ceea..c0282def67 100644 --- a/docs/integrations/containers-orchestration/kafka.md +++ b/docs/integrations/containers-orchestration/kafka.md @@ -77,7 +77,7 @@ Following [fields](https://help.sumologic.com/docs/manage/fields/) will always b * `pod_labels_messaging_cluster` -If you're using ActiveMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: +If you're using Kafka in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: * `component` * `environment` * `messaging_system` diff --git a/docs/integrations/containers-orchestration/rabbitmq.md b/docs/integrations/containers-orchestration/rabbitmq.md index a53acbcc89..eda1a01122 100644 --- a/docs/integrations/containers-orchestration/rabbitmq.md +++ b/docs/integrations/containers-orchestration/rabbitmq.md @@ -51,38 +51,26 @@ Host: broker-1 Name: /var/log/rabbitmq/rabbit.log Category: logfile This section provides instructions for configuring log and metric collection for the Sumo Logic App for RabbitMQ. -### Step 1: Configure Fields in Sumo Logic +### Step 1: Fields in Sumo Logic -Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). 
+Following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of app installation process: - +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_messaging_system` +* `pod_labels_messaging_cluster` - -If you're using RabbitMQ in a Kubernetes environment, create the fields: -* pod_labels_component -* pod_labels_environment -* pod_labels_messaging_system -* pod_labels_messaging_cluster +If you're using RabbitMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: +* `component` +* `environment` +* `messaging_system` +* `messaging_cluster` +* `pod` - - +For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). -If you're using RabbitMQ in a non-Kubernetes environment, create the fields: -* component -* environment -* messaging_system -* messaging_cluster -* pod - - ### Step 2: Configure Collection for RabbitMQ @@ -211,26 +199,7 @@ For all other parameters see [this doc](/docs/send-data/collect-from-other-data- kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **Add an FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Messaging Application Components. To do so: - 1. Go to **Manage Data > Logs > Field Extraction Rules**. - 2. Click the + Add button on the top right of the table. - 3. The **Add Field Extraction Rule** form will appear: - 4. Enter the following options: - * **Rule Name**. Enter the name as **App Observability - Messaging**. - * **Applied At.** Choose **Ingest Time** - * **Scope**. 
Select **Specific Data** - * **Scope**: Enter the following keyword search expression: - ```sql - pod_labels_environment=* pod_labels_component=messaging pod_labels_messaging_system=* pod_labels_messaging_cluster=* - ``` - * **Parse Expression**.Enter the following parse expression: - ```sql - | if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment - | pod_labels_component as component - | pod_labels_messaging_system as messaging_system - | pod_labels_messaging_cluster as messaging_cluster - ``` - 5. Click **Save** to create the rule. +3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we will have a Field Extraction Rule automatically created with named as **AppObservabilityMessagingRabbitMQFER** @@ -361,98 +330,15 @@ At this point, RabbitMQ logs should start flowing into Sumo Logic. -## Installing Monitors - -These instructions assume you have already set up collection as described in the [Collect Logs and Metrics for RabbitMQ](#collecting-logs-and-metrics-for-rabbitmq). - -Sumo Logic has provided pre-packaged alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you proactively determine if a RabbitMQ cluster is available and performing as expected. These monitors are based on metric and log data and include pre-set thresholds that reflect industry best practices and recommendations. For more information about individual alerts, see [RabbitMQ Alerts](#rabbitmq-alerts). - -To install these monitors, you must have the **Manage Monitors** role capability. - -You can install monitors by importing a JSON file or using a Terraform script. - -There are limits to how many alerts can be enabled. For more information, see [Monitors](/docs/alerts/monitors/create-monitor) for details. - - -#### Method A: Install Monitors by importing a JSON file - -1. 
Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/RabbitMQ/rabbitmq.json) that describes the monitors. -2. The [JSON](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/RabbitMQ/rabbitmq.json) contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all RabbitMQ clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text `messaging_cluster=*` with ``. Custom filter examples: - * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=dev-rabbitmq01` - * For alerts applicable to all clusters that start with RabbitMQ-prod, your custom filter would be: `messaging_cluster=RabbitMQ-prod*` - * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `messaging_cluster=dev-rabbitmq01 AND environment=prod` (This assumes you have set the optional environment tag while configuring collection) -3. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. -4. Click **Add**. -5. Click **Import**. -6. On the **Import Content popup**, enter **RabbitMQ** in the Name field, paste the JSON into the popup, and click **Import**. -7. The monitors are created in a "RabbitMQ" folder. The monitors are disabled by default. See the [Monitors](/docs/alerts/monitors) topic for information about enabling monitors and configuring notifications or connections. - -#### Method B: Install Monitors using a Terraform script - -1. Generate an access key and access ID for a user that has the **Manage Monitors** role capability. For instructions see [Access Keys](/docs/manage/security/access-keys). -2. Download [Terraform 0.13](https://www.terraform.io/downloads.html) or later, and install it. -3. Download the Sumo Logic Terraform package for RabbitMQ monitors: The alerts package is available in the Sumo Logic GitHub [repository](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/tree/main/monitor_packages/RabbitMQ). You can either download it using the git clone command or as a zip file. -4. Alert Configuration: After extracting the package, navigate to the terraform-sumologic-sumo-logic-monitor/monitor_packages/RabbitMQ/ directory. - -Edit the rabbitmq.auto.tfvars file and add the Sumo Logic Access Key and Access ID from Step 1 and your Sumo Logic deployment. If you're not sure of your deployment, see [Sumo Logic Endpoints and Firewall Security](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security). -```bash -access_id   = "" -access_key  = "" -environment = "" -``` - -The Terraform script installs the alerts without any scope filters. If you would like to restrict the alerts to specific clusters or environments, update the `rabbitmq_data_source` variable. 
For example: -* To configure alerts for a specific cluster, set `rabbitmq_data_source` to something like: `messaging_cluster=rabbitmq.prod.01` -* To configure alerts for all clusters in an environment, set `rabbitmq_data_source` to something like: `environment=prod` -* To configure alerts for multiple clusters using a wildcard, set `rabbitmq_data_source` to something like: `messaging_cluster=rabbitmq-prod*` -* To configure alerts for a specific cluster within a specific environment, set `rabbitmq_data_source` to something like: `messaging_cluster=rabbitmq-1 and environment=prod`. This assumes you have configured and applied Fields as described in Step 1: Configure Fields in Sumo Logic of the Collect Logs and Metrics for RabbitMQ section. - -All monitors are disabled by default on installation. To enable all of the monitors, set the `monitors_disabled` parameter to `false`. - -By default, the monitors will be located in a "RabbitMQ" folder on the **Monitors** page. To change the name of the folder, update the monitor folder name in the folder variable in the rabbitmq.auto.tfvars file. - -5. If you want the alerts to send email or connection notifications, edit the `rabbitmq_notifications.auto.tfvars` file to populate the `connection_notifications` and `email_notifications` sections. Examples are provided below. - -In the variable definition below, replace `` with the connection ID of the Webhook connection. You can obtain the Webhook connection ID by calling the [Monitors API](https://api.sumologic.com/docs/#operation/listConnections). 
- -```bash title="Pagerduty connection example" -connection_notifications = [ - { - connection_type = "PagerDuty", - connection_id = "", - payload_override = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - }, - { - connection_type = "Webhook", - connection_id = "", - payload_override = "", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - } - ] -``` - -For information about overriding the payload for different connection types, see [Set Up Webhook Connections](/docs/alerts/webhook-connections/set-up-webhook-connections). - -```bash title="Email notifications example" -email_notifications = [ - { - connection_type = "Email", - recipients = ["abc@example.com"], - subject = "Monitor Alert: {{TriggerType}} on {{Name}}", - time_zone = "PST", - message_body = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}", - run_for_trigger_types = ["Critical", "ResolvedCritical"] - } - ] -``` - -6. Install Monitors: - 1. Navigate to the `terraform-sumologic-sumo-logic-monitor/monitor_packages/rabbitmq/` directory and run terraform init. This will initialize Terraform and download the required components. - 2. Run `terraform plan` to view the monitors that Terraform will create or modify. - 3. Run `terraform apply`. +## RabbitMQ Monitors +import CreateMonitors from '../../reuse/apps/create-monitors.md'; + + 10. There are limits to how many alerts can be enabled +:::note permissions required +To install these monitors, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). +::: ## Installing the RabbitMQ App This section demonstrates how to install the RabbitMQ App. 
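The `payload_override` values in the notification examples above are JSON documents embedded inside Terraform strings, so every inner quote has to be escaped. As a quick sketch (plain Python, reusing the field names from the PagerDuty example; the integration key is a placeholder), you can generate a correctly escaped value instead of hand-escaping it:

```python
import json

# PagerDuty payload from the example above; the service_key is a placeholder.
payload = {
    "service_key": "your_pagerduty_api_integration_key",
    "event_type": "trigger",
    "description": "Alert: Triggered {{TriggerType}} for Monitor {{Name}}",
    "client": "Sumo Logic",
    "client_url": "{{QueryUrl}}",
}

inner = json.dumps(payload)       # the JSON document the connection will send
tfvars_value = json.dumps(inner)  # escape it once more for the .tfvars file

print("payload_override =", tfvars_value)
```

The printed value can be pasted directly as the `payload_override` attribute in the `.tfvars` file.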
From 70414633225ee87e1f7c7c164744ffde4540c315 Mon Sep 17 00:00:00 2001 From: Apoorv Kudesia Date: Thu, 30 Jan 2025 18:27:29 +0530 Subject: [PATCH 05/10] SUMO-251067 | Apoorv | Add. Github, Gitlab and Bitbucket docs updation --- docs/integrations/app-development/bitbucket.md | 4 +--- docs/integrations/app-development/github.md | 5 +---- docs/integrations/app-development/gitlab.md | 4 +--- 3 files changed, 3 insertions(+), 10 deletions(-) diff --git a/docs/integrations/app-development/bitbucket.md b/docs/integrations/app-development/bitbucket.md index 277a64aefd..7641bdca57 100644 --- a/docs/integrations/app-development/bitbucket.md +++ b/docs/integrations/app-development/bitbucket.md @@ -144,10 +144,8 @@ For reference: This is how the [bitbucket-pipelines.yml](https://bitbucket.org/a ### Step 4: Enable Bitbucket Event-Key tagging at Sumo Logic -Sumo Logic needs to understand the event type for incoming events (for example, repo:push events). To enable this, the [X-Event-Key](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html#EventPayloads-HTTPheaders) event type needs to be enabled. To enable this, perform the following steps in the Sumo Logic console: +Sumo Logic needs to understand the event type for incoming events (for example, repo:push events). To enable this, the [X-Event-Key](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html#EventPayloads-HTTPheaders) event type is automatically added to the [Fields](/docs/manage/fields) during installation of the app. -1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Fields**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Fields**. You can also click the **Go To...** menu at the top of the screen and select **Fields**. -2. Add Field ‎**X-Event-Key**‎.
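Before wiring up the real webhook, you can exercise the tagging end to end by posting a sample event with the `X-Event-Key` header set, the same way Bitbucket does on delivery. A minimal sketch (Python standard library only; the source URL is a placeholder for the HTTP Source URL configured earlier, and the body is a trimmed-down stand-in for a real `repo:push` payload):

```python
import json
import urllib.request

# Placeholder -- substitute the HTTP Source URL configured earlier.
SUMO_HTTP_SOURCE_URL = "https://collectors.sumologic.com/receiver/v1/http/XXXX"

# Trimmed-down stand-in for a Bitbucket repo:push delivery body.
event = {"push": {"changes": []}, "repository": {"full_name": "myteam/demo-repo"}}

req = urllib.request.Request(
    SUMO_HTTP_SOURCE_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Event-Key": "repo:push",  # the header Sumo Logic tags events by
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the test event
```

With the field in place, the delivered event is tagged with the `repo:push` value of `X-Event-Key`.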
## Installing the Bitbucket App diff --git a/docs/integrations/app-development/github.md b/docs/integrations/app-development/github.md index 3b6ce86e1c..d7df402cf4 100644 --- a/docs/integrations/app-development/github.md +++ b/docs/integrations/app-development/github.md @@ -158,10 +158,7 @@ To configure a GitHub Webhook: ### Enable GitHub Event tagging at Sumo Logic -Sumo Logic needs to understand the event type for incoming events. To enable this, the [x-github-event](https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads) event type needs to be enabled. To enable this, perform the following steps in the Sumo Logic console: - -1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Fields**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Fields**. You can also click the **Go To...** menu at the top of the screen and select **Fields**. -2. Add Field ‎**x-github-event**‎.
## Installing the GitHub App diff --git a/docs/integrations/app-development/gitlab.md b/docs/integrations/app-development/gitlab.md index 95ed6bbaa6..10d43c94b8 100644 --- a/docs/integrations/app-development/gitlab.md +++ b/docs/integrations/app-development/gitlab.md @@ -93,10 +93,8 @@ Refer to the [GitLab Webhooks documentation](https://docs.gitlab.com/ee/user/pro ### Step 3: Enable GitLab Event tagging at Sumo Logic -Sumo Logic needs to understand the event type for incoming events. To enable this, the [x-gitlab-event](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html#push-events) event type needs to be enabled. To enable this, perform the following steps in the Sumo Logic console: +Sumo Logic needs to understand the event type for incoming events. To enable this, the [x-gitlab-event](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html#push-events) event type is automatically added to the [Fields](/docs/manage/fields) during installation of the app. - -1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Fields**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Fields**. You can also click the **Go To...** menu at the top of the screen and select **Fields**. -2. Add Field ‎**x-GitLab-event**‎. ## Installing the GitLab App From c64686265ad2a809a787a5b40577abb7daed025e Mon Sep 17 00:00:00 2001 From: John Pipkin Date: Thu, 30 Jan 2025 18:09:14 -0600 Subject: [PATCH 06/10] Updates from review --- .../containers-orchestration/activemq.md | 16 ++++++++-------- .../containers-orchestration/kafka.md | 17 +++++++++-------- .../containers-orchestration/rabbitmq.md | 14 ++++++-------- 3 files changed, 23 insertions(+), 24 deletions(-) diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index e5944f948d..f39699870a 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -55,7 +55,7 @@ Configuring log and metric collection for the ActiveMQ App includes the followin ### Step 1: Fields in Sumo Logic -Following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of app installation process: +The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: * `pod_labels_component` * `pod_labels_environment` @@ -70,7 +70,7 @@ If you're using ActiveMQ in a non-Kubernetes environment, these additional field * `messaging_cluster` * `pod` -For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). +For information on setting up fields, see [Fields](/docs/manage/fields). ### Step 2: Configure ActiveMQ Logs and Metrics Collection @@ -222,7 +222,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir messaging_system: "activemq" messaging_cluster: "activemq_on_k8s_CHANGE_ME" ``` - 2. 
Enter in values for the following parameters (marked in `CHANGE_ME` above): + 1. Enter in values for the following parameters (marked in `CHANGE_ME` above): * `environment`. This is the deployment environment where the ActiveMQ cluster identified by the value of **`servers`** resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. * `messaging_cluster`. Enter a name to identify this ActiveMQ cluster. This cluster name will be shown in the Sumo Logic dashboards. @@ -237,7 +237,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir * For all other parameters, see [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf#configuring-telegraf) for more parameters that can be configured in the Telegraf agent globally. 3. The Sumologic-Kubernetes-Collection will automatically capture the logs from stdout and will send the logs to Sumologic. For more information on deploying Sumologic-Kubernetes-Collection, please see [this page](/docs/integrations/containers-orchestration/kubernetes#collecting-metrics-and-logs-for-the-kubernetes-app). -2. **(Optional) Collecting ActiveMQ Logs from a Log File**. If your ActiveMQ chart/pod is writing its logs to log files, you can use a [sidecar](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator) to send log files to standard out. To do this: +1. **(Optional) Collecting ActiveMQ Logs from a Log File**. If your ActiveMQ chart/pod is writing its logs to log files, you can use a [sidecar](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator) to send log files to standard out. To do this: 1. Determine the location of the ActiveMQ log file on Kubernetes. This can be determined from the log4j.properties for your ActiveMQ cluster along with the mounts on the ActiveMQ pods. 2. 
Install the Sumo Logic [tailing sidecar operator](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator#deploy-tailing-sidecar-operator). 3. Add the following annotation in addition to the existing annotations. @@ -250,13 +250,13 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir annotations: tailing-sidecar: sidecarconfig;data:/opt/activemq/data/activemq.log ``` - 4. Make sure that the ActiveMQ pods are running and annotations are applied by using the command: + 1. Make sure that the ActiveMQ pods are running and annotations are applied by using the command: ```bash kubectl describe pod ``` - 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. + 1. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we will have a Field Extraction Rule automatically created with named as **AppObservabilityMessagingActiveMQFER** +1. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule is automatically created named **AppObservabilityMessagingActiveMQFER** @@ -427,7 +427,7 @@ At this point, ActiveMQ logs should start flowing into Sumo Logic. import CreateMonitors from '../../reuse/apps/create-monitors.md'; - 10. There are limits to how many alerts can be enabled +There are limits to how many alerts can be enabled :::note permissions required To install these monitors, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). 
diff --git a/docs/integrations/containers-orchestration/kafka.md b/docs/integrations/containers-orchestration/kafka.md index c0282def67..462c8f86fc 100644 --- a/docs/integrations/containers-orchestration/kafka.md +++ b/docs/integrations/containers-orchestration/kafka.md @@ -69,7 +69,7 @@ This section provides instructions for configuring log and metric collection for ### Fields in Sumo Logic -Following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of app installation process: +The following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of the app installation process: * `pod_labels_component` * `pod_labels_environment` @@ -77,14 +77,14 @@ Following [fields](https://help.sumologic.com/docs/manage/fields/) will always b * `pod_labels_messaging_cluster` -If you're using Kafka in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: +If you're using Kafka in a non-Kubernetes environment, these additional fields will get created automatically as a part of the app installation process: * `component` * `environment` * `messaging_system` * `messaging_cluster` * `pod` -For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). +For information on setting up fields, see [Fields](/docs/manage/fields). ### Configure Collection for Kafka @@ -216,7 +216,7 @@ This section explains the steps to collect Kafka logs from a Kubernetes environm kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. 
To normalize these for our app to work, we will have a Field Extraction Rule automatically created with named as **AppObservabilityMessagingKafkaFER** +3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule is automatically created named **AppObservabilityMessagingKafkaFER** @@ -602,12 +602,13 @@ Use this dashboard to: ## Kafka Alerts -#### Pre-Packaged Alerts +#### Pre-packaged alerts -Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Kafka cluster is available and performing as expected. These alerts are built based on metrics datasets and have preset thresholds based on industry best practices and recommendations. +Sumo Logic has provided out-of-the-box alerts available through [monitors](/docs/alerts/monitors) to help you quickly determine if the Kafka cluster is available and performing as expected. These alerts are built based on metrics datasets and have preset thresholds based on industry best practices and recommendations. - -* There are limits to how many alerts can be enabled - see the [Alerts FAQ](/docs/alerts/monitors/monitor-faq) for details. +:::note +There are limits to how many alerts can be enabled. See [Monitors FAQ](/docs/alerts/monitors/monitor-faq) for details. +::: :::note permissions required To install these alerts, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). 
::: diff --git a/docs/integrations/containers-orchestration/rabbitmq.md b/docs/integrations/containers-orchestration/rabbitmq.md index eda1a01122..0c4e332a67 100644 --- a/docs/integrations/containers-orchestration/rabbitmq.md +++ b/docs/integrations/containers-orchestration/rabbitmq.md @@ -53,15 +53,14 @@ This section provides instructions for configuring log and metric collection for ### Step 1: Fields in Sumo Logic -Following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of app installation process: +The following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of the app installation process: * `pod_labels_component` * `pod_labels_environment` * `pod_labels_messaging_system` * `pod_labels_messaging_cluster` - -If you're using RabbitMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: +If you're using RabbitMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of the app installation process: * `component` * `environment` * `messaging_system` @@ -70,9 +69,6 @@ If you're using RabbitMQ in a non-Kubernetes environment, these additional field For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). - - - ### Step 2: Configure Collection for RabbitMQ Sumo Logic supports collection of logs and metrics data from RabbitMQ in both Kubernetes and non-Kubernetes environments. @@ -199,7 +195,7 @@ For all other parameters see [this doc](/docs/send-data/collect-from-other-data- kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. 
To normalize these for our app to work, we will have a Field Extraction Rule automatically created with named as **AppObservabilityMessagingRabbitMQFER** +3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule is automatically created named **AppObservabilityMessagingRabbitMQFER** @@ -335,7 +331,9 @@ At this point, RabbitMQ logs should start flowing into Sumo Logic. import CreateMonitors from '../../reuse/apps/create-monitors.md'; - 10. There are limits to how many alerts can be enabled + +There are limits to how many alerts can be enabled. + :::note permissions required To install these monitors, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). ::: From 65d5b38cc6bd4835e63ea4c0b034b8da9dfec7f8 Mon Sep 17 00:00:00 2001 From: Apoorv Kudesia Date: Mon, 3 Feb 2025 19:09:07 +0530 Subject: [PATCH 07/10] SUMO-251067 | Apoorv | Add. PR review comments --- .../integrations/app-development/bitbucket.md | 6 +---- docs/integrations/app-development/github.md | 3 +-- docs/integrations/app-development/gitlab.md | 4 +--- .../containers-orchestration/activemq.md | 22 +++++++++---------- .../containers-orchestration/kafka.md | 20 ++++++++--------- .../containers-orchestration/rabbitmq.md | 22 +++++++++---------- 6 files changed, 32 insertions(+), 45 deletions(-) diff --git a/docs/integrations/app-development/bitbucket.md b/docs/integrations/app-development/bitbucket.md index 7641bdca57..671c98c78a 100644 --- a/docs/integrations/app-development/bitbucket.md +++ b/docs/integrations/app-development/bitbucket.md @@ -125,7 +125,6 @@ In this step, you configure a Hosted Collector to receive Webhook Events from Bi 8. **Triggers** - Click on Choose from a full list of triggers, and choose all triggers under Repository, Issue and Pull Request. 9. 
Click **Save** - ### Step 3: Configure the Bitbucket CI/CD Pipeline to Collect Deploy Events A Bitbucket pipe needs to be configured to send code deploy status to Sumo Logic. Add the following pipe code to the step section of your deployment part of the `bitbucket-pipelines.yml` file. Replace `SUMOLOGIC_HTTP_URL` with HTTP Source URL configured in Step 1. @@ -141,12 +140,9 @@ If you want to deployment events to multiple Sumo Logic orgs, include a `-pipe` For reference: This is how the [bitbucket-pipelines.yml](https://bitbucket.org/app-dev-sumo/backendservice/src/master/bitbucket-pipelines.yml) looks after adding deploy pipe code to our sample Bitbucket CI/CD pipeline. - ### Step 4: Enable Bitbucket Event-Key tagging at Sumo Logic -Sumo Logic needs to understand the event type for incoming events (for example, repo:push events). To enable this, the [X-Event-Key](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html#EventPayloads-HTTPheaders) event type is automatically added to the [Fields](/docs/manage/fields) during installation of the app. - - +To properly identify the event type for incoming events (for example, repo:push events), Sumo Logic automatically adds the [X-Event-Key](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html#EventPayloads-HTTPheaders) event type to the [Fields](/docs/manage/fields) during app installation. ## Installing the Bitbucket App diff --git a/docs/integrations/app-development/github.md b/docs/integrations/app-development/github.md index d7df402cf4..bca11d4538 100644 --- a/docs/integrations/app-development/github.md +++ b/docs/integrations/app-development/github.md @@ -158,8 +158,7 @@ To configure a GitHub Webhook: ### Enable GitHub Event tagging at Sumo Logic -Sumo Logic needs to understand the event type for incoming events. 
To enable this, the [x-github-event](https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads) event type is automatically added to the [Fields](/docs/manage/fields) during installation of the app. - +To properly identify the event type for incoming events, Sumo Logic automatically adds the [x-github-event](https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads) event type to the [Fields](/docs/manage/fields) during app installation. ## Installing the GitHub App diff --git a/docs/integrations/app-development/gitlab.md b/docs/integrations/app-development/gitlab.md index 10d43c94b8..5af4877e8c 100644 --- a/docs/integrations/app-development/gitlab.md +++ b/docs/integrations/app-development/gitlab.md @@ -93,9 +93,7 @@ Refer to the [GitLab Webhooks documentation](https://docs.gitlab.com/ee/user/pro ### Step 3: Enable GitLab Event tagging at Sumo Logic -Sumo Logic needs to understand the event type for incoming events. To enable this, the [x-gitlab-event](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html#push-events) event type is automatically added to the [Fields](/docs/manage/fields) during installation of the app. - - +To properly identify the event type for incoming events, Sumo Logic automatically adds the [x-gitlab-event](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html#push-events) event type to the [Fields](/docs/manage/fields) during app installation. ## Installing the GitLab App diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index f39699870a..0601317d5e 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -51,25 +51,23 @@ Host: broker-3-activemq Name: /opt/activemq/data/activemq.log Category:logfile This App has been tested with following ActiveMQ versions: * 5.16.2. 
-Configuring log and metric collection for the ActiveMQ App includes the following tasks: +### Step 1: Configure fields in Sumo Logic -### Step 1: Fields in Sumo Logic +Configuring log and metric collection for the ActiveMQ App includes the following tasks: The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: - -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_messaging_system` -* `pod_labels_messaging_cluster` - - -If you're using ActiveMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of app installation process: * `component` * `environment` * `messaging_system` * `messaging_cluster` * `pod` +If you're using ActiveMQ in a Kubernetes environment, the following additional fields will be automatically created as a part of the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_messaging_system` +* `pod_labels_messaging_cluster` + For information on setting up fields, see [Fields](/docs/manage/fields). @@ -250,7 +248,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir annotations: tailing-sidecar: sidecarconfig;data:/opt/activemq/data/activemq.log ``` - 1. Make sure that the ActiveMQ pods are running and annotations are applied by using the command: + 1. Ensure that the ActiveMQ pods are running and annotations are applied by using the command: ```bash kubectl describe pod ``` @@ -427,7 +425,7 @@ At this point, ActiveMQ logs should start flowing into Sumo Logic. import CreateMonitors from '../../reuse/apps/create-monitors.md'; -There are limits to how many alerts can be enabled +There are limits to how many alerts can be enabled. :::note permissions required To install these monitors, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). 
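The annotation step above ends with `kubectl describe pod` as a manual check. That check can also be scripted offline against the pod's annotation dump so a typo is caught before waiting on metrics. This is only a sketch: the annotation keys and the sample output of `kubectl get pod <activemq-pod> -o jsonpath='{.metadata.annotations}'` below follow the common Telegraf-operator convention and are assumptions, not values taken from this page.

```bash
#!/usr/bin/env bash
# Sample of what `kubectl get pod <activemq-pod> -o jsonpath='{.metadata.annotations}'`
# might return; replace with the real output from your cluster.
annotations='{"telegraf.influxdata.com/class":"sumologic-prometheus","prometheus.io/scrape":"true","prometheus.io/port":"9273"}'

# Fail fast if any expected key is missing (keys are illustrative).
for key in telegraf.influxdata.com/class prometheus.io/scrape prometheus.io/port; do
  if ! grep -q "\"$key\"" <<<"$annotations"; then
    echo "missing annotation: $key"
    exit 1
  fi
done
echo "all expected annotations present"
```

If a key is reported missing, re-apply the annotations from the collection steps and re-run the check.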
diff --git a/docs/integrations/containers-orchestration/kafka.md b/docs/integrations/containers-orchestration/kafka.md index 462c8f86fc..a1c7451587 100644 --- a/docs/integrations/containers-orchestration/kafka.md +++ b/docs/integrations/containers-orchestration/kafka.md @@ -67,23 +67,21 @@ messaging_cluster=* messaging_system="kafka" \ This section provides instructions for configuring log and metric collection for the Sumo Logic App for Kafka. -### Fields in Sumo Logic +### Step 1: Configure fields in Sumo Logic -The following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of the app installation process: - -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_messaging_system` -* `pod_labels_messaging_cluster` - - -If you're using Kafka in a non-Kubernetes environment, these additional fields will get created automatically as a part of the app installation process: +The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: * `component` * `environment` * `messaging_system` * `messaging_cluster` * `pod` +If you're using Kafka in a Kubernetes environment, the following additional fields will be automatically created as a part of the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_messaging_system` +* `pod_labels_messaging_cluster` + For information on setting up fields, see [Fields](/docs/manage/fields). ### Configure Collection for Kafka @@ -216,7 +214,7 @@ This section explains the steps to collect Kafka logs from a Kubernetes environm kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. 
To normalize these for our app to work, a Field Extraction Rule is automatically created named **AppObservabilityMessagingKafkaFER** +3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityMessagingKafkaFER** is automatically created. diff --git a/docs/integrations/containers-orchestration/rabbitmq.md b/docs/integrations/containers-orchestration/rabbitmq.md index 0c4e332a67..d94aa8787c 100644 --- a/docs/integrations/containers-orchestration/rabbitmq.md +++ b/docs/integrations/containers-orchestration/rabbitmq.md @@ -50,24 +50,22 @@ Host: broker-1 Name: /var/log/rabbitmq/rabbit.log Category: logfile This section provides instructions for configuring log and metric collection for the Sumo Logic App for RabbitMQ. +### Step 1: Configure fields in Sumo Logic -### Step 1: Fields in Sumo Logic - -The following [fields](https://help.sumologic.com/docs/manage/fields/) will always be created automatically as a part of the app installation process: - -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_messaging_system` -* `pod_labels_messaging_cluster` - -If you're using RabbitMQ in a non-Kubernetes environment, these additional fields will get created automatically as a part of the app installation process: +The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: * `component` * `environment` * `messaging_system` * `messaging_cluster` * `pod` -For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). 
+If you're using RabbitMQ in a Kubernetes environment, the following additional fields will be automatically created as a part of the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_messaging_system` +* `pod_labels_messaging_cluster` + +For information on setting up fields, see [Fields](/docs/manage/fields). ### Step 2: Configure Collection for RabbitMQ @@ -195,7 +193,7 @@ For all other parameters see [this doc](/docs/send-data/collect-from-other-data- kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule is automatically created named **AppObservabilityMessagingRabbitMQFER** +3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityMessagingRabbitMQFER** is automatically created. 
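The behavior of these auto-created FERs can be sketched outside Sumo Logic. The shell below mirrors the parse expression — copy each `pod_labels_*` field to its normalized name, with `environment` defaulting to empty when unset — using illustrative label values:

```bash
#!/usr/bin/env bash
# Illustrative pod labels as they arrive, prefixed by Kubernetes collection.
pod_labels_environment="prod"
pod_labels_component="messaging"
pod_labels_messaging_system="rabbitmq"
pod_labels_messaging_cluster="rabbitmq_on_k8s"

# Equivalent of:
#   if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment
environment="${pod_labels_environment:-}"
component="$pod_labels_component"
messaging_system="$pod_labels_messaging_system"
messaging_cluster="$pod_labels_messaging_cluster"

# Prints: environment=prod component=messaging messaging_system=rabbitmq messaging_cluster=rabbitmq_on_k8s
echo "environment=$environment component=$component messaging_system=$messaging_system messaging_cluster=$messaging_cluster"
```

The normalized names on the left are exactly the fields the app dashboards query, which is why the FER must exist before panels can populate in Kubernetes environments.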
From c8a7798310fb5d31bf886b4c40ebe2b2d8a5e6f2 Mon Sep 17 00:00:00 2001 From: Apoorv Kudesia Date: Wed, 2 Apr 2025 14:21:32 +0530 Subject: [PATCH 08/10] SUMO-254673 | Apoorv | Revert changes for container apps --- .../containers-orchestration/activemq.md | 318 ++++++++++++++++-- .../containers-orchestration/kafka.md | 157 +++++++-- .../containers-orchestration/rabbitmq.md | 160 +++++++-- 3 files changed, 561 insertions(+), 74 deletions(-) diff --git a/docs/integrations/containers-orchestration/activemq.md b/docs/integrations/containers-orchestration/activemq.md index 0601317d5e..9e81898cc8 100644 --- a/docs/integrations/containers-orchestration/activemq.md +++ b/docs/integrations/containers-orchestration/activemq.md @@ -51,24 +51,40 @@ Host: broker-3-activemq Name: /opt/activemq/data/activemq.log Category:logfile This App has been tested with following ActiveMQ versions: * 5.16.2. -### Step 1: Configure fields in Sumo Logic - Configuring log and metric collection for the ActiveMQ App includes the following tasks: -The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: -* `component` -* `environment` -* `messaging_system` -* `messaging_cluster` -* `pod` +### Step 1: Configure Fields in Sumo Logic -If you're using ActiveMQ in a Kubernetes environment, the following additional fields will be automatically created as a part of the app installation process: +Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). + + + + + +If you're using ActiveMQ in a Kubernetes environment, create the fields: * `pod_labels_component` * `pod_labels_environment` * `pod_labels_messaging_system` * `pod_labels_messaging_cluster` -For information on setting up fields, see [Fields](/docs/manage/fields). 
+ + + +If you're using ActiveMQ in a non-Kubernetes environment, create the fields: +* `component` +* `environment` +* `messaging_system` +* `messaging_cluster` +* `pod` + + + ### Step 2: Configure ActiveMQ Logs and Metrics Collection @@ -220,7 +236,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir messaging_system: "activemq" messaging_cluster: "activemq_on_k8s_CHANGE_ME" ``` - 1. Enter in values for the following parameters (marked in `CHANGE_ME` above): + 2. Enter in values for the following parameters (marked in `CHANGE_ME` above): * `environment`. This is the deployment environment where the ActiveMQ cluster identified by the value of **`servers`** resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. * `messaging_cluster`. Enter a name to identify this ActiveMQ cluster. This cluster name will be shown in the Sumo Logic dashboards. @@ -235,7 +251,7 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir * For all other parameters, see [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf#configuring-telegraf) for more parameters that can be configured in the Telegraf agent globally. 3. The Sumologic-Kubernetes-Collection will automatically capture the logs from stdout and will send the logs to Sumologic. For more information on deploying Sumologic-Kubernetes-Collection, please see [this page](/docs/integrations/containers-orchestration/kubernetes#collecting-metrics-and-logs-for-the-kubernetes-app). -1. **(Optional) Collecting ActiveMQ Logs from a Log File**. If your ActiveMQ chart/pod is writing its logs to log files, you can use a [sidecar](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator) to send log files to standard out. To do this: +2. **(Optional) Collecting ActiveMQ Logs from a Log File**. 
If your ActiveMQ chart/pod is writing its logs to log files, you can use a [sidecar](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator) to send log files to standard out. To do this: 1. Determine the location of the ActiveMQ log file on Kubernetes. This can be determined from the log4j.properties for your ActiveMQ cluster along with the mounts on the ActiveMQ pods. 2. Install the Sumo Logic [tailing sidecar operator](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator#deploy-tailing-sidecar-operator). 3. Add the following annotation in addition to the existing annotations. @@ -248,13 +264,32 @@ This section explains the steps to collect ActiveMQ logs from a Kubernetes envir annotations: tailing-sidecar: sidecarconfig;data:/opt/activemq/data/activemq.log ``` - 1. Ensure that the ActiveMQ pods are running and annotations are applied by using the command: + 4. Make sure that the ActiveMQ pods are running and annotations are applied by using the command: ```bash kubectl describe pod ``` - 1. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. + 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. + +3. **Add an FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Messaging Application Components. To do so: + 1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**. + 2. Click the + Add button on the top right of the table. + 3. The **Add Field Extraction Rule** form will appear. Enter the following options: + * **Rule Name**. Enter the name as **App Observability - Messaging**. + * **Applied At.** Choose **Ingest Time** + * **Scope**. Select **Specific Data** + * **Scope**: Enter the following keyword search expression: + ```sql + pod_labels_environment=* pod_labels_component=messaging + pod_labels_messaging_system=* pod_labels_messaging_cluster=* + ``` + * **Parse Expression**. Enter the following parse expression: + ```sql + if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment + | pod_labels_component as component + | pod_labels_messaging_system as messaging_system + | pod_labels_messaging_cluster as messaging_cluster + ``` -1. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule is automatically created named **AppObservabilityMessagingActiveMQFER**
@@ -420,28 +455,247 @@ At this point, ActiveMQ logs should start flowing into Sumo Logic. -## ActiveMQ Monitors +## Installing ActiveMQ Monitors -import CreateMonitors from '../../reuse/apps/create-monitors.md'; +This section and below contain instructions for installing Sumo Logic Monitors for ActiveMQ, the app, and descriptions of each of the app dashboards. These instructions assume you have already set up the collection as described in [Collect Logs and Metrics for the ActiveMQ](#collecting-logs-and-metrics-for-activemq). - -There are limits to how many alerts can be enabled. +* To install these alerts, you need to have the Manage Monitors role capability. +* Alerts can be installed by either importing a JSON file or a Terraform script. -:::note permissions required -To install these monitors, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting). +Sumo Logic provides out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you monitor your ActiveMQ clusters. These alerts are built based on metrics and logs datasets and include preset thresholds based on industry best practices and recommendations. For details, see [ActiveMQ Alerts](#activemq-alerts). + +:::note +There are limits to how many alerts can be enabled - please see the[ Alerts FAQ](/docs/alerts/monitors/monitor-faq) for details. ::: -### ActiveMQ alerts - -| Alert Name | Alert Description and conditions | Alert Condition | Recover Condition | -|:--|:--|:--|:--| -| `ActiveMQ - High CPU Usage Alert` | This alert gets triggered when CPU usage on a node in a ActiveMQ cluster is high. | Count >= 80 | Count < 80 | -| `ActiveMQ - High Memory Usage Alert` | This alert gets triggered when memory usage on a node in a ActiveMQ cluster is high. | Count >= 80 | Count < 80 | -| `ActiveMQ - High Storage Used Alert` | This alert gets triggered when there is high store usage on a node in a ActiveMQ cluster. 
| Count >= 80 | Count < 80 | -| `ActiveMQ - Maximum Connection Alert` | This alert gets triggered when one node in ActiveMQ cluster exceeds the maximum allowed client connection limit. | Count >= 1 | Count < 1 | -| `ActiveMQ - No Consumers on Queues Alert` | This alert gets triggered when a ActiveMQ queue has no consumers. | Count < 1 | Count >= 1 | -| `ActiveMQ - Node Down Alert` | This alert gets triggered when a node in the ActiveMQ cluster is down. | Count >= 1 | Count < 1 | -| `ActiveMQ - Too Many Connections Alert` | This alert gets triggered when there are too many connections to a node in a ActiveMQ cluster. | Count >= 1000 | Count < 1000 | +### Method 1: Install the monitors by importing a JSON file: + +1. Download the[ JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/ActiveMQ/activemq.json) that describes the monitors. +2. The[ JSON](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/ActiveMQ/activemq.json) contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all ActiveMQ clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text `messaging_system=activemq` with ``. Custom filter examples: + * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=activemq-prod.01` + * For alerts applicable to all clusters that start with `activemq-prod`: `messaging_cluster=activemq-prod*` + * For alerts applicable to a specific cluster within a production environment: `messaging_cluster=activemq-1` and `environment=prod`. This assumes you have set the optional environment tag while configuring collection. +3. Go to Manage Data > Alerts > Monitors. +4. Click **Add**. +5. 
Click Import and then copy-paste the above JSON to import monitors. + +The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the ActiveMQ folder under **Monitors** to configure them. See[ this](/docs/alerts/monitors) document to enable monitors to send notifications to teams or connections. Please see the instructions detailed in Step 4 of this [document](/docs/alerts/monitors/create-monitor). + + +### Method 2: Install the alerts using a Terraform script + +1. Generate an access key and access ID for a user that has the Manage Monitors role capability in Sumo Logic using instructions in [Access Keys](/docs/manage/security/access-keys). To find out which deployment your Sumo Logic account is in, see [Sumo Logic endpoints](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security). +2. [Download and install Terraform 0.13](https://www.terraform.io/downloads.html) or later. +3. Download the Sumo Logic Terraform package for ActiveMQ alerts: The alerts package is available in the Sumo Logic github[ repository](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/tree/main/monitor_packages/ActiveMQ). You can either download it through the “git clone” command or as a zip file. +4. Alert Configuration: After the package has been extracted, navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/ActiveMQ/`. + 1. Edit the `activemq.auto.tfvars` file and add the Sumo Logic Access Key, Access Id, and Deployment from Step 1. + ```bash + access_id = "" + access_key = "" + environment = "" + ``` + The Terraform script installs the alerts without any scope filters, if you would like to restrict the alerts to specific clusters or environments, update the variable `'activemq_data_source'`. 
Custom filter examples: + * A specific cluster `'messaging_cluster=activemq.prod.01'` + * All clusters in an environment `'environment=prod'` + * For alerts applicable to all clusters that start with activemq-prod, your custom filter would be: `'messaging_cluster=activemq-prod*'` + * For alerts applicable to a specific cluster within a production environment, your custom filter would be:`activemq_cluster=activemq-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection) + + All monitors are disabled by default on installation, if you would like to enable all the monitors, set the parameter monitors_disabled to false in this file. + + By default, the monitors are configured in a monitor **folder** called “**ActiveMQ**”, if you would like to change the name of the folder, update the monitor folder name in “folder” key at **activemq.auto.tfvars** file. + +5. If you would like the alerts to send email or connection notifications, modify the file **activemq_notifications.auto.tfvars** and populate `connection_notifications` and `email_notifications` as per below examples. +```bash title="Pagerduty Connection Example" +connection_notifications = [ + { + connection_type = "PagerDuty", + connection_id = "", + payload_override = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}", + run_for_trigger_types = ["Critical", "ResolvedCritical"] + }, + { + connection_type = "Webhook", + connection_id = "", + payload_override = "", + run_for_trigger_types = ["Critical", "ResolvedCritical"] + } + ] +``` + +Replace `` with the connection id of the webhook connection. The webhook connection id can be retrieved by calling the[ Monitors API](https://api.sumologic.com/docs/#operation/listConnections). 
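Because `payload_override` is JSON packed into a Terraform string, a stray quote or escape typically only surfaces at `terraform apply`. One way to sanity-check the payload locally first — the JSON below is the PagerDuty example above with its backslash escapes removed, and `python3` on the PATH is an assumption:

```bash
#!/usr/bin/env bash
payload='{"service_key": "your_pagerduty_api_integration_key","event_type": "trigger","description": "Alert: Triggered {{TriggerType}} for Monitor {{Name}}","client": "Sumo Logic","client_url": "{{QueryUrl}}"}'

# json.tool exits non-zero on malformed JSON, so this doubles as a pre-apply check.
if echo "$payload" | python3 -m json.tool > /dev/null; then
  echo "payload_override is valid JSON"
else
  echo "payload_override is NOT valid JSON"
  exit 1
fi
```

The same check applies to any webhook connection type, since every `payload_override` must parse as JSON after Terraform unescapes it.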
+ +For overriding payload for different connection types, see [Set Up Webhook Connections](/docs/alerts/webhook-connections/set-up-webhook-connections). + +```bash title="Email Notifications Example" +email_notifications = [ + { + connection_type = "Email", + recipients = ["abc@example.com"], + subject = "Monitor Alert: {{TriggerType}} on {{Name}}", + time_zone = "PST", + message_body = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}", + run_for_trigger_types = ["Critical", "ResolvedCritical"] + } + ] +``` + +6. Install the Alerts: + 1. Navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/ActiveMQ/` and run `terraform init`. This will initialize Terraform and will download the required components. + 2. Run `terraform plan` to view the monitors which will be created/modified by Terraform. + 3. Run `terraform apply`. +7. Post Installation: If you haven’t enabled alerts and/or configured notifications through the Terraform procedure outlined above, we highly recommend enabling alerts of interest and configuring each enabled alert to send notifications to other users or services. This is detailed in Step 4 of [this document](/docs/alerts/monitors/create-monitor). + +There are limits to how many alerts can be enabled. See the [Alerts FAQ](/docs/alerts/monitors/monitor-faq). + + +## Installing the ActiveMQ App + +Locate and install the app you need from the **App Catalog**. If you want to see a preview of the dashboards included with the app before installing, click **Preview Dashboards**. + +1. From the **App Catalog**, search for and select the app. +2. Select the version of the service you're using and click **Add to Library**. +3. To install the app, complete the following fields. + 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. + 2. **Data Source.** Choose **Enter a Custom Data Filter** and enter a custom ActiveMQ cluster filter. 
Examples: + * For all ActiveMQ clusters: `messaging_cluster=*` + * For a specific cluster: `messaging_cluster=activemq.dev.01`. + * Clusters within a specific environment: `messaging_cluster=activemq-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection). +4. **Advanced**. Select the **Location in Library** (the default is the Personal folder in the library), or click **New Folder** to add a new folder. +5. Click **Add to Library**. + +Once an app is installed, it will appear in your **Personal** folder, or another folder that you specified. From here, you can share it with your organization. + +Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. + + +## ActiveMQ Alerts + +Sumo Logic has provided out-of-the-box alerts available via[ Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the ActiveMQ database cluster is available and performing as expected. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Alert Type (Metrics/Logs) | Alert Name | Alert Description | Trigger Type (Critical / Warning) | Alert Condition | Recover Condition |
+|:--|:--|:--|:--|:--|:--|
+| Metrics | ActiveMQ - High CPU Usage | This alert fires when CPU usage on a node in an ActiveMQ cluster is high. | Critical | >= 80 | < 80 |
+| Metrics | ActiveMQ - High Host Disk Usage | This alert fires when there is high disk usage on a node in an ActiveMQ cluster. | Critical | >= 80 | < 80 |
+| Metrics | ActiveMQ - High Memory Usage | This alert fires when memory usage on a node in an ActiveMQ cluster is high. | Critical | >= 80 | < 80 |
+| Metrics | ActiveMQ - High Number of File Descriptors in use | This alert fires when the percentage of file descriptors used by a node in an ActiveMQ cluster is high. | Critical | >= 80 | < 80 |
+| Metrics | ActiveMQ - High Storage Used | This alert fires when storage usage on a node in an ActiveMQ cluster is high. | Critical | >= 80 | < 80 |
+| Metrics | ActiveMQ - High Temp Usage | This alert fires when there is high temp usage on a node in an ActiveMQ cluster. | Critical | >= 80 | < 80 |
+| Logs | ActiveMQ - Maximum Connection | This alert fires when a node in an ActiveMQ cluster exceeds the maximum allowed client connection limit. | Critical | >= 1 | < 1 |
+| Metrics | ActiveMQ - No Consumers on Queues | This alert fires when an ActiveMQ queue has no consumers. | Critical | < 1 | >= 1 |
+| Metrics | ActiveMQ - No Consumers on Topics | This alert fires when an ActiveMQ topic has no consumers. | Critical | < 1 | >= 1 |
+| Logs | ActiveMQ - Node Down | This alert fires when a node in the ActiveMQ cluster is down. | Critical | >= 1 | < 1 |
+| Metrics | ActiveMQ - Too Many Connections | This alert fires when there are too many connections to a node in an ActiveMQ cluster. | Critical | >= 1000 | < 1000 |
+| Metrics | ActiveMQ - Too Many Expired Messages on Queues | This alert fires when there are too many expired messages on a queue in an ActiveMQ cluster. | Critical | >= 1000 | < 1000 |
+| Metrics | ActiveMQ - Too Many Expired Messages on Topics | This alert fires when there are too many expired messages on a topic in an ActiveMQ cluster. | Critical | >= 1000 | < 1000 |
+| Metrics | ActiveMQ - Too Many Unacknowledged Messages | This alert fires when there are too many unacknowledged messages on a node in an ActiveMQ cluster. | Critical | >= 1000 | < 1000 |
+ + ## Viewing the ActiveMQ Dashboards diff --git a/docs/integrations/containers-orchestration/kafka.md b/docs/integrations/containers-orchestration/kafka.md index a1c7451587..49aa1b000c 100644 --- a/docs/integrations/containers-orchestration/kafka.md +++ b/docs/integrations/containers-orchestration/kafka.md @@ -67,22 +67,38 @@ messaging_cluster=* messaging_system="kafka" \ This section provides instructions for configuring log and metric collection for the Sumo Logic App for Kafka. -### Step 1: Configure fields in Sumo Logic +### Configure Fields in Sumo Logic -The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: -* `component` -* `environment` -* `messaging_system` -* `messaging_cluster` -* `pod` +Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). + + + + -If you're using Kafka in a Kubernetes environment, the following additional fields will be automatically created as a part of the app installation process: +If you're using Kafka in a Kubernetes environment, create the fields: * `pod_labels_component` * `pod_labels_environment` * `pod_labels_messaging_system` * `pod_labels_messaging_cluster` -For information on setting up fields, see [Fields](/docs/manage/fields). + + + +If you're using Kafka in a non-Kubernetes environment, create the fields: +* `component` +* `environment` +* `messaging_system` +* `messaging_cluster` +* `pod` + + + ### Configure Collection for Kafka @@ -214,7 +230,30 @@ This section explains the steps to collect Kafka logs from a Kubernetes environm kubectl describe pod ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. -3. 
**FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityMessagingKafkaFER** is automatically created. +3. **Add an FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Messaging Application Components. To do so: + 1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**. + 2. Click the **+ Add** button on the top right of the table. + 3. The **Add Field Extraction Rule** form will appear. Enter the following options: + * **Rule Name**. Enter the name as **App Component Observability - Messaging.** + * **Applied At**. Choose Ingest Time + * **Scope**. Select Specific Data + * Scope: Enter the following keyword search expression: + ```sql + pod_labels_environment=* pod_labels_component=messaging + pod_labels_messaging_system=kafka pod_labels_messaging_cluster=* + ``` + * **Parse Expression**. Enter the following parse expression: + ```sql + if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment + | pod_labels_component as component + | pod_labels_messaging_system as messaging_system + | pod_labels_messaging_cluster as messaging_cluster + ``` + 4. Click **Save** to create the rule. + 5. Verify logs are flowing into Sumo Logic by running the following logs query: + ```sql + component="messaging" and messaging_system="kafka" + ```
@@ -351,6 +390,93 @@ At this point, Kafka metrics and logs should start flowing into Sumo Logic.
 
 
+## Installing Kafka Alerts
+
+This section and the sections that follow provide instructions for installing the Sumo Logic app and alerts for Kafka, along with descriptions of each of the app dashboards. These instructions assume you have already set up collection as described in [Collect Logs and Metrics for Kafka](#collecting-logs-and-metrics-for-kafka).
+
+#### Pre-Packaged Alerts
+
+Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Kafka cluster is available and performing as expected. These alerts are built based on metrics datasets and have preset thresholds based on industry best practices and recommendations. See [Kafka Alerts](#kafka-alerts) for more details.
+
+* To install these alerts, you need to have the Manage Monitors role capability.
+* Alerts can be installed by either importing a JSON file or using a Terraform script.
+* There are limits to how many alerts can be enabled. See the [Alerts FAQ](/docs/alerts/monitors/monitor-faq) for details.
+
+
+### Method A: Importing a JSON file
+
+1. Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/Kafka/Kafka_Alerts.json) that describes the monitors.
+    1. The [JSON](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/Kafka/Kafka_Alerts.json) contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all Kafka clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text `messaging_system=kafka` with ``.
Custom filter examples: + * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=Kafka-prod.01` + * For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be: `messaging_cluster=Kafka-prod*` + * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `messaging_cluster=Kafka-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection) + 2. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**.
+    3. Click **Add**.
+    4. Click **Import** to import the monitors from the JSON above.
+
+The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the Kafka folder under Monitors to configure them. See [this document](/docs/alerts/monitors) to enable monitors. To send notifications to teams or connections, see the instructions detailed in Step 4 of this [document](/docs/alerts/monitors/create-monitor).
+
+### Method B: Using a Terraform script
+
+1. Generate an access key and access ID for a user that has the Manage Monitors role capability in Sumo Logic, using the instructions in [Access Keys](/docs/manage/security/access-keys). Identify which deployment your Sumo Logic account is in using [this link](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security).
+2. [Download and install Terraform 0.13](https://www.terraform.io/downloads.html) or later.
+3. Download the Sumo Logic Terraform package for Kafka alerts. The alerts package is available in the Sumo Logic [GitHub repository](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/tree/main/monitor_packages/Kafka). You can either download it through the `git clone` command or as a zip file.
+4. Alert Configuration. After the package has been extracted, navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/Kafka`.
+    1. Edit the `monitor.auto.tfvars` file and add the Sumo Logic access key, access ID, and deployment from Step 1.
+    ```bash
+    access_id   = ""
+    access_key  = ""
+    environment = ""
+    ```
+    2. The Terraform script installs the alerts without any scope filters. If you would like to restrict the alerts to specific clusters or environments, update the variable `kafka_data_source`.
Custom filter examples:
+    * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=Kafka-prod.01`
+    * For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be: `messaging_cluster=Kafka-prod*`
+    * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `messaging_cluster=Kafka-1` and `environment=prod`. This assumes you have set the optional environment tag while configuring collection.
+
+All monitors are disabled by default on installation. If you would like to enable all the monitors, set the parameter `monitors_disabled` to `false` in this file.
+
+By default, the monitors are configured in a monitor folder called "Kafka". If you would like to change the name of the folder, update the monitor folder name in this file.
+
+5. To send email or connection notifications, modify the `notifications.auto.tfvars` file and fill in the `connection_notifications` and `email_notifications` sections. See the examples for PagerDuty and email notifications below. See [this document](/docs/alerts/webhook-connections/set-up-webhook-connections) for creating payloads with other connection types.
+
+```bash title="Pagerduty Connection Example"
+connection_notifications = [
+    {
+      connection_type       = "PagerDuty",
+      connection_id         = "",
+      payload_override      = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}",
+      run_for_trigger_types = ["Critical", "ResolvedCritical"]
+    },
+    {
+      connection_type       = "Webhook",
+      connection_id         = "",
+      payload_override      = "",
+      run_for_trigger_types = ["Critical", "ResolvedCritical"]
+    }
+  ]
+```
+
+Replace `` with the connection ID of the webhook connection.
The webhook connection ID can be retrieved by calling the [Monitors API](https://api.sumologic.com/docs/#operation/listConnections).
+
+```bash title="Email Notifications Example"
+email_notifications = [
+    {
+      connection_type       = "Email",
+      recipients            = ["abc@example.com"],
+      subject               = "Monitor Alert: {{TriggerType}} on {{Name}}",
+      time_zone             = "PST",
+      message_body          = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}",
+      run_for_trigger_types = ["Critical", "ResolvedCritical"]
+    }
+  ]
+```
+
+6. Install the Alerts
+    1. Navigate to the package directory `terraform-sumologic-sumo-logic-monitor/monitor_packages/Kafka/` and run `terraform init`. This will initialize Terraform and download the required components.
+    2. Run `terraform plan` to view the monitors which will be created or modified by Terraform.
+    3. Run `terraform apply`.
+7. **Post Installation.** If you haven't enabled alerts or configured notifications through the Terraform procedure outlined above, we highly recommend enabling alerts of interest and configuring each enabled alert to send notifications to other people or services. This is detailed in Step 4 of [this document](/docs/alerts/monitors/create-monitor).
+
 ## Installing the Kafka App
 
 
@@ -600,17 +726,6 @@ Use this dashboard to:
 
 ## Kafka Alerts
 
-#### Pre-packaged alerts
-
-Sumo Logic has provided out-of-the-box alerts available through [monitors](/docs/alerts/monitors) to help you quickly determine if the Kafka cluster is available and performing as expected. These alerts are built based on metrics datasets and have preset thresholds based on industry best practices and recommendations.
-
-:::note
-There are limits to how many alerts can be enabled. See [Monitors FAQ](/docs/alerts/monitors/monitor-faq) for details.
-:::
-:::note permissions required
-To install these alerts, you need to have the [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting).
-::: - | Alert Name | Alert Description and conditions | Alert Condition | Recover Condition | |:---------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|:-------------------| | Kafka - High Broker Disk Utilization | This alert fires when we detect that a disk on a broker node is more than 85% full. | `>=`85 | < 85 | diff --git a/docs/integrations/containers-orchestration/rabbitmq.md b/docs/integrations/containers-orchestration/rabbitmq.md index d94aa8787c..a53acbcc89 100644 --- a/docs/integrations/containers-orchestration/rabbitmq.md +++ b/docs/integrations/containers-orchestration/rabbitmq.md @@ -50,22 +50,40 @@ Host: broker-1 Name: /var/log/rabbitmq/rabbit.log Category: logfile This section provides instructions for configuring log and metric collection for the Sumo Logic App for RabbitMQ. -### Step 1: Configure fields in Sumo Logic -The following [fields](/docs/manage/fields/) will always be created automatically as a part of the app installation process: -* `component` -* `environment` -* `messaging_system` -* `messaging_cluster` -* `pod` +### Step 1: Configure Fields in Sumo Logic -If you're using RabbitMQ in a Kubernetes environment, the following additional fields will be automatically created as a part of the app installation process: -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_messaging_system` -* `pod_labels_messaging_cluster` +Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see [Sumo Logic Fields](/docs/manage/fields). 
+
+
+
+
+
+If you're using RabbitMQ in a Kubernetes environment, create the fields:
+* `pod_labels_component`
+* `pod_labels_environment`
+* `pod_labels_messaging_system`
+* `pod_labels_messaging_cluster`
+
+
+
+If you're using RabbitMQ in a non-Kubernetes environment, create the fields:
+* `component`
+* `environment`
+* `messaging_system`
+* `messaging_cluster`
+* `pod`
+
+
+
 
-For information on setting up fields, see [Fields](/docs/manage/fields).
 
 ### Step 2: Configure Collection for RabbitMQ
 
@@ -193,7 +211,26 @@ For all other parameters see [this doc](/docs/send-data/collect-from-other-data-
       kubectl describe pod
    ```
5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
-3. **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityMessagingRabbitMQFER** is automatically created.
+3. **Add an FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these fields for the app to work, create a Field Extraction Rule for Messaging Application Components if one does not already exist. To do so:
+    1. Go to **Manage Data > Logs > Field Extraction Rules**.
+    2. Click the **+ Add** button on the top right of the table.
+    3. The **Add Field Extraction Rule** form will appear.
+    4. Enter the following options:
+        * **Rule Name**. Enter the name as **App Observability - Messaging**.
+        * **Applied At**. Choose **Ingest Time**.
+        * **Scope**. 
Select **Specific Data**.
+        * **Scope**: Enter the following keyword search expression:
+        ```sql
+        pod_labels_environment=* pod_labels_component=messaging pod_labels_messaging_system=* pod_labels_messaging_cluster=*
+        ```
+        * **Parse Expression**. Enter the following parse expression:
+        ```sql
+        if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment
+        | pod_labels_component as component
+        | pod_labels_messaging_system as messaging_system
+        | pod_labels_messaging_cluster as messaging_cluster
+        ```
+    5. Click **Save** to create the rule.
 
 
@@ -324,17 +361,98 @@ At this point, RabbitMQ logs should start flowing into Sumo Logic.
 
 
 
-## RabbitMQ Monitors
+## Installing Monitors
 
-import CreateMonitors from '../../reuse/apps/create-monitors.md';
+These instructions assume you have already set up collection as described in the [Collect Logs and Metrics for RabbitMQ](#collecting-logs-and-metrics-for-rabbitmq) section.
 
- 
+Sumo Logic has provided pre-packaged alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you proactively determine if a RabbitMQ cluster is available and performing as expected. These monitors are based on metric and log data and include pre-set thresholds that reflect industry best practices and recommendations. For more information about individual alerts, see [RabbitMQ Alerts](#rabbitmq-alerts).
+
+To install these monitors, you must have the **Manage Monitors** role capability.
+
+You can install monitors by importing a JSON file or using a Terraform script.
+
+There are limits to how many alerts can be enabled. See [Monitors](/docs/alerts/monitors/create-monitor) for details.
+
+
+#### Method A: Install Monitors by importing a JSON file
+
+1. Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/RabbitMQ/rabbitmq.json) that describes the monitors.
+2. 
The [JSON](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/RabbitMQ/rabbitmq.json) contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all RabbitMQ clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text `messaging_cluster=*` with ``. Custom filter examples: + * For alerts applicable only to a specific cluster, your custom filter would be: `messaging_cluster=dev-rabbitmq01` + * For alerts applicable to all clusters that start with RabbitMQ-prod, your custom filter would be: `messaging_cluster=RabbitMQ-prod*` + * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `messaging_cluster=dev-rabbitmq01 AND environment=prod` (This assumes you have set the optional environment tag while configuring collection) +3. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**.
+4. Click **Add**.
+5. Click **Import**.
+6. On the **Import Content** popup, enter **RabbitMQ** in the Name field, paste the JSON into the popup, and click **Import**.
+7. The monitors are created in a "RabbitMQ" folder. The monitors are disabled by default. See the [Monitors](/docs/alerts/monitors) topic for information about enabling monitors and configuring notifications or connections.
+
+#### Method B: Install Monitors using a Terraform script
+
+1. Generate an access key and access ID for a user that has the **Manage Monitors** role capability. For instructions, see [Access Keys](/docs/manage/security/access-keys).
+2. Download [Terraform 0.13](https://www.terraform.io/downloads.html) or later, and install it.
+3. Download the Sumo Logic Terraform package for RabbitMQ monitors. The alerts package is available in the Sumo Logic GitHub [repository](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/tree/main/monitor_packages/RabbitMQ). You can either download it using the `git clone` command or as a zip file.
+4. Alert Configuration: After extracting the package, navigate to the `terraform-sumologic-sumo-logic-monitor/monitor_packages/RabbitMQ/` directory.
+
+Edit the `rabbitmq.auto.tfvars` file and add the Sumo Logic access key and access ID from Step 1 and your Sumo Logic deployment. If you're not sure of your deployment, see [Sumo Logic Endpoints and Firewall Security](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security).
+```bash
+access_id   = ""
+access_key  = ""
+environment = ""
+```
+
+The Terraform script installs the alerts without any scope filters. If you would like to restrict the alerts to specific clusters or environments, update the `rabbitmq_data_source` variable.
For example:
+* To configure alerts for a specific cluster, set `rabbitmq_data_source` to something like: `messaging_cluster=rabbitmq.prod.01`
+* To configure alerts for all clusters in an environment, set `rabbitmq_data_source` to something like: `environment=prod`
+* To configure alerts for multiple clusters using a wildcard, set `rabbitmq_data_source` to something like: `messaging_cluster=rabbitmq-prod*`
+* To configure alerts for a specific cluster within a specific environment, set `rabbitmq_data_source` to something like: `messaging_cluster=rabbitmq-1 and environment=prod`. This assumes you have configured and applied Fields as described in Step 1: Configure Fields in Sumo Logic of the Collect Logs and Metrics for RabbitMQ section.
+
+All monitors are disabled by default on installation. To enable all of the monitors, set the `monitors_disabled` parameter to `false`.
+
+By default, the monitors will be located in a "RabbitMQ" folder on the **Monitors** page. To change the name of the folder, update the monitor folder name in the `folder` variable in the `rabbitmq.auto.tfvars` file.
+
+5. If you want the alerts to send email or connection notifications, edit the `rabbitmq_notifications.auto.tfvars` file to populate the `connection_notifications` and `email_notifications` sections. Examples are provided below.
+
+In the variable definition below, replace `` with the connection ID of the Webhook connection. You can obtain the Webhook connection ID by calling the [Monitors API](https://api.sumologic.com/docs/#operation/listConnections).
+
+```bash title="Pagerduty connection example"
+connection_notifications = [
+    {
+      connection_type       = "PagerDuty",
+      connection_id         = "",
+      payload_override      = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}",
+      run_for_trigger_types = ["Critical", "ResolvedCritical"]
+    },
+    {
+      connection_type       = "Webhook",
+      connection_id         = "",
+      payload_override      = "",
+      run_for_trigger_types = ["Critical", "ResolvedCritical"]
+    }
+  ]
+```
+
+For information about overriding the payload for different connection types, see [Set Up Webhook Connections](/docs/alerts/webhook-connections/set-up-webhook-connections).
+
+```bash title="Email notifications example"
+email_notifications = [
+    {
+      connection_type       = "Email",
+      recipients            = ["abc@example.com"],
+      subject               = "Monitor Alert: {{TriggerType}} on {{Name}}",
+      time_zone             = "PST",
+      message_body          = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}",
+      run_for_trigger_types = ["Critical", "ResolvedCritical"]
+    }
+  ]
+```
+
+6. Install Monitors:
+    1. Navigate to the `terraform-sumologic-sumo-logic-monitor/monitor_packages/rabbitmq/` directory and run `terraform init`. This will initialize Terraform and download the required components.
+    2. Run `terraform plan` to view the monitors that Terraform will create or modify.
+    3. Run `terraform apply`.
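Putting the Method B settings together, a completed `rabbitmq.auto.tfvars` might look like the sketch below. The access values are placeholders, and the scope line assumes you set the optional `environment` tag while configuring collection; adjust or drop any line that doesn't apply to your setup:

```bash title="Example rabbitmq.auto.tfvars (sketch)"
access_id   = "<YOUR_SUMO_ACCESS_ID>"    # placeholder — generated in Step 1
access_key  = "<YOUR_SUMO_ACCESS_KEY>"   # placeholder — generated in Step 1
environment = "us2"                      # your Sumo Logic deployment

# Optional scope: restrict the monitors to one production cluster
rabbitmq_data_source = "messaging_cluster=rabbitmq-1 and environment=prod"

monitors_disabled = false       # enable all monitors on install
folder            = "RabbitMQ"  # monitor folder name on the Monitors page
```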
From 390a16703878acc30639ad027004e700a79fe6a7 Mon Sep 17 00:00:00 2001
From: sumoanema
Date: Mon, 7 Apr 2025 15:50:59 +0530
Subject: Changes for reusable section update and updating reference in all the
 involved apps since they are not migrated to v2 apps

---
 docs/integrations/app-development/bitbucket.md  | 16 +++++-----------
 docs/integrations/app-development/github.md     | 13 +++++++------
 docs/integrations/app-development/gitlab.md     | 12 ++++++------
 docs/integrations/app-development/jira-cloud.md | 13 ++++---------
 docs/integrations/saas-cloud/opsgenie.md        | 12 ++++--------
 docs/integrations/saas-cloud/pagerduty-v2.md    | 14 ++++----------
 docs/integrations/saas-cloud/pagerduty-v3.md    | 14 ++++----------
 docs/reuse/apps/app-collection-option-1.md      | 10 +++++-----
 docs/reuse/apps/app-collection-option-2.md      | 10 +++++-----
 docs/reuse/apps/app-collection-option-3.md      |  8 ++++----
 docs/reuse/apps/app-install-v2.md               |  8 ++++----
 docs/reuse/apps/app-update.md                   |  8 ++++----
 12 files changed, 56 insertions(+), 82 deletions(-)

diff --git a/docs/integrations/app-development/bitbucket.md b/docs/integrations/app-development/bitbucket.md
index 671c98c78a..48db5eaa27 100644
--- a/docs/integrations/app-development/bitbucket.md
+++ b/docs/integrations/app-development/bitbucket.md
@@ -140,25 +140,19 @@ If you want to send deployment events to multiple Sumo Logic orgs, include a `-pipe`
 For reference: This is how the [bitbucket-pipelines.yml](https://bitbucket.org/app-dev-sumo/backendservice/src/master/bitbucket-pipelines.yml) looks after adding deploy pipe code to our sample Bitbucket CI/CD pipeline.
-### Step 4: Enable Bitbucket Event-Key tagging at Sumo Logic +### Step 4: Bitbucket Event-Key tagging at Sumo Logic To properly identify the event type for incoming events (for example, repo:push events), Sumo Logic automatically adds the [X-Event-Key](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html#EventPayloads-HTTPheaders) event type to the [Fields](/docs/manage/fields) during app installation. ## Installing the Bitbucket App -This section provides instructions for installing the Bitbucket app. - -import AppInstall from '../../reuse/apps/app-install.md'; - - +import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing Bitbucket Dashboards -This section provides descriptions and examples for each of the pre-configured app dashboards. - -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/app-development/github.md b/docs/integrations/app-development/github.md index bca11d4538..e6ae61bb6a 100644 --- a/docs/integrations/app-development/github.md +++ b/docs/integrations/app-development/github.md @@ -156,17 +156,15 @@ To configure a GitHub Webhook: 6. Click **Add Webhook**. 
-### Enable GitHub Event tagging at Sumo Logic
+### GitHub Event tagging at Sumo Logic
 
-To properly identify the event type for incoming events, Sumo Logic automatically adds the [x-github-event](https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads) event type to the [Fields](/docs/manage/fields) during app installation.
+To properly identify the event type for incoming events (for example, `push` events), Sumo Logic automatically adds the [x-github-event](https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads) event type to the [Fields](/docs/manage/fields) during app installation.
 
 ## Installing the GitHub App
 
-Now that you have set up collector GitHub, install the Sumo Logic App for GitHub to use the preconfigured searches and dashboards to analyze your data.
-import AppInstall from '../../reuse/apps/app-install.md';
-
- 
+import AppInstall2 from '../../reuse/apps/app-install-v2.md';
+ 
 
 #### Troubleshooting
 
@@ -178,6 +176,9 @@ If you are getting the following error after installing the app - `Field x-githu
 ## Viewing GitHub Dashboards
 
+import ViewDashboards from '../../reuse/apps/view-dashboards.md';
+
+ 
 ### Overview
 
 The **GitHub - Overview** dashboard provides an at-a-glance view of your GitHub issues, pull requests, and the commits over time.
 
diff --git a/docs/integrations/app-development/gitlab.md b/docs/integrations/app-development/gitlab.md
index 5af4877e8c..0f27d1842c 100644
--- a/docs/integrations/app-development/gitlab.md
+++ b/docs/integrations/app-development/gitlab.md
@@ -91,17 +91,14 @@ You can register webhooks for a [Group](https://docs.gitlab.com/ee/user/project/
 Refer to the [GitLab Webhooks documentation](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#configure-a-webhook) to understand more.
-### Step 3: Enable GitLab Event tagging at Sumo Logic +### Step 3: GitLab Event tagging at Sumo Logic To properly identify the event type for incoming events, Sumo Logic automatically adds the [x-gitlab-event](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html#push-events) event type to the [Fields](/docs/manage/fields) during app installation. ## Installing the GitLab App - -import AppInstall from '../../reuse/apps/app-install.md'; - - - +import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ### Troubleshooting @@ -124,6 +121,9 @@ Do the following to resolve: ## Viewing GitLab Dashboards +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + + ### Overview The **GitLab - Overview** dashboard provides users with a high-level view of events such as Issues, Merge Requests, Builds, Deployments, and pipelines. diff --git a/docs/integrations/app-development/jira-cloud.md b/docs/integrations/app-development/jira-cloud.md index 765d7d5ae8..05a33294df 100644 --- a/docs/integrations/app-development/jira-cloud.md +++ b/docs/integrations/app-development/jira-cloud.md @@ -169,18 +169,13 @@ When you configure the Webhook, enter the URL for the HTTP source you created in ## Installing the Jira Cloud App -This section demonstrates how to install the Jira Cloud App. - -import AppInstall from '../../reuse/apps/app-install.md'; - - +import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing Jira Cloud Dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). 
-::: - +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Issue Overview diff --git a/docs/integrations/saas-cloud/opsgenie.md b/docs/integrations/saas-cloud/opsgenie.md index 8f948f516c..df9aff89c7 100644 --- a/docs/integrations/saas-cloud/opsgenie.md +++ b/docs/integrations/saas-cloud/opsgenie.md @@ -86,17 +86,13 @@ To configure log collection for the Opsgenie App, do the following: ## Installing the Opsgenie App -This section provides instructions for installing the Opsgenie App, as well as examples of each of the app dashboards. - -import AppInstall from '../../reuse/apps/app-install.md'; - - +import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing OpsGenie Dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/saas-cloud/pagerduty-v2.md b/docs/integrations/saas-cloud/pagerduty-v2.md index 4d9ebdf38b..1ce218fd20 100644 --- a/docs/integrations/saas-cloud/pagerduty-v2.md +++ b/docs/integrations/saas-cloud/pagerduty-v2.md @@ -70,19 +70,13 @@ To create a PagerDuty V2 Webhook, do the following: ## Installing the PagerDuty V2 App -This section provides instructions for installing the Sumo App for PagerDuty V2. - -Now that you have set up a log and metric collection, you can install the Sumo Logic App for PagerDuty V2, and use its pre-configured searches and dashboards. 
- -import AppInstall from '../../reuse/apps/app-install.md'; - - +import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing PagerDuty v2 Dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/saas-cloud/pagerduty-v3.md b/docs/integrations/saas-cloud/pagerduty-v3.md index 24aaea2641..81cf334df0 100644 --- a/docs/integrations/saas-cloud/pagerduty-v3.md +++ b/docs/integrations/saas-cloud/pagerduty-v3.md @@ -93,19 +93,13 @@ In the next section, install the Sumo Logic App for PagerDuty V3. ## Installing the PagerDuty V3 App -This section provides instructions for installing the Sumo App for PagerDuty V3. - -Now that you have set up a log and metric collection, you can install the Sumo Logic App for PagerDuty V3, and use its pre-configured searches and dashboards. - -import AppInstall from '../../reuse/apps/app-install.md'; - - +import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing PagerDuty V3 Dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). 
-:::
+import ViewDashboards from '../../reuse/apps/view-dashboards.md';
+ 
 
 ### Overview
 
diff --git a/docs/reuse/apps/app-collection-option-1.md b/docs/reuse/apps/app-collection-option-1.md
index 5c60ad340d..14b1654ca6 100644
--- a/docs/reuse/apps/app-collection-option-1.md
+++ b/docs/reuse/apps/app-collection-option-1.md
@@ -1,5 +1,7 @@
 To set up collection and install the app, do the following:
-
+:::note
+ Next-Gen App: To install or update the app, you must be an account administrator or a user with the Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types that are part of the app.
+:::
 1. Select **App Catalog**.
 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it.
 1. Click **Install App**.
@@ -13,11 +15,9 @@ To set up collection and install the app, do the following:
    * ![green check circle.png](/img/reuse/green-check-circle.png) A green circle with a check mark is shown when the field exists and is enabled in the Fields table schema.
    * ![orange exclamation point.png](/img/reuse/orange-exclamation-point.png) An orange triangle with an exclamation point is shown when the field doesn't exist, or is disabled, in the Fields table schema. In this case, an option to automatically add or enable the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo that does not exist in the Fields schema or is disabled it is ignored, known as dropped.
 1. Click **Next**.
-1. Use the new [Cloud-to-Cloud Integration](/docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/) to configure the source.
+1. Configure the source as specified in the `Info` box above, ensuring all required fields are included.
 1. In the **Configure** section of your respective app, complete the following fields.
-    1. **Key**. Select either of these options for the data source.
- * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (for example, `_sourcecategory`), or specify another custom metadata field name (for example, `_collector`), along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** diff --git a/docs/reuse/apps/app-collection-option-2.md index 1c6ec4f5f2..d7d2ef413e 100644 --- a/docs/reuse/apps/app-collection-option-2.md +++ b/docs/reuse/apps/app-collection-option-2.md @@ -1,5 +1,7 @@ To setup source in the existing collector and install the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with the Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -8,11 +10,9 @@ To setup source in the existing collector and install the app, do the following: ::: 1. In the **Set Up Collection** section of your respective app, select **Use an existing Collector**. 1. From the **Select Collector** dropdown, select the collector that you want to setup your source with and click **Next**. -1. Use the new [Cloud-to-Cloud Integration](/docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/) to configure the source. +1. Configure the source as specified in the `Info` box above, ensuring all required fields are included. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source.
- * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (for example, `_sourcecategory`), or specify another custom metadata field name (for example, `_collector`), along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** diff --git a/docs/reuse/apps/app-collection-option-3.md index 72bcf58413..fcc615d915 100644 --- a/docs/reuse/apps/app-collection-option-3.md +++ b/docs/reuse/apps/app-collection-option-3.md @@ -1,5 +1,7 @@ To skip collection and only install the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with the Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -8,9 +10,7 @@ To skip collection and only install the app, do the following: ::: 1. In the **Set Up Collection** section of your respective app, select **Skip this step and use existing source** and click **Next**. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (for example, `_sourcecategory`), or specify another custom metadata field name (for example, `_collector`), along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** diff --git a/docs/reuse/apps/app-install-v2.md index 01aa4c7c20..1221a0f471 100644 --- a/docs/reuse/apps/app-install-v2.md +++ b/docs/reuse/apps/app-install-v2.md @@ -1,5 +1,7 @@ To install the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with the Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -8,9 +10,7 @@ To install the app, do the following: ::: 1. Click **Next** in the **Setup Data** section. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (for example, `_sourcecategory`), or specify another custom metadata field name (for example, `_collector`), along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section.
**Post-installation** diff --git a/docs/reuse/apps/app-update.md index b3add69683..90eeedf29b 100644 --- a/docs/reuse/apps/app-update.md +++ b/docs/reuse/apps/app-update.md @@ -1,14 +1,14 @@ To update the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with the Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the **Search Apps** field, search for and then select your app.
Optionally, you can identify apps that can be upgraded in the **Upgrade available** section. 1. To upgrade the app, select **Upgrade** from the **Manage** dropdown. 1. If the upgrade does not have any configuration or property changes, you will be redirected to the **Preview & Done** section. 1. If the upgrade has any configuration or property changes, you will be redirected to **Setup Data** page. 1. In the **Configure** section of your respective app, complete the following fields. - - **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom** and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (for example, `_sourcecategory`), or specify another custom metadata field name (for example, `_collector`), along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section.
**Post-update** From f5a1ff92b1de7f8dadeac2d9a78c6ee223fc170e Mon Sep 17 00:00:00 2001 From: sumoanema Date: Mon, 7 Apr 2025 16:06:05 +0530 Subject: [PATCH 10/10] syntax fix for importing common reuse files --- docs/integrations/app-development/bitbucket.md | 2 ++ docs/integrations/app-development/github.md | 2 ++ docs/integrations/app-development/gitlab.md | 2 ++ docs/integrations/app-development/jira-cloud.md | 2 ++ docs/integrations/saas-cloud/opsgenie.md | 2 ++ docs/integrations/saas-cloud/pagerduty-v2.md | 2 ++ docs/integrations/saas-cloud/pagerduty-v3.md | 2 ++ 7 files changed, 14 insertions(+) diff --git a/docs/integrations/app-development/bitbucket.md b/docs/integrations/app-development/bitbucket.md index 48db5eaa27..4e38d6e943 100644 --- a/docs/integrations/app-development/bitbucket.md +++ b/docs/integrations/app-development/bitbucket.md @@ -147,11 +147,13 @@ To properly identify the event type for incoming events (for example, repo:push ## Installing the Bitbucket App import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing Bitbucket Dashboards import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/app-development/github.md b/docs/integrations/app-development/github.md index e6ae61bb6a..2e6a8dc3a7 100644 --- a/docs/integrations/app-development/github.md +++ b/docs/integrations/app-development/github.md @@ -164,6 +164,7 @@ To properly identify the event type for incoming events (for example, repo:push import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + #### Troubleshooting @@ -177,6 +178,7 @@ If you are getting the following error after installing the app - `Field x-githu ## Viewing ​GitHub Dashboards import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/app-development/gitlab.md b/docs/integrations/app-development/gitlab.md index 0f27d1842c..b0f5ae6fe8 100644 --- a/docs/integrations/app-development/gitlab.md +++ 
b/docs/integrations/app-development/gitlab.md @@ -98,6 +98,7 @@ To properly identify the event type for incoming events, Sumo Logic automaticall ## Installing the GitLab App import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ### Troubleshooting @@ -122,6 +123,7 @@ Do the following to resolve: ## Viewing GitLab Dashboards import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/app-development/jira-cloud.md b/docs/integrations/app-development/jira-cloud.md index 05a33294df..986ad6eb0b 100644 --- a/docs/integrations/app-development/jira-cloud.md +++ b/docs/integrations/app-development/jira-cloud.md @@ -170,11 +170,13 @@ When you configure the Webhook, enter the URL for the HTTP source you created in ## Installing the Jira Cloud App import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing Jira Cloud Dashboards import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Issue Overview diff --git a/docs/integrations/saas-cloud/opsgenie.md b/docs/integrations/saas-cloud/opsgenie.md index df9aff89c7..1d39282e69 100644 --- a/docs/integrations/saas-cloud/opsgenie.md +++ b/docs/integrations/saas-cloud/opsgenie.md @@ -87,11 +87,13 @@ To configure log collection for the Opsgenie App, do the following: ## Installing the Opsgenie App import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing OpsGenie Dashboards import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/saas-cloud/pagerduty-v2.md b/docs/integrations/saas-cloud/pagerduty-v2.md index 1ce218fd20..e1ee60ca58 100644 --- a/docs/integrations/saas-cloud/pagerduty-v2.md +++ b/docs/integrations/saas-cloud/pagerduty-v2.md @@ -71,11 +71,13 @@ To create a PagerDuty V2 Webhook, do the following: ## Installing the PagerDuty V2 App import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing PagerDuty v2 Dashboards import ViewDashboards from 
'../../reuse/apps/view-dashboards.md'; + ### Overview diff --git a/docs/integrations/saas-cloud/pagerduty-v3.md b/docs/integrations/saas-cloud/pagerduty-v3.md index 81cf334df0..2492d34215 100644 --- a/docs/integrations/saas-cloud/pagerduty-v3.md +++ b/docs/integrations/saas-cloud/pagerduty-v3.md @@ -94,11 +94,13 @@ In the next section, install the Sumo Logic App for PagerDuty V3. ## Installing the PagerDuty V3 App import AppInstall2 from '../../reuse/apps/app-install-v2.md'; + ## Viewing PagerDuty V3 Dashboards import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + ### Overview
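Reviewer note, not part of the patch: the **Field Name**/**Field Value** pair introduced throughout these reuse files is a metadata scope for the app's queries, such as `_sourcecategory=prod/pagerduty` or a custom field like `_collector`. As a minimal illustrative sketch only (a hypothetical helper, not Sumo Logic source code or its query parser), the pair maps to a search scope clause roughly like this:

```python
# Hypothetical illustration of how a metadata Field Name/Field Value pair
# (e.g. _sourcecategory, or custom metadata such as _collector) could be
# composed into the scope clause of a log search. Assumed behavior, not
# taken from Sumo Logic code.

def build_scope(field_name: str, field_value: str) -> str:
    """Return a 'field=value' scope clause, quoting values that contain spaces."""
    if not field_name.startswith("_"):
        # Built-in metadata field names such as _sourcecategory and
        # _collector are conventionally prefixed with an underscore.
        raise ValueError(f"unexpected metadata field name: {field_name!r}")
    if " " in field_value:
        field_value = f'"{field_value}"'
    return f"{field_name}={field_value}"

print(build_scope("_sourcecategory", "prod/pagerduty"))  # _sourcecategory=prod/pagerduty
print(build_scope("_collector", "My Collector"))         # _collector="My Collector"
```

The sketch only makes the reviewer-facing point concrete: whichever metadata field the user selects during app setup, together with its value, is what narrows the app's searches to that user's data.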