From e5fd5013b9f055c29787a648f157549bd7a99a38 Mon Sep 17 00:00:00 2001 From: John Pipkin Date: Thu, 25 Jul 2024 11:07:18 -0500 Subject: [PATCH 1/2] Clean up 'Manage Data >' instances --- docs/alerts/webhook-connections/pagerduty.md | 4 ++-- docs/contributing/style-guide.md | 16 ++++++++-------- .../ingestion/cse-ingestion-best-practices.md | 17 ++++------------- .../containers-orchestration/kubernetes.md | 4 ++-- .../containers-orchestration/rabbitmq.md | 2 +- docs/integrations/databases/couchbase.md | 8 ++++---- docs/integrations/databases/mariadb.md | 4 ++-- docs/integrations/databases/memcached.md | 4 ++-- docs/integrations/databases/mongodb.md | 4 ++-- docs/integrations/databases/mysql.md | 4 ++-- docs/integrations/databases/redis.md | 8 ++++---- .../host-process-metrics.md | 6 +++--- docs/integrations/microsoft-azure/sql-server.md | 8 ++++---- docs/integrations/web-servers/iis-10.md | 2 +- docs/integrations/web-servers/nginx-ingress.md | 4 ++-- .../web-servers/nginx-plus-ingress.md | 2 +- docs/integrations/web-servers/nginx-plus.md | 2 +- docs/integrations/web-servers/nginx.md | 4 ++-- .../collect-ruby-on-rails-logs.md | 5 ++++- 19 files changed, 51 insertions(+), 57 deletions(-) diff --git a/docs/alerts/webhook-connections/pagerduty.md b/docs/alerts/webhook-connections/pagerduty.md index f99ada0328..266bd5db3c 100644 --- a/docs/alerts/webhook-connections/pagerduty.md +++ b/docs/alerts/webhook-connections/pagerduty.md @@ -104,8 +104,8 @@ The URL and supported payload are different based on the version of the PagerDut ### Events API v1 -1. Go to **Manage Data > Alerts > Connections**. -1. On the Connections page, click **Add**. +1. In the main Sumo Logic menu, select **Manage Data > Monitoring > Connections**. +1. On the Connections page, click **+**. 1. Click **PagerDuty**. 1. In the Create Connection dialog, enter the name of the Connection. 1. (Optional) Enter a **Description** for the Connection. 
diff --git a/docs/contributing/style-guide.md b/docs/contributing/style-guide.md index 2b18bea570..8040368600 100644 --- a/docs/contributing/style-guide.md +++ b/docs/contributing/style-guide.md @@ -1410,14 +1410,14 @@ See the following tabbed code examples: -Setup a Source in Sumo Logic: +Set up a Source in Sumo Logic: -Navigate to Collection management (Manage Data > Collection) -Use an existing Hosted Collector, or create a new one. -Next to the collector, select “Add Source”. -Select “AWS Kinesis Firehose for Logs” -Enter a Name to identify the source. -Enter a Source Category following the best practices found in “Good Source Category, Bad Source Category” +1. Navigate to Collection management. +1. Use an existing Hosted Collector, or create a new one. +1. Next to the collector, select **Add Source**. +1. Select **AWS Kinesis Firehose for Logs**. +1. Enter a **Name** to identify the source. +1. Enter a **Source Category** following the best practices found in “Good Source Category, Bad Source Category”. Deploy the Cloudformation Template to Create a Kinesis Firehose Delivery Stream: @@ -1426,7 +1426,7 @@ Deploy the Cloudformation Template to Create a Kinesis Firehose Delivery Stream: 1. Create a new stack using the CloudFormation template you downloaded. 1. Provide the URL you created from your Sumo source. 1. Select an S3 bucket to deliver failed logs, or create a new one. -1. Click next. +1. Click **Next**. Accept the IAM permissions, and create the stack. diff --git a/docs/cse/ingestion/cse-ingestion-best-practices.md b/docs/cse/ingestion/cse-ingestion-best-practices.md index f5cc10897c..c668451cbd 100644 --- a/docs/cse/ingestion/cse-ingestion-best-practices.md +++ b/docs/cse/ingestion/cse-ingestion-best-practices.md @@ -27,17 +27,11 @@ You can only send log data that resides in the [Continuous data tier](/docs/mana We recommend the following ingestion processes, starting with the most preferred: -1. **Follow an ingestion guide**. 
The [Ingestion Guides](/docs/cse/ingestion) section of this help site provides specific collection and ingestion recommendations for many common products and services. An ingestion guide describes the easiest way to get data from a particular product into Cloud SIEM. When you’re ready to start using Cloud SIEM to monitor a new product, if there’s a Cloud SIEM ingestion guide for it, we recommend using it. -   -1. **Use a Cloud-to-Cloud (C2C) connector**. If you don’t see an Ingestion Guide for your data source, check to see if there is a C2C connector. It’s an easy method, because if you configure your C2C source to send logs to Cloud SIEM, it automatically tags messages it sends to Cloud SIEM with fields that contain the mapping hints that Cloud SIEM requires.  - - Most C2C connectors have a **Forward to SIEM** option in the configuration UI. If a C2C connector lacks that option, you can achieve the same effect by assigning a field named `_siemforward`, set to *true*, to the connector. - - For information about what C2C sources are available, see Cloud-to-Cloud Integration Framework. -   +1. **Follow an ingestion guide**. The [Ingestion Guides](/docs/cse/ingestion) section of this help site provides specific collection and ingestion recommendations for many common products and services. An ingestion guide describes the easiest way to get data from a particular product into Cloud SIEM. When you’re ready to start using Cloud SIEM to monitor a new product, if there’s a Cloud SIEM ingestion guide for it, we recommend using it. +1. **Use a Cloud-to-Cloud (C2C) connector**. If you don’t see an ingestion guide for your data source, check whether there is a C2C connector. This method is easy because when you configure a C2C source to send logs to Cloud SIEM, it automatically tags the messages it sends with fields that contain the mapping hints that Cloud SIEM requires.

Most C2C connectors have a **Forward to SIEM** option in the configuration UI. If a C2C connector lacks that option, you can achieve the same effect by assigning a field named `_siemforward`, set to *true*, to the connector.

For information about what C2C sources are available, see Cloud-to-Cloud Integration Framework. 1. **Use a Sumo Logic Source and parser**. If there isn’t a C2C connector for your data source, your next best option is to use a Sumo Logic Source (running on an Installed Collector or a Hosted Collector, depending on the data source)—and a Sumo Logic parser, if we have one for the data source.  - To check if there’s a parser for your data source, go to the **Manage Data > Logs > Parsers** page in the Sumo Logic UI. If there is a parser for your data source, but you find it doesn’t completely meet your needs–for instance if the parser doesn’t support the particular log format you use–consider customizing the parser with a [local configuration](/docs/cse/schema/parser-editor#create-a-local-configuration-for-a-system-parser). If that’s not practical, you can submit a request for a new parser by filing a ticket at [https://support.sumologic.com](https://support.sumologic.com/). + Check if there’s a parser for your data source. In the main Sumo Logic menu, select **Manage Data > Logs > Parsers**. If there is a parser for your data source but it doesn’t completely meet your needs (for instance, if the parser doesn’t support the particular log format you use), consider customizing the parser with a [local configuration](/docs/cse/schema/parser-editor#create-a-local-configuration-for-a-system-parser). If that’s not practical, you can submit a request for a new parser by filing a ticket at [https://support.sumologic.com](https://support.sumologic.com/). When you forward logs to Cloud SIEM for parser processing, there are two bits of important configuration:   @@ -52,13 +46,10 @@ We recommend the following ingestion processes, starting with the most preferred ::: 2. 
Configure the source with the path to the appropriate parser, by assigning a field named `_parser`, whose value is the path to parser, for example: - ``` _parser=/Parsers/System/AWS/AWS Network Firewall ``` - :::note  - You can get the path to a parser on the **Manage Data > Logs > Parsers** page in Sumo Logic. Click the three-dot kebab menu in the row for a parser, and select **Copy Path**. - ::: + You can get the path to a parser on the **Parsers** page in Sumo Logic. Click the three-dot kebab menu in the row for a parser, and select **Copy Path**. 1. **Use a Sumo Logic Source and Cloud SIEM Ingest mapping**. This is the least recommended method, as you have to manually configure the mapping hints in an ingestion mapping. For more information, see [Configure a Sumo Logic Ingest Mapping](/docs/cse/ingestion/sumo-logic-ingest-mapping/). diff --git a/docs/integrations/containers-orchestration/kubernetes.md b/docs/integrations/containers-orchestration/kubernetes.md index ac072a0de7..f3648062b9 100644 --- a/docs/integrations/containers-orchestration/kubernetes.md +++ b/docs/integrations/containers-orchestration/kubernetes.md @@ -113,8 +113,8 @@ For details on the individual alerts, see [Kubernetes Alerts](/docs/observabilit 1. Download the [JSON file](https://raw.githubusercontent.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/main/monitor_packages/kubernetes/kubernetes.json) describing all the monitors. 2. The alerts should be restricted to specific clusters and/or namespaces to prevent the monitors hitting the cardinality limits. To limit the alerts, update the JSON file by replacing the text `$$kubernetes_data_source` with ``. For example: `cluster=k8s-prod.01`. -3. Go to **Manage Data > Alerts > Monitors**. -4. Click **Add Monitor**:
![add-monitor.png](/img/metrics/add-monitor.png) +3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. +4. Click **Add**. 5. Click **Import** to import monitors from the JSON above. :::note diff --git a/docs/integrations/containers-orchestration/rabbitmq.md b/docs/integrations/containers-orchestration/rabbitmq.md index 80cfa7ba9e..a35468371d 100644 --- a/docs/integrations/containers-orchestration/rabbitmq.md +++ b/docs/integrations/containers-orchestration/rabbitmq.md @@ -463,7 +463,7 @@ This section demonstrates how to install the RabbitMQ App. Version selection is not available for all apps. ::: 3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
 + 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. 2. **Data Source.** Choose **Enter a Custom Data Filter**, and enter a custom RabbitMQ cluster filter. Examples: 1. For all RabbitMQ clusters: `messaging_cluster=*` 2. For a specific cluster: `messaging_cluster=rabbitmq.dev.01` diff --git a/docs/integrations/databases/couchbase.md b/docs/integrations/databases/couchbase.md index 9db4720cd3..fe4a125cb5 100644 --- a/docs/integrations/databases/couchbase.md +++ b/docs/integrations/databases/couchbase.md @@ -200,8 +200,8 @@ This section explains the steps to collect Couchbase logs from a Kubernetes envi 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. 6. Verify logs in Sumo Logic. 3. **Add a FER to normalize the fields in Kubernetes environments**. This step is not needed if using application components solution terraform script. Labels created in Kubernetes environments automatically are prefixed with pod_labels. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Proxy Application Components. To do so: - 1. Go to Manage Data > Logs > Field Extraction Rules. - 2. Click the + Add button on the top right of the table. + 1. In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. + 2. Click the **+ Add Rule** button on the top right of the table. 3. The **Add Field Extraction Rule** form will appear: 4. Enter the following options: * **Rule Name**. Enter the name as **App Observability - Proxy**. @@ -401,7 +401,7 @@ There are limits to how many alerts can be enabled - see the [Alerts FAQ](/docs/ 1. For alerts applicable only to a specific cluster, your custom filter would be `'db_cluster=couchbase-standalone.01'`. 2. For alerts applicable to all cluster that start with couchbase-standalone, your custom filter would be,`db_cluster=couchbase-standalone*`. 3. 
For alerts applicable to a specific cluster within a production environment, your custom filter would be `db_cluster=couchbase-1` and `environment=standalone` (This assumes you have set the optional environment tag while configuring collection). -3. Go to Manage Data > Alerts > Monitors. +3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 4. Click **Add**: 5. Click **Import** and then copy-paste the above JSON to import monitors. 6. The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the Couchbase folder under **Monitors** to configure them. See [Monitor Settings](/docs/alerts/monitors/settings) to learn how to enable monitors to send notifications to teams or connections. See the instructions detailed in [Create a Monitor](/docs/alerts/monitors/create-monitor). @@ -485,7 +485,7 @@ Locate and install the app you need from the **App Catalog**. If you want to see Version selection is not available for all apps. ::: 3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
 + 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. 2. **Data Source.** * Choose **Enter a Custom Data Filter**, and enter a custom Couchbase cluster filter. Examples: 1. For all Couchbase clusters `db_cluster=*` diff --git a/docs/integrations/databases/mariadb.md b/docs/integrations/databases/mariadb.md index 48c06d4cb7..efad874a5f 100644 --- a/docs/integrations/databases/mariadb.md +++ b/docs/integrations/databases/mariadb.md @@ -448,7 +448,7 @@ Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic moni * For alerts applicable only to a specific cluster, your custom filter would be `db_cluster=mariadb-prod.01`. * For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be `db_cluster=mariadb-prod*`. * For alerts applicable to a specific cluster within a production environment, your custom filter would be `db_cluster=mariadb-1` and `environment=prod`. This assumes you have set the optional environment tag while configuring collection. -3. Go to Manage Data > Alerts > Monitors. +3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 4. Click **Add**. 5. Click Import and then copy-paste the above JSON to import monitors. 6. The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the MariaDB folder under **Monitors** to configure them. See [this](/docs/alerts/monitors) document to enable monitors to send notifications to teams or connections. See the instructions detailed in [Add a Monitor](/docs/alerts/monitors/create-monitor). @@ -526,7 +526,7 @@ Locate and install the app you need from the **App Catalog**. If you want to see Version selection is not available for all apps. ::: 3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
 + 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. 2. **Data Source.** * Choose **Enter a Custom Data Filter**, and enter a custom MariaDB cluster filter. Examples; 1. For all MariaDB clusters, `db_cluster=*`. diff --git a/docs/integrations/databases/memcached.md b/docs/integrations/databases/memcached.md index 9b51f5c9f9..576f5f3218 100644 --- a/docs/integrations/databases/memcached.md +++ b/docs/integrations/databases/memcached.md @@ -218,7 +218,7 @@ This section explains the steps to collect Memcached logs from a Kubernetes envi ``` 4. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. 3. **Add a FER to normalize the fields in Kubernetes environments**. This step is not needed if one is using application components solution terraform script. Labels created in Kubernetes environments automatically are prefixed with pod_labels. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Proxy Application Components. To do so: - 1. Go to **Manage Data > Logs > Field Extraction Rules**. + 1. In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. 2. Click the + Add button on the top right of the table. 3. The **Add Field Extraction Rule** form will appear: 4. Enter the following options: @@ -372,7 +372,7 @@ There are limits to how many alerts can be enabled. For more information, see [M * For alerts applicable only to a specific cluster, your custom filter would be: `db_cluster=dev-memcached-01` * For alerts applicable to all clusters that start with `memcached-prod`, your custom filter would be: `db_cluster=memcachedt-prod*` * For alerts applicable to specific clusters within a production environment, your custom filter would be: `db_cluster=dev-memcached-01` AND `environment=prod`. This assumes you have set the optional environment tag while configuring collection. 
-3. Go to **Manage Data > Alerts > Monitors**. +3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 4. Click **Add**. 4. Click **Import**. 6. On the **Import Content popup**, enter **Memcached** in the Name field, paste the JSON into the popup, and click **Import**. diff --git a/docs/integrations/databases/mongodb.md b/docs/integrations/databases/mongodb.md index ca047d0541..d742891716 100644 --- a/docs/integrations/databases/mongodb.md +++ b/docs/integrations/databases/mongodb.md @@ -235,7 +235,7 @@ Pivoting to Tracing data from Entity Inspector is possible only for “MongoDB a ``` 5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. 3. **Add an FER to normalize the fields in Kubernetes environments**. This step is not needed if one is using application components solution terraform script. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Database Application Components. To do so: - 1. Go to **Manage Data > Logs > Field Extraction Rules**. + 1. In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. 2. Click the + Add button on the top right of the table. 3. The **Add Field Extraction Rule** form will appear: 4. Enter the following options: @@ -427,7 +427,7 @@ There are limits to how many alerts can be enabled. For more information, see [M 1. Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/MongoDB/MongoDB.json) that describes the monitors. 2. Replace `$$mongodb_data_source` with a custom source filter. To configure alerts for a specific database cluster, use a filter like `db_system=mongodb` or `db_cluster=dev-mongodb`. To configure the alerts for all of your clusters, set `$$mongodb_data_source` to blank (`""`). -3. 
Go to **Manage Data > Alerts > Monitors**. +3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 4. Click **Add**. 5. Click **Import**. 6. On the **Import Content popup**, enter `MongoDB` in the Name field, paste in the JSON into the the popup, and click **Import**. diff --git a/docs/integrations/databases/mysql.md b/docs/integrations/databases/mysql.md index cdd76ee6c1..7b6cecf94d 100644 --- a/docs/integrations/databases/mysql.md +++ b/docs/integrations/databases/mysql.md @@ -346,7 +346,7 @@ Sumo Logic Kubernetes collection will automatically start collecting logs from t 2. **Add an FER to normalize the fields in Kubernetes environments**. This step is not needed if using application components solution terraform script. Labels created in Kubernetes environments are automatically prefixed with pod_labels. To normalize these for our app to work, we'll create a [Field Extraction Rule](/docs/manage/field-extractions/create-field-extraction-rule), Database Application Components, assuming it does not already exist: - 1. Go to **Manage Data > Logs > Field Extraction Rules**. + 1. In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. 2. Click the **+ Add**. 3. The **Add Field Extraction** pane appears. 4. **Rule Name.** Enter "App Observability - Database". @@ -574,7 +574,7 @@ There are limits to how many alerts can be enabled. For more information, see [M 1. Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/mysql/mysql.json) that describes the monitors. 2. Replace `$$mysql_data_source` with a custom source filter. To configure alerts for a specific database cluster, use a filter like `db_system=mysql` or `db_cluster=dev-mysql`. To configure the alerts for all of your clusters, set `$$mysql_data_source` to blank (`""`). -3. Go to **Manage Data > Alerts > Monitors**. +3. 
In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 4. Click **Add**. 5. Click **Import.** 6. On the **Import Content popup**, enter "MySQL" in the Name field, paste in the JSON into the the popup, and click **Import**. diff --git a/docs/integrations/databases/redis.md b/docs/integrations/databases/redis.md index 3fa28f8c3d..997c42bc67 100644 --- a/docs/integrations/databases/redis.md +++ b/docs/integrations/databases/redis.md @@ -454,9 +454,9 @@ There are limits for how many alerts can be enabled - please see the [Alerts FAQ * For alerts applicable only to a specific cluster, your custom filter would be: `db_cluster=redis-.prod.01`. * For alerts applicable to all clusters that start with `redis-prod`, your custom filter would be: `db_cluster=redis-prod*`. * For alerts applicable to a specific cluster within a production environment, your custom filter would be: `db_cluster=redis-1 and environment=prod`. This assumes you have set the optional environment tag while configuring collection. -2. Go to Manage Data > Alerts > Monitors. +2. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 3. Click **Add**. -4. Click Import to import monitors from the JSON above. +4. Click **Import** to import monitors from the JSON above. :::note Monitors are disabled by default. Once you have installed the alerts via this method, navigate to the Redis folder under **Monitors** to configure them. See [Monitor Settings](/docs/alerts/monitors/settings/#edit-disable-more-actions) to enable monitors. To send notifications to teams or connections, see the instructions detailed in Step 4 of [Create a Monitor](/docs/alerts/monitors/create-monitor). @@ -539,8 +539,8 @@ This section demonstrates how to install the Redis ULM app. Version selection is not available for all apps. ::: 3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
 - 2. **Data Source.**
 Choose **Enter a Custom Data Filter** and enter a custom Redis cluster filter. Examples: + 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. + 2. **Data Source.** Choose **Enter a Custom Data Filter** and enter a custom Redis cluster filter. Examples: * For all Redis clusters: `db_cluster=*` * For a specific cluster: `db_cluster=redis.dev.01` * Clusters within a specific environment: `db_cluster=redis-1 and environment=prod`. (This assumes you have set the optional environment tag while configuring collection). diff --git a/docs/integrations/hosts-operating-systems/host-process-metrics.md b/docs/integrations/hosts-operating-systems/host-process-metrics.md index c50b85687a..ec7e40245c 100644 --- a/docs/integrations/hosts-operating-systems/host-process-metrics.md +++ b/docs/integrations/hosts-operating-systems/host-process-metrics.md @@ -190,9 +190,9 @@ There are limits to how many alerts can be enabled - please see the [Alerts FAQ] * For alerts applicable only to a specific cluster of hosts, your custom filter could be: `'_sourceCategory=yourclustername/metrics'`. * For alerts applicable to all hosts that start with ec2hosts-prod, your custom filter could be: `'_sourceCategory=ec2hosts-prod*/metrics'`. * For alerts applicable to a specific cluster within a production environment, your custom filter could be: `'_sourceCategory=prod/yourclustername/metrics'` -2. Go to Manage Data > Alerts > Monitors. -3. Click Add. -4. Click Import to import monitors from the JSON above. +2. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. +3. Click **Add**. +4. Click **Import** to import monitors from the JSON above. The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the Host and Process Metrics folder under Monitors to configure them. 
See [this](/docs/alerts/monitors/settings) document to enable monitors, to configure each monitor, to send notifications to teams or connections, see the instructions detailed in [Create a Monitor](/docs/alerts/monitors/create-monitor). diff --git a/docs/integrations/microsoft-azure/sql-server.md b/docs/integrations/microsoft-azure/sql-server.md index 0c14a56f00..da6f134e8c 100644 --- a/docs/integrations/microsoft-azure/sql-server.md +++ b/docs/integrations/microsoft-azure/sql-server.md @@ -227,8 +227,8 @@ kubectl describe pod 2. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. 3. Verify logs in Sumo Logic. 4. Add a FER to normalize the fields in Kubernetes environments. Labels created in Kubernetes environments automatically are prefixed with pod_labels. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Proxy Application Components. To do so: - 1. Go to Manage Data > Logs > Field Extraction Rules. - 2. Click the + Add button on the top right of the table. + 1. In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. + 2. Click the **+ Add Rule** button on the top right of the table. 3. The **Add Field Extraction Rule** form will appear. 4. Enter the following options: * **Rule Name**. Enter the name as **App Observability - Proxy**. @@ -439,9 +439,9 @@ Custom filter examples: 1. For alerts applicable only to a specific cluster, your custom filter would be: ‘`db_cluster=sqlserver-prod.01`‘ 2. For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be: `db_cluster=sql-prod*` 3. For alerts applicable to a specific cluster within a production environment, your custom filter would be: `db_cluster=sql-1 `AND `environment=prod `(This assumes you have set the optional environment tag while configuring collection) -4. Go to Manage Data > Alerts > Monitors. +4. 
In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. 5. Click **Add**: -6. Click Import, then copy paste the above JSON to import monitors. +6. Click **Import**, then copy-paste the above JSON to import monitors. The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the MySQL folder under **Monitors** to configure them. See [this](/docs/alerts/monitors) document to enable monitors to send notifications to teams or connections. Please see the instructions detailed in Step 4 of this [document](/docs/alerts/monitors/create-monitor). diff --git a/docs/integrations/web-servers/iis-10.md b/docs/integrations/web-servers/iis-10.md index a63e2b0979..5106b5bf7c 100644 --- a/docs/integrations/web-servers/iis-10.md +++ b/docs/integrations/web-servers/iis-10.md @@ -526,7 +526,7 @@ Locate and install the app you need from the **App Catalog**. If you want to see Version selection is not available for all apps. ::: 3. To install the app, complete the following fields. - 1. **App Name**. You can retain the existing name, or enter a name of your choice for the app.
+    1. **App Name**. You can retain the existing name, or enter a name of your choice for the app.
     2. **Data Source**. Choose **Enter a Custom Data Filter**, and enter a custom IIS Server farm filter. Examples:
        * For all IIS Server farms, `webserver_farm=*`.
        * For a specific farm, `webserver_farm=iis.dev.01`.
diff --git a/docs/integrations/web-servers/nginx-ingress.md b/docs/integrations/web-servers/nginx-ingress.md
index 989c576e91..85ffa12ad0 100644
--- a/docs/integrations/web-servers/nginx-ingress.md
+++ b/docs/integrations/web-servers/nginx-ingress.md
@@ -106,7 +106,7 @@ There are limits to how many alerts can be enabled - for details, see the [Alert
    * For alerts applicable only to a specific farm, your custom filter would be: `webserver_farm=nginx-ingress.01`
    * For alerts applicable to all farms that start with `nginx-ingress`, your custom filter would be: `webserver_system=nginx-ingress*`
    * For alerts applicable to a specific farm within a production environment, your custom filter would be: `webserver_farm=nginx-ingress-1` AND `environment=dev` (This assumes you have set the optional environment tag while configuring collection)
-3. Go to Manage Data > Alerts > Monitors.
+3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
 4. Click **Add**.
 5. Click Import and then copy-paste the above JSON to import monitors.
@@ -189,7 +189,7 @@ Locate and install the app you need from the **App Catalog**. If you want to see
 1. From the **App Catalog**, search for and select the app.
 2. Select the version of the service you're using and click **Add to Library**.
 3. To install the app, complete the following fields.
-    1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
+    1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
     2. **Data Source.**
     3. Choose **Enter a Custom Data Filter**, and enter a custom Nginx Ingress farm filter. Examples:
        1. For all Nginx Ingress farms: `webserver_farm=*`.
diff --git a/docs/integrations/web-servers/nginx-plus-ingress.md b/docs/integrations/web-servers/nginx-plus-ingress.md
index 9d31d776b0..13d3e38176 100644
--- a/docs/integrations/web-servers/nginx-plus-ingress.md
+++ b/docs/integrations/web-servers/nginx-plus-ingress.md
@@ -125,7 +125,7 @@ Alerts can be installed by either importing them via a JSON or via a Terraform s
 1. Download [this JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/nginx-plus-ingress/nginxplusingress.json) describing all the monitors.
 2. Replace **$$logs_data_source** with logs data source.
    * For example, `_sourceCategory=Labs/NginxIngress/Logs`
-3. Go to Manage Data > Alerts > Monitors.
+3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
 4. Click **Add**.
 1. Click **Import** to import monitors from the JSON above.
diff --git a/docs/integrations/web-servers/nginx-plus.md b/docs/integrations/web-servers/nginx-plus.md
index cf3749dbbd..d404d96710 100644
--- a/docs/integrations/web-servers/nginx-plus.md
+++ b/docs/integrations/web-servers/nginx-plus.md
@@ -300,7 +300,7 @@ Alerts can be installed by either importing them via a JSON or via a Terraform s
 1. Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/nginx-plus/nginxplus.json) describing all the monitors.
 2. Replace **$$logs_data_source** and **$$metric_data_source** with logs and metrics data sources respectively. For example, `_sourceCategory=Labs/Nginx/Plus/Logs`.
-3. Go to Manage Data > Alerts > Monitors.
+3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
 4. Click **Add**.
 5. Click **Import** to import monitors from the JSON above.
diff --git a/docs/integrations/web-servers/nginx.md b/docs/integrations/web-servers/nginx.md
index 75042e28be..aa7ee91f5f 100644
--- a/docs/integrations/web-servers/nginx.md
+++ b/docs/integrations/web-servers/nginx.md
@@ -382,7 +382,7 @@ To view the full list, see [Nginx](#nginx-alerts). There are limits to how many
    * For alerts applicable only to a specific farm, your custom filter would be `webserver_farm=nginx-standalone.01`.
    * For alerts applicable to all farms that start with nginx-standalone, your custom filter would be `webserver_system=nginx-standalone*`.
    * For alerts applicable to a specific farm within a production environment, your custom filter would be, `webserver_farm=nginx-1` and `environment=standalone`. This assumes you have set the optional environment tag while configuring collection.
-3. Go to Manage Data > Alerts > Monitors.
+3. In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
 4. Click **Add**.
 5. Click Import and then copy-paste the above JSON to import monitors.
@@ -466,7 +466,7 @@ This section demonstrates how to install the Nginx app.
 Version selection is not available for all apps.
 :::
 3. To install the app, complete the following fields.
-    1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
+    1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
     2. **Data Source.** Choose **Enter a Custom Data Filter**, and enter a custom Nginx farm filter. Examples:
        1. For all Nginx farms, `webserver_farm=*`.
        2. For a specific farm, `webserver_farm=nginx.dev.01`.
diff --git a/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md b/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md
index 8bac397f1e..e32a0aa5d8 100644
--- a/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md
+++ b/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md
@@ -32,7 +32,10 @@ and the multiline setup.
 1. Click **Save**.
-1. In Sumo Logic, go to **Manage Data > Collection > Status** to verify that the logs are being ingested. If you do not see any data coming in after 2-3 minutes, check that your file path is correct, that the Sumo Logic Collector has read access to the logs, and that your time zone is configured correctly.
+1. Verify that the logs are being ingested. In the main Sumo Logic menu, select **Manage Data > Collection > Status**.
+
+
+1. If you do not see any data coming in after 2-3 minutes, check that your file path is correct, that the Sumo Logic Collector has read access to the logs, and that your time zone is configured correctly.
 ## Parsing RoR Logs

From e9e898977da9800149851e7ee242f66d234a6df7 Mon Sep 17 00:00:00 2001
From: "John Pipkin (Sumo Logic)"
Date: Fri, 26 Jul 2024 09:10:47 -0500
Subject: [PATCH 2/2] Update
 docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md

Co-authored-by: Kim (Sumo Logic) <56411016+kimsauce@users.noreply.github.com>
---
 .../collect-ruby-on-rails-logs.md                               | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md b/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md
index e32a0aa5d8..626e956f5b 100644
--- a/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md
+++ b/docs/send-data/collect-from-other-data-sources/collect-ruby-on-rails-logs.md
@@ -33,8 +33,6 @@ and the multiline setup.
 1. Click **Save**.
 1. Verify that the logs are being ingested. In the main Sumo Logic menu, select **Manage Data > Collection > Status**.
-
-
 1. If you do not see any data coming in after 2-3 minutes, check that your file path is correct, that the Sumo Logic Collector has read access to the logs, and that your time zone is configured correctly.

 ## Parsing RoR Logs