Merged
4 changes: 2 additions & 2 deletions docs/alerts/webhook-connections/pagerduty.md
Original file line number Diff line number Diff line change
Expand Up @@ -104,8 +104,8 @@ The URL and supported payload are different based on the version of the PagerDut

### Events API v1

1. Go to **Manage Data > Alerts > Connections**.
1. On the Connections page, click **Add**.
1. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Monitoring > Connections**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the top menu select **Configuration**, and then under **Monitoring** select **Connections**. You can also click the **Go To...** menu at the top of the screen and select **Connections**. Kanso-->
1. On the Connections page, click **+**.
1. Click **PagerDuty**.
1. In the Create Connection dialog, enter the name of the Connection.
1. (Optional) Enter a **Description** for the Connection.
Expand Down
16 changes: 8 additions & 8 deletions docs/contributing/style-guide.md
Original file line number Diff line number Diff line change
Expand Up @@ -1410,14 +1410,14 @@ See the following tabbed code examples:

<TabItem value="kinesis">

Setup a Source in Sumo Logic:
Set up a Source in Sumo Logic:

Navigate to Collection management (Manage Data > Collection)
Use an existing Hosted Collector, or create a new one.
Next to the collector, select Add Source.
Select AWS Kinesis Firehose for Logs
Enter a Name to identify the source.
Enter a Source Category following the best practices found in “Good Source Category, Bad Source Category”
1. Navigate to Collection management.
1. Use an existing Hosted Collector, or create a new one.
1. Next to the collector, select **Add Source**.
1. Select **AWS Kinesis Firehose for Logs**.
1. Enter a **Name** to identify the source.
1. Enter a **Source Category** following the best practices found in “Good Source Category, Bad Source Category”.

Deploy the CloudFormation Template to Create a Kinesis Firehose Delivery Stream:

Expand All @@ -1426,7 +1426,7 @@ Deploy the Cloudformation Template to Create a Kinesis Firehose Delivery Stream:
1. Create a new stack using the CloudFormation template you downloaded.
1. Provide the URL you created from your Sumo source.
1. Select an S3 bucket to deliver failed logs, or create a new one.
1. Click next.
1. Click **Next**.

Accept the IAM permissions, and create the stack.
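The console steps above can also be sketched programmatically. A minimal sketch, assuming boto3; the parameter keys (`SumoEndpointUrl`, `FailedDataS3Bucket`) are illustrative — check the downloaded template for the real ones:

```python
def build_stack_request(stack_name, template_url, sumo_url, s3_bucket):
    """Build the kwargs for a CloudFormation create_stack call.

    Accepting the IAM permissions in the console corresponds to
    passing the CAPABILITY_IAM capability here.
    """
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Parameters": [
            # Parameter keys are hypothetical; verify against the template.
            {"ParameterKey": "SumoEndpointUrl", "ParameterValue": sumo_url},
            {"ParameterKey": "FailedDataS3Bucket", "ParameterValue": s3_bucket},
        ],
        "Capabilities": ["CAPABILITY_IAM"],
    }

# Usage (requires AWS credentials):
#   import boto3
#   boto3.client("cloudformation").create_stack(**build_stack_request(...))
```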

Expand Down
17 changes: 4 additions & 13 deletions docs/cse/ingestion/cse-ingestion-best-practices.md
Original file line number Diff line number Diff line change
Expand Up @@ -27,17 +27,11 @@ You can only send log data that resides in the [Continuous data tier](/docs/mana

We recommend the following ingestion processes, starting with the most preferred:

1. **Follow an ingestion guide**. The [Ingestion Guides](/docs/cse/ingestion) section of this help site provides specific collection and ingestion recommendations for many common products and services. An ingestion guide describes the easiest way to get data from a particular product into Cloud SIEM. When you’re ready to start using Cloud SIEM to monitor a new product, if there’s a Cloud SIEM ingestion guide for it, we recommend using it. 

1. **Use a Cloud-to-Cloud (C2C) connector**. If you don’t see an Ingestion Guide for your data source, check to see if there is a C2C connector. It’s an easy method, because if you configure your C2C source to send logs to Cloud SIEM, it automatically tags messages it sends to Cloud SIEM with fields that contain the mapping hints that Cloud SIEM requires. 

Most C2C connectors have a **Forward to SIEM** option in the configuration UI. If a C2C connector lacks that option, you can achieve the same effect by assigning a field named `_siemforward`, set to *true*, to the connector.

For information about what C2C sources are available, see Cloud-to-Cloud Integration Framework.

1. **Follow an ingestion guide**. The [Ingestion Guides](/docs/cse/ingestion) section of this help site provides specific collection and ingestion recommendations for many common products and services. An ingestion guide describes the easiest way to get data from a particular product into Cloud SIEM. When you’re ready to start using Cloud SIEM to monitor a new product, if there’s a Cloud SIEM ingestion guide for it, we recommend using it.
1. **Use a Cloud-to-Cloud (C2C) connector**. If you don’t see an Ingestion Guide for your data source, check to see if there is a C2C connector. It’s an easy method, because if you configure your C2C source to send logs to Cloud SIEM, it automatically tags messages it sends to Cloud SIEM with fields that contain the mapping hints that Cloud SIEM requires.  <br/><br/>Most C2C connectors have a **Forward to SIEM** option in the configuration UI. If a C2C connector lacks that option, you can achieve the same effect by assigning a field named `_siemforward`, set to *true*, to the connector. <br/><br/>For information about what C2C sources are available, see Cloud-to-Cloud Integration Framework.
1. **Use a Sumo Logic Source and parser**. If there isn’t a C2C connector for your data source, your next best option is to use a Sumo Logic Source (running on an Installed Collector or a Hosted Collector, depending on the data source), and a Sumo Logic parser, if we have one for the data source.

To check if there’s a parser for your data source, go to the **Manage Data > Logs > Parsers** page in the Sumo Logic UI. If there is a parser for your data source, but you find it doesn’t completely meet your needs–for instance if the parser doesn’t support the particular log format you use–consider customizing the parser with a [local configuration](/docs/cse/schema/parser-editor#create-a-local-configuration-for-a-system-parser). If that’s not practical, you can submit a request for a new parser by filing a ticket at [https://support.sumologic.com](https://support.sumologic.com/).
Check if there’s a parser for your data source. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Logs > Parsers**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the top menu select **Configuration**, and then under **Logs** select **Parsers**. You can also click the **Go To...** menu at the top of the screen and select **Parsers**. Kanso--> If there is a parser for your data source but it doesn’t completely meet your needs, for instance if the parser doesn’t support the particular log format you use, consider customizing the parser with a [local configuration](/docs/cse/schema/parser-editor#create-a-local-configuration-for-a-system-parser). If that’s not practical, you can submit a request for a new parser by filing a ticket at [https://support.sumologic.com](https://support.sumologic.com/).

When you forward logs to Cloud SIEM for parser processing, there are two bits of important configuration:

Expand All @@ -52,13 +46,10 @@ We recommend the following ingestion processes, starting with the most preferred
:::

2. Configure the source with the path to the appropriate parser, by assigning a field named `_parser` whose value is the path to the parser, for example:

```
_parser=/Parsers/System/AWS/AWS Network Firewall
```

:::note 
You can get the path to a parser on the **Manage Data > Logs > Parsers** page in Sumo Logic. Click the three-dot kebab menu in the row for a parser, and select **Copy Path**.
:::
You can get the path to a parser on the **Parsers** page in Sumo Logic. Click the three-dot kebab menu in the row for a parser, and select **Copy Path**.

1. **Use a Sumo Logic Source and Cloud SIEM Ingest mapping**. This is the least recommended method, as you have to manually configure the mapping hints in an ingestion mapping. For more information, see [Configure a Sumo Logic Ingest Mapping](/docs/cse/ingestion/sumo-logic-ingest-mapping/).
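The two field assignments described above (`_siemforward` to forward logs to Cloud SIEM, `_parser` to point at a parser) can be sketched as a small helper. This is a sketch only: the surrounding `{"source": {...}}` payload shape assumes the Collector Management API, and the source name is hypothetical — verify both against your deployment:

```python
import json

def siem_source_fields(parser_path=None):
    """Build the `fields` map that forwards a source's logs to Cloud SIEM.

    _siemforward set to "true" turns on SIEM forwarding; _parser
    (optional) tells Cloud SIEM which parser to apply.
    """
    fields = {"_siemforward": "true"}
    if parser_path:
        fields["_parser"] = parser_path
    return fields

# Example: fields for AWS Network Firewall logs (source name is made up).
payload = {"source": {"name": "nfw-logs",
                      "fields": siem_source_fields(
                          "/Parsers/System/AWS/AWS Network Firewall")}}
print(json.dumps(payload, indent=2))
```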
4 changes: 2 additions & 2 deletions docs/integrations/containers-orchestration/kubernetes.md
Original file line number Diff line number Diff line change
Expand Up @@ -113,8 +113,8 @@ For details on the individual alerts, see [Kubernetes Alerts](/docs/observabilit

1. Download the [JSON file](https://raw.githubusercontent.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/main/monitor_packages/kubernetes/kubernetes.json) describing all the monitors.
2. The alerts should be restricted to specific clusters and/or namespaces to prevent the monitors from hitting the cardinality limits. To limit the alerts, update the JSON file by replacing the text `$$kubernetes_data_source` with `<Your Custom Filter>`. For example: `cluster=k8s-prod.01`.
3. Go to **Manage Data > Alerts > Monitors**.
4. Click **Add Monitor**:<br/> ![add-monitor.png](/img/metrics/add-monitor.png)
3. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. Kanso-->
4. Click **Add**.
5. Click **Import** to import monitors from the JSON above.
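The placeholder substitution in step 2 can be sketched as a few lines of Python; the same pattern applies to the other monitor packages in this PR (Couchbase, MariaDB, Memcached, MongoDB), each with its own `$$..._data_source` token. The sample snippet standing in for the downloaded file is hypothetical:

```python
import json

def customize_monitors(monitor_json_text, custom_filter):
    """Replace the $$kubernetes_data_source placeholder with a scoping
    filter (e.g. 'cluster=k8s-prod.01'), and confirm the result is
    still valid JSON before importing it in the Monitors UI."""
    customized = monitor_json_text.replace("$$kubernetes_data_source",
                                           custom_filter)
    json.loads(customized)  # raises ValueError if the edit broke the JSON
    return customized

# Hypothetical one-line stand-in for the downloaded monitors file:
sample = '{"query": "$$kubernetes_data_source metric=cpu"}'
print(customize_monitors(sample, "cluster=k8s-prod.01"))
```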

:::note
Expand Down
2 changes: 1 addition & 1 deletion docs/integrations/containers-orchestration/rabbitmq.md
Original file line number Diff line number Diff line change
Expand Up @@ -463,7 +463,7 @@ This section demonstrates how to install the RabbitMQ App.
Version selection is not available for all apps.
:::
3. To install the app, complete the following fields.
1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
2. **Data Source.** Choose **Enter a Custom Data Filter**, and enter a custom RabbitMQ cluster filter. Examples:
1. For all RabbitMQ clusters: `messaging_cluster=*`
2. For a specific cluster: `messaging_cluster=rabbitmq.dev.01`
Expand Down
8 changes: 4 additions & 4 deletions docs/integrations/databases/couchbase.md
Original file line number Diff line number Diff line change
Expand Up @@ -200,8 +200,8 @@ This section explains the steps to collect Couchbase logs from a Kubernetes envi
5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
6. Verify logs in Sumo Logic.
3. **Add a FER to normalize the fields in Kubernetes environments**. This step is not needed if you are using the application components solution Terraform script. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if one is not already created for Proxy Application Components. To do so:
1. Go to Manage Data > Logs > Field Extraction Rules.
2. Click the + Add button on the top right of the table.
1. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**. Kanso-->
2. Click the **+ Add Rule** button on the top right of the table.
3. The **Add Field Extraction Rule** form will appear:
4. Enter the following options:
* **Rule Name**. Enter the name as **App Observability - Proxy**.
Expand Down Expand Up @@ -401,7 +401,7 @@ There are limits to how many alerts can be enabled - see the [Alerts FAQ](/docs/
1. For alerts applicable only to a specific cluster, your custom filter would be `db_cluster=couchbase-standalone.01`.
2. For alerts applicable to all clusters that start with couchbase-standalone, your custom filter would be `db_cluster=couchbase-standalone*`.
3. For alerts applicable to a specific cluster within a production environment, your custom filter would be `db_cluster=couchbase-1` and `environment=standalone` (This assumes you have set the optional environment tag while configuring collection).
3. Go to Manage Data > Alerts > Monitors.
3. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. Kanso-->
4. Click **Add**:
5. Click **Import** and then copy-paste the above JSON to import monitors.
6. The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the Couchbase folder under **Monitors** to configure them. See [Monitor Settings](/docs/alerts/monitors/settings) to learn how to enable monitors to send notifications to teams or connections. See the instructions detailed in [Create a Monitor](/docs/alerts/monitors/create-monitor).
Expand Down Expand Up @@ -485,7 +485,7 @@ Locate and install the app you need from the **App Catalog**. If you want to see
Version selection is not available for all apps.
:::
3. To install the app, complete the following fields.
1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
2. **Data Source.**
* Choose **Enter a Custom Data Filter**, and enter a custom Couchbase cluster filter. Examples:
1. For all Couchbase clusters: `db_cluster=*`
Expand Down
4 changes: 2 additions & 2 deletions docs/integrations/databases/mariadb.md
Original file line number Diff line number Diff line change
Expand Up @@ -448,7 +448,7 @@ Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic moni
* For alerts applicable only to a specific cluster, your custom filter would be `db_cluster=mariadb-prod.01`.
* For alerts applicable to all clusters that start with mariadb-prod, your custom filter would be `db_cluster=mariadb-prod*`.
* For alerts applicable to a specific cluster within a production environment, your custom filter would be `db_cluster=mariadb-1` and `environment=prod`. This assumes you have set the optional environment tag while configuring collection.
3. Go to Manage Data > Alerts > Monitors.
3. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. Kanso-->
4. Click **Add**.
5. Click **Import** and then copy-paste the above JSON to import monitors.
6. The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the MariaDB folder under **Monitors** to configure them. See [this](/docs/alerts/monitors) document to enable monitors to send notifications to teams or connections. See the instructions detailed in [Add a Monitor](/docs/alerts/monitors/create-monitor).
Expand Down Expand Up @@ -526,7 +526,7 @@ Locate and install the app you need from the **App Catalog**. If you want to see
Version selection is not available for all apps.
:::
3. To install the app, complete the following fields.
1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
1. **App Name.** You can retain the existing name, or enter a name of your choice for the app.
2. **Data Source.**
* Choose **Enter a Custom Data Filter**, and enter a custom MariaDB cluster filter. Examples:
1. For all MariaDB clusters, `db_cluster=*`.
Expand Down
4 changes: 2 additions & 2 deletions docs/integrations/databases/memcached.md
Original file line number Diff line number Diff line change
Expand Up @@ -218,7 +218,7 @@ This section explains the steps to collect Memcached logs from a Kubernetes envi
```
4. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
3. **Add a FER to normalize the fields in Kubernetes environments**. This step is not needed if you are using the application components solution Terraform script. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if one is not already created for Proxy Application Components. To do so:
1. Go to **Manage Data > Logs > Field Extraction Rules**.
1. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**. Kanso-->
2. Click the + Add button on the top right of the table.
3. The **Add Field Extraction Rule** form will appear:
4. Enter the following options:
Expand Down Expand Up @@ -372,7 +372,7 @@ There are limits to how many alerts can be enabled. For more information, see [M
* For alerts applicable only to a specific cluster, your custom filter would be: `db_cluster=dev-memcached-01`
* For alerts applicable to all clusters that start with `memcached-prod`, your custom filter would be: `db_cluster=memcached-prod*`
* For alerts applicable to specific clusters within a production environment, your custom filter would be: `db_cluster=dev-memcached-01` AND `environment=prod`. This assumes you have set the optional environment tag while configuring collection.
3. Go to **Manage Data > Alerts > Monitors**.
3. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. Kanso-->
4. Click **Add**.
5. Click **Import**.
6. On the **Import Content** popup, enter **Memcached** in the Name field, paste the JSON into the popup, and click **Import**.
Expand Down
4 changes: 2 additions & 2 deletions docs/integrations/databases/mongodb.md
Original file line number Diff line number Diff line change
Expand Up @@ -235,7 +235,7 @@ Pivoting to Tracing data from Entity Inspector is possible only for “MongoDB a
```
5. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
3. **Add an FER to normalize the fields in Kubernetes environments**. This step is not needed if you are using the application components solution Terraform script. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these for our app to work, we need to create a Field Extraction Rule if one is not already created for Database Application Components. To do so:
1. Go to **Manage Data > Logs > Field Extraction Rules**.
1. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the top menu select **Configuration**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**. Kanso-->
2. Click the + Add button on the top right of the table.
3. The **Add Field Extraction Rule** form will appear:
4. Enter the following options:
Expand Down Expand Up @@ -427,7 +427,7 @@ There are limits to how many alerts can be enabled. For more information, see [M

1. Download the [JSON file](https://github.com/SumoLogic/terraform-sumologic-sumo-logic-monitor/blob/main/monitor_packages/MongoDB/MongoDB.json) that describes the monitors.
2. Replace `$$mongodb_data_source` with a custom source filter. To configure alerts for a specific database cluster, use a filter like `db_system=mongodb` or `db_cluster=dev-mongodb`. To configure the alerts for all of your clusters, set `$$mongodb_data_source` to blank (`""`).
3. Go to **Manage Data > Alerts > Monitors**.
3. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. Kanso-->
4. Click **Add**.
5. Click **Import**.
6. On the **Import Content** popup, enter `MongoDB` in the Name field, paste the JSON into the popup, and click **Import**.
Expand Down