diff --git a/docs/integrations/microsoft-azure/sql-server.md b/docs/integrations/microsoft-azure/sql-server.md
index 85c082de31..f884915632 100644
--- a/docs/integrations/microsoft-azure/sql-server.md
+++ b/docs/integrations/microsoft-azure/sql-server.md
@@ -132,14 +132,14 @@ In Kubernetes environments, we use the Telegraf Operator, which is packaged with
         db_cluster_port = "ENV_TO_BE_CHANGED"
 ```
 Enter in values for the following parameters (marked `ENV_TO_BE_CHANGED` in the snippet above):
-   * `telegraf.influxdata.com/inputs`. This contains the required configuration for the Telegraf SQL Server Input plugin. Please refer[ to this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/redis) for more information on configuring the SQL Server input plugin for Telegraf. Note: As telegraf will be run as a sidecar the host should always be localhost.
+   * `telegraf.influxdata.com/inputs`. This contains the required configuration for the Telegraf SQL Server input plugin. Refer to [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) for more information on configuring the SQL Server input plugin for Telegraf. Note: Because Telegraf runs as a sidecar, the host should always be localhost.
     * In the input plugins section, which is `[[inputs.sqlserver]]`:
-      * `servers` - The URL to the SQLserver server. This can be a comma-separated list to connect to multiple SQLserver servers. Please see [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) for more information on additional parameters for configuring the SQLserver input plugin for Telegraf.
+      * **servers**. The URL of the SQL Server instance. This can be a comma-separated list to connect to multiple SQL Server instances. Refer to [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) for more information on additional parameters for configuring the SQL Server input plugin for Telegraf.
    * In the tags section, which is `[inputs.sqlserver.tags]`
-      * `environment`. This is the deployment environment where the SQLserver cluster identified by the value of **servers** resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it.
-      * `db_cluster`. Enter a name to identify this SQLserver cluster. This cluster name will be shown in the Sumo Logic dashboards.
-      * `db_cluster_address` - Enter the cluster hostname or ip address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
-      * `db_cluster_port` - Enter the database port. If not provided, a default port will be used.
+      * **environment**. This is the deployment environment where the SQL Server cluster identified by the value of **servers** resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it.
+      * **db_cluster**. Enter a name to identify this SQL Server cluster. This cluster name will be shown in the Sumo Logic dashboards.
+      * **db_cluster_address**. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
+      * **db_cluster_port**. Enter the database port. If not provided, a default port will be used.
 :::note
 `db_cluster_address` and `db_cluster_port` should reflect exact configuration of DB client configuration in your application, especially if you instrument it with OT tracing.
 The values of these fields should match exactly the connection string used by the database client (reported as values for `net.peer.name` and `net.peer.port` metadata fields).
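+
+Once the annotations are in place, a quick sanity check can confirm that the Telegraf sidecar is actually gathering SQL Server metrics. This is only a sketch: the pod name (`sqlserver-0`) and namespace (`sqlserver`) are placeholders for your own values, and `9273` is the Prometheus port the sidecar exposes:
+
+```bash
+# Forward the Telegraf sidecar's Prometheus endpoint to your workstation
+kubectl port-forward pod/sqlserver-0 9273:9273 -n sqlserver &
+
+# sqlserver_* metrics should appear if the input plugin is collecting
+curl -s http://localhost:9273/metrics | grep -i sqlserver | head
+```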
@@ -156,8 +156,8 @@ In Kubernetes environments, we use the Telegraf Operator, which is packaged with
    * `prometheus.io/scrape: "true"`. This ensures our Prometheus will scrape the metrics.
    * `prometheus.io/port: "9273"`. This tells prometheus what ports to scrape on. This should not be changed.
    * `telegraf.influxdata.com/inputs`. In the tags section, which is `[inputs.sqlserver.tags]`:
-      * `component: “database”` - This value is used by Sumo Logic apps to identify application components.
-      * `db_system: “sqlserver”` - This value identifies the database system.
+      * `component: "database"`. This value is used by Sumo Logic apps to identify application components.
+      * `db_system: "sqlserver"`. This value identifies the database system.
 
 For all other parameters, see [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf#configuring-telegraf) for more parameters that can be configured in the Telegraf agent globally.
@@ -184,10 +184,10 @@ This section explains the steps to collect SQL Server logs from a Kubernetes env
 **Do not modify** the following values set by this configuration as it will cause the Sumo Logic app to not function correctly.
-   * `component: “database”` - This value is used by Sumo Logic apps to identify application components.
-   * `db_system: “SQLserver”` - This value identifies the database system.
-   * `db_cluster_address` - Enter the cluster hostname or ip address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
-   * `db_cluster_port` - Enter the database port. If not provided, a default port will be used.
+   * `component: "database"`. This value is used by Sumo Logic apps to identify application components.
+   * `db_system: "SQLserver"`. This value identifies the database system.
+   * `db_cluster_address`. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
+   * `db_cluster_port`. Enter the database port. If not provided, a default port will be used.
 :::note
 `db_cluster_address` and `db_cluster_port` should reflect the exact configuration of DB client configuration in your application, especially if you instrument it with OT tracing. The values of these fields should match exactly the connection string used by the database client (reported as values for net.peer.name and net.peer.port metadata fields).
@@ -207,23 +207,22 @@ This section explains the steps to collect SQL Server logs from a Kubernetes env
 1. Determine the location of the SQL server log file on Kubernetes. This can be determined from the SQLserver.conf for your SQL server cluster along with the mounts on the SQL server pods.
 2. Install the Sumo Logic [tailing sidecar operator](https://github.com/SumoLogic/tailing-sidecar/tree/main/operator#deploy-tailing-sidecar-operator).
 3. Add the following annotation in addition to the existing annotations.
-```xml
-annotations:
-  tailing-sidecar: sidecarconfig;:/
-```
+   ```
+   annotations:
+     tailing-sidecar: sidecarconfig;:/
+   ```
-Example:
-
-```bash
-annotations:
-  tailing-sidecar: sidecarconfig;data:/var/opt/mssql/errorlog
-```
+   Example:
+   ```
+   annotations:
+     tailing-sidecar: sidecarconfig;data:/var/opt/mssql/errorlog
+   ```
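+   For illustration, one way to apply this annotation is to patch the pod template of the workload running SQL Server. This is a sketch rather than an official step; the StatefulSet name (`mssql`), namespace (`mssql`), and volume name (`data`) are assumptions, so substitute the names from your own deployment:
+   ```bash
+   # Add the tailing-sidecar annotation to the SQL Server pod template
+   kubectl patch statefulset mssql -n mssql --type merge \
+     -p '{"spec":{"template":{"metadata":{"annotations":{"tailing-sidecar":"sidecarconfig;data:/var/opt/mssql/errorlog"}}}}}'
+   ```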
 1. Make sure that the SQL server pods are running and annotations are applied by using the command:
-```xml
-kubectl describe pod
-```
+   ```sh
+   kubectl describe pod
+   ```
 2. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
 3. Verify logs in Sumo Logic.
 4. Add a FER to normalize the fields in Kubernetes environments. Labels created in Kubernetes environments automatically are prefixed with pod_labels. To normalize these for our app to work, we need to create a Field Extraction Rule if not already created for Proxy Application Components. To do so:
@@ -231,23 +230,24 @@ kubectl describe pod
 2. Click the **+ Add Rule** button on the top right of the table.
 3. The **Add Field Extraction Rule** form will appear.
 4. Enter the following options:
-   * **Rule Name**. Enter the name as **App Observability - Proxy**.
-   * **Applied At**. Choose **Ingest Time**
-   * **Scope**. Select **Specific Data**
-   * **Scope**. Enter the following keyword search expression:
-   ```sql
-   pod_labels_environment=* pod_labels_component=database
-   pod_labels_db_system=*
-   pod_labels_db_cluster=*
-   ```
-   * **Parse Expression**. Enter the following parse expression:
-   ```sql
-   if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment
-   | pod_labels_component as component
-   | pod_labels_db_system as db_system
-   | if (!isEmpty(pod_labels_db_cluster), pod_labels_db_cluster, null) as db_cluster
-   ```
-5. Click **Save** to create the rule.
+   * **Rule Name**. Enter the name as **App Observability - Proxy**.
+   * **Applied At**. Choose **Ingest Time**.
+   * **Scope**. Select **Specific Data**.
+   * **Scope**. Enter the following keyword search expression:
+     ```sql
+     pod_labels_environment=* pod_labels_component=database
+     pod_labels_db_system=*
+     pod_labels_db_cluster=*
+     ```
+   * **Parse Expression**. Enter the following parse expression:
+
+     ```sql
+     if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment
+     | pod_labels_component as component
+     | pod_labels_db_system as db_system
+     | if (!isEmpty(pod_labels_db_cluster), pod_labels_db_cluster, null) as db_cluster
+     ```
+5. Click **Save** to create the rule.
@@ -294,8 +294,8 @@ This section provides instructions for configuring log collection for SQL Server
    * `db_system = SQLserver`
    * `db_cluster = `.Enter Default if you do not have one`.`
    * `environment = `(for example, `Dev`, `QA`, or `Prod`)
-   * `db_cluster_address` - Enter the cluster hostname or ip address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
-   * `db_cluster_port` - Enter the database port. If not provided, a default port will be used.
+   * `db_cluster_address`. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
+   * `db_cluster_port`. Enter the database port. If not provided, a default port will be used.
 :::note
 `db_cluster_address` and `db_cluster_port` should reflect the exact configuration of DB client configuration in your application, especially if you instrument it with OT tracing. The values of these fields should match exactly the connection string used by the database client (reported as values for `net.peer.name` and `net.peer.port` metadata fields).
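+
+If you are unsure where the `ERRORLOG` file lives, you can ask SQL Server directly. A minimal sketch, assuming `sqlcmd` is installed and you can authenticate to the instance (you will be prompted for the password):
+
+```bash
+# Prints the full path of the current SQL Server error log
+sqlcmd -S localhost -U sa -Q "SELECT SERVERPROPERTY('ErrorLogFileName')"
+```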
@@ -312,7 +312,7 @@ This section provides instructions for configuring log collection for SQL Server
    * **Timestamp Format.** The timestamp format is automatically detected.
    * **Encoding.** Select the encoding of the `ERRORLOG` you found in step 1, which will typically be **UTF-16LE**.
    * **Enable Multiline Processing.** Detect messages spanning multiple lines
-   * **Infer Boundaries** - Detect message boundaries automatically
+   * **Infer Boundaries**. Detect message boundaries automatically.
 5. Click **Save**.
 
 At this point, the installed collector will start scanning the `ERRORLOG` and send it to Sumo Logic.
@@ -322,8 +322,7 @@ At this point, the installed collector will start scanning the `ERRORLOG` and se
 
 Set up a Sumo Logic HTTP Source
 
-1. **Configure a Hosted Collector for Metrics.
-To create a new Sumo Logic hosted collector, perform the steps in the [Configure a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) documentation.
+1. **Configure a Hosted Collector for Metrics**. To create a new Sumo Logic hosted collector, perform the steps in the [Configure a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) documentation.
 2. Configure an HTTP Logs & Metrics source:
    * On the created Hosted Collector on the Collection Management screen, select **Add Source**.
    * Select **HTTP Logs & Metrics.**
@@ -363,7 +362,7 @@ Create or modify `telegraf.conf` and copy and paste the text below:
   [inputs.sqlserver.tags]
     environment="prod"
     component="database"
-    db_cluster: "ENV_TO_BE_CHANGED"
+    db_cluster = "ENV_TO_BE_CHANGED"
     db_cluster_address = "ENV_TO_BE_CHANGED"
     db_cluster_port = "ENV_TO_BE_CHANGED"
     db_system = "sqlserver"
@@ -374,12 +373,12 @@ Create or modify `telegraf.conf` and copy and paste the text below:
 Enter values for fields annotated with `` to the appropriate values. Do not include the brackets (`< >`) in your final configuration:
 
 * Input plugins section, which is `[[inputs.sqlserver]]`:
-  * `servers` - The URL to the SQL server. For more information on additional parameters to configure the SQL Server input plugin for Telegraf see[ this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver).
+  * `servers`. The URL of the SQL Server instance. For more information on additional parameters to configure the SQL Server input plugin for Telegraf, see [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver).
 * In the tags section, which is `[inputs.sqlserver.tags]`:
   * `environment`. This is the deployment environment where the SQL Server cluster identified by the value of **servers** resides. For example; dev, prod, or QA. While this value is optional we highly recommend setting it.
   * `db_cluster`. Enter a name to identify this SQLserver cluster. This cluster name will be shown in our dashboards.
-  * `db_cluster_address` - Enter the cluster hostname or ip address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
-  * `db_cluster_port` - Enter the database port. If not provided, a default port will be used.
+  * `db_cluster_address`. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
+  * `db_cluster_port`. Enter the database port. If not provided, a default port will be used.
 :::note
 `db_cluster_address` and `db_cluster_port` should reflect the exact configuration of DB client configuration in your application, especially if you instrument it with OT tracing.
 The values of these fields should match exactly the connection string used by the database client (reported as values for `net.peer.name` and `net.peer.port` metadata fields).
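+
+Before sending anything to Sumo Logic, you can dry-run the configuration locally. Running `telegraf --test` gathers metrics from the inputs once and prints them to stdout without invoking the outputs, so it is safe to run while the output URL is still a placeholder. This assumes the `telegraf` binary is on your `PATH` and `telegraf.conf` is in the current directory:
+
+```bash
+telegraf --config telegraf.conf --test
+```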
@@ -392,7 +391,7 @@ Enter values for fields annotated with `` to the appropriate
 :::
 
 * In the output plugins section, which is `[[outputs.sumologic]]`:
-  * **URL** - This is the HTTP source URL created previously. See this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin.
+  * **URL**. This is the HTTP source URL created previously. See this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin.
 
 Here’s an explanation for additional values set by this Telegraf configuration.
@@ -400,8 +399,8 @@ If you haven’t defined a cluster in SQL Server, then enter `default` for `db_c
 There are additional values set by the Telegraf configuration. We recommend not to modify these values as they might cause the Sumo Logic app to not function correctly.
 
 * `data_format: “prometheus”`. In the output `[[outputs.sumologic]]` plugins section. Metrics are sent in the Prometheus format to Sumo Logic.
-* `component = “database”` - In the input `[[inputs.SQLserver]]` plugins section. This value is used by Sumo Logic apps to identify application components.
-* `db_system = “sqlserver”` - In the input plugins sections. This value identifies the database system.
+* `component = "database"`. In the input `[[inputs.sqlserver]]` plugins section. This value is used by Sumo Logic apps to identify application components.
+* `db_system = "sqlserver"`. In the input plugins section. This value identifies the database system.
 
-See [this doc](https://github.com/influxdata/telegraf/blob/master/etc/logrotate.d/telegraf) for all other parameters that can be configured in the Telegraf agent globally.
+See [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf#configuring-telegraf) for all other parameters that can be configured in the Telegraf agent globally.
@@ -436,22 +435,20 @@ For details on the individual alerts, see [Alerts](#microsoft-sql-server-alerts)
 
 Custom filter examples:
 
-1. For alerts applicable only to a specific cluster, your custom filter would be: ‘`db_cluster=sqlserver-prod.01`‘
-2. For alerts applicable to all clusters that start with Kafka-prod, your custom filter would be: `db_cluster=sql-prod*`
-3. For alerts applicable to a specific cluster within a production environment, your custom filter would be: `db_cluster=sql-1 `AND `environment=prod `(This assumes you have set the optional environment tag while configuring collection)
+1. For alerts applicable only to a specific cluster, your custom filter would be: `db_cluster=sqlserver-prod.01`.
+2. For alerts applicable to all clusters that start with sql-prod, your custom filter would be: `db_cluster=sql-prod*`.
+3. For alerts applicable to a specific cluster within a production environment, your custom filter would be: `db_cluster=sql-1` AND `environment=prod` (This assumes you have set the optional environment tag while configuring collection).
 4. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
 [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**.
-5. Click **Add**:
+5. Click **Add**.
 6. Click **Import**, then copy paste the above JSON to import monitors.
-	The monitors are disabled by default. Once you have installed the alerts using this method, navigate to the MySQL folder under **Monitors** to configure them. See [this](/docs/alerts/monitors) document to enable monitors to send notifications to teams or connections. Please see the instructions detailed in Step 4 of this [document](/docs/alerts/monitors/create-monitor).
-
 ### Method B: Using a Terraform script
 
 **Step 1: Generate a Sumo Logic access key and ID**
 
-Generate an access key and access ID for a user that has the Manage Monitors role capability in Sumo Logic using instructions in [Access Keys](/docs/manage/security/access-keys). Please identify which deployment your Sumo Logic account is in, using this [ link](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security).
+Generate an access key and access ID for a user that has the Manage Monitors role capability in Sumo Logic using instructions in [Access Keys](/docs/manage/security/access-keys). Identify which deployment your Sumo Logic account is in, using this [link](/docs/api/getting-started#sumo-logic-endpoints-by-deployment-and-firewall-security).
 
 **Step 2: [Download and install Terraform 0.13](https://www.terraform.io/downloads.html) or later**
@@ -473,9 +470,9 @@ environment = ""
 The Terraform script installs the alerts without any scope filters, if you would like to restrict the alerts to specific clusters or environments, update the variable `sqlserver_data_source`. Custom filter examples:
 
-1. A specific cluster `db_cluster=sqlserver.prod.01`
-2. All clusters in an environment `environment=prod`
-1. For alerts applicable to all clusters that start with sqlserver-prod, your custom filter would be: `db_cluster=sqlserver-prod*`
+1. A specific cluster: `db_cluster=sqlserver.prod.01`.
+2. All clusters in an environment: `environment=prod`.
+3. For alerts applicable to all clusters that start with sqlserver-prod, your custom filter would be: `db_cluster=sqlserver-prod*`.
 2. For alerts applicable to a specific cluster within a production environment, your custom filter would be: `db_cluster=sqlserver-1` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection)
@@ -539,7 +536,7 @@ If you haven’t enabled alerts and/or configured notifications through the Terr
 
 There are limits to how many alerts can be enabled - please see the [Alerts FAQ](/docs/alerts/monitors/monitor-faq.md).
 
-## Installing the Microsoft SQL Server App
+## Installing the Microsoft SQL Server app
 
 This section demonstrates how to install the SQL Server App.
 
 import AppInstall from '../../reuse/apps/app-install.md';
@@ -547,7 +544,7 @@
 
-## Viewing Microsoft SQL Server Dashboards
+## Viewing Microsoft SQL Server dashboards
 
 :::tip Filter with template variables
 Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md).
@@ -563,7 +560,6 @@ Use this dashboard to:
 
 Overview
 
-
 ### General Health
 
 The **SQL Server - General Health** dashboard gives you the overall health of SQL Server. Use this dashboard to analyze server events including stopped/up servers, and corresponding down/uptime, monitor disk space percentage utilization, wait time trend, app-domain issues by SQL server.