From ee280a44977553e9ade3c484de99404dc370f3f3 Mon Sep 17 00:00:00 2001 From: Ashwin Mehendale Date: Sat, 7 Jun 2025 17:41:18 +0530 Subject: [PATCH] OBSDOCS-2306: PT2: Port the Log collection and forwarding chapter to 6.x --- configuring/configuring-log-forwarding.adoc | 17 ++- ...ter-logging-collector-log-forward-gcp.adoc | 46 ++++---- ...og-forward-logs-from-application-pods.adoc | 61 +++++----- ...logging-collector-log-forward-project.adoc | 92 ++++++--------- modules/logging-forward-splunk.adoc | 77 ++++++------ modules/logging-forwarding-azure.adoc | 110 +++++------------- 6 files changed, 177 insertions(+), 226 deletions(-) diff --git a/configuring/configuring-log-forwarding.adoc b/configuring/configuring-log-forwarding.adoc index 83282be4bc32..70adf31a62c8 100644 --- a/configuring/configuring-log-forwarding.adoc +++ b/configuring/configuring-log-forwarding.adoc @@ -111,10 +111,21 @@ The order of filterRefs matters, as they are applied sequentially. Earlier filte Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them. -Administrators can configure the following types of filters: include::modules/enabling-multi-line-exception-detection.adoc[leveloffset=+2] -include::modules/logging-http-forward.adoc[leveloffset=+2] + +include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1] + +include::modules/logging-forward-splunk.adoc[leveloffset=+1] + +include::modules/logging-http-forward.adoc[leveloffset=+1] + +include::modules/logging-forwarding-azure.adoc[leveloffset=+1] + +include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1] + +include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1] + include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+2] @@ -135,6 +146,7 @@ On {sts-short}-enabled clusters such as {product-rosa}, {aws-short} roles are pr * xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch from STS enabled clusters] //// + * Creating a secret for CloudWatch with an existing {aws-short} role * Forwarding logs to Amazon CloudWatch from STS-enabled clusters @@ -146,7 +158,6 @@ If you do not have an {aws-short} IAM role pre-configured with trust policies, y * xref:../modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc#cluster-logging-collector-log-forward-secret-cloudwatch_configuring-log-forwarding[Creating a secret for AWS CloudWatch with an existing AWS role] * xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch[Forwarding logs to Amazon CloudWatch from STS enabled clusters] //// - include::modules/creating-an-aws-role.adoc[leveloffset=+2] include::modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc[leveloffset=+2] include::modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc[leveloffset=+2] diff --git a/modules/cluster-logging-collector-log-forward-gcp.adoc b/modules/cluster-logging-collector-log-forward-gcp.adoc index 661572be01fa..44e229b366a8 100644 --- a/modules/cluster-logging-collector-log-forward-gcp.adoc +++ b/modules/cluster-logging-collector-log-forward-gcp.adoc @@ -6,16 +6,16 @@ [id="cluster-logging-collector-log-forward-gcp_{context}"] = Forwarding logs to Google Cloud Platform (GCP) -You can 
forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging] in addition to, or instead of, the internal default {ocp-product-title} log store.
+You can forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging].
 
-[NOTE]
+[IMPORTANT]
 ====
-Using this feature with Fluentd is not supported.
+Forwarding logs to GCP is not supported on Red{nbsp}Hat OpenShift Service on AWS (ROSA).
 ====
 
 .Prerequisites
 
-* {clo} 5.5.1 and later
+* You have installed the {clo}.
 
 .Procedure
 
@@ -23,42 +23,48 @@ Using this feature with Fluentd is not supported.
 +
 [source,terminal,subs="+quotes"]
 ----
-$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=__<your_service_account_key_file.json>__
+$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
 ----
+
 . Create a `ClusterLogForwarder` Custom Resource YAML using the template below:
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <1>
-  namespace: <2>
+  name: <name>
+  namespace: openshift-logging
 spec:
-  serviceAccountName: <3>
+  serviceAccount:
+    name: <service_account_name> #<1>
   outputs:
   - name: gcp-1
     type: googleCloudLogging
-    secret:
-      name: gcp-secret
     googleCloudLogging:
-      projectId : "openshift-gce-devel" <4>
-      logId : "app-gcp" <5>
+      authentication:
+        credentials:
+          secretName: gcp-secret
+          key: google-application-credentials.json
+      id:
+        type: project
+        value: openshift-gce-devel #<2>
+      logId: app-gcp #<3>
   pipelines:
   - name: test-app
-    inputRefs: <6>
+    inputRefs: #<4>
     - application
     outputRefs:
     - gcp-1
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> Set a `projectId`, `folderId`, `organizationId`, or `billingAccountId` field and its corresponding value, depending on where you want to store your logs in the link:https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy[GCP resource hierarchy].
-<5> Set the value to add to the `logName` field of the link:https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry[Log Entry].
-<6> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+
+<1> The name of your service account.
+<2> Set a `project`, `folder`, `organization`, or `billingAccount` field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
+<3> Set the value to add to the `logName` field of the log entry. The value can be a combination of static and dynamic values, consisting of field paths followed by `||`, followed by another field path or a static value. A dynamic value must be enclosed in single curly brackets `{}` and must end with a static fallback value separated by `||`. Static values can contain only alphanumeric characters, dashes, underscores, dots, and forward slashes. For an example, see the sketch after this list.
+<4> Specify the names of inputs, defined in the `input.name` field, for this pipeline. You can also use the built-in values `application`, `infrastructure`, or `audit`.
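+
+For example, the following `logId` value is a sketch of the dynamic value syntax described in the callouts above. It assumes that your log records contain the `.kubernetes.namespace_name` field path, and it falls back to the static value `app-gcp` when that field is missing:
+
+[source,yaml]
+----
+# Assumed example: adjust the field path to match your log records
+logId: '{.kubernetes.namespace_name||"app-gcp"}'
+----
+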
[role="_additional-resources"]
 .Additional resources
 * link:https://cloud.google.com/billing/docs/concepts[Google Cloud Billing Documentation]
+* link:https://cloud.google.com/logging/docs[Cloud Logging documentation] for Google Cloud.
 * link:https://cloud.google.com/logging/docs/view/logging-query-language[Google Cloud Logging Query Language Documentation]
diff --git a/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc b/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
index 40c7dc284a3b..7513f6e308f7 100644
--- a/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
+++ b/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc
+// * configuring/configuring-log-forwarding.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-collector-log-forward-logs-from-application-pods_{context}"]
@@ -16,42 +16,41 @@ To specify the pod labels, you use one or more `matchLabels` key-value pairs. If
 
-. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object. In the file, specify the pod labels using simple equality-based selectors under `inputs[].name.application.selector.matchLabels`, as shown in the following example.
+. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object. In the file, specify the pod labels by using simple equality-based selectors under `inputs[].application.selector.matchLabels`, as shown in the following example.
 +
-.Example `ClusterLogForwarder` CR YAML file
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <1>
-  namespace: <2>
+  name: <name>
+  namespace: <namespace>
 spec:
-  pipelines:
-    - inputRefs: [ myAppLogData ] <3>
-      outputRefs: [ default ] <4>
-  inputs: <5>
-  - name: myAppLogData
-    application:
-      selector:
-        matchLabels: <6>
-          environment: production
-          app: nginx
-      namespaces: <7>
-      - app1
-      - app2
-  outputs: <8>
+  serviceAccount:
+    name: <service_account_name> #<1>
+  outputs:
   - 
-  ...
+  # ...
+  inputs:
+  - name: exampleAppLogData #<2>
+    type: application #<3>
+    application:
+      includes: #<4>
+      - namespace: app1
+      - namespace: app2
+      selector:
+        matchLabels: #<5>
+          environment: production
+          app: nginx
+  pipelines:
+  - inputRefs:
+    - exampleAppLogData
+    outputRefs:
+    # ...
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> Specify one or more comma-separated values from `inputs[].name`.
-<4> Specify one or more comma-separated values from `outputs[]`.
-<5> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
-<6> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
-<7> Optional: Specify one or more namespaces.
-<8> Specify one or more outputs to forward your log data to.
-
-. Optional: To restrict the gathering of log data to specific namespaces, use `inputs[].name.application.namespaces`, as shown in the preceding example.
+<1> Specify the service account name.
+<2> Specify a name for the input.
+<3> Specify the type as `application` to collect logs from applications.
+<4> Specify the set of namespaces to include when collecting logs.
+<5> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
 
 . Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
 .. For each unique combination of pod labels, create an additional `inputs[].name` section similar to the one shown.
@@ -72,4 +71,4 @@ $ oc create -f <filename>.yaml
 
 [role="_additional-resources"]
 .Additional resources
-* For more information on `matchLabels` in Kubernetes, see link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements[Resources that support set-based requirements].
+* link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements[Resources that support set-based requirements].
diff --git a/modules/cluster-logging-collector-log-forward-project.adoc b/modules/cluster-logging-collector-log-forward-project.adoc
index 1ff4cac71296..e92796d3a8a4 100644
--- a/modules/cluster-logging-collector-log-forward-project.adoc
+++ b/modules/cluster-logging-collector-log-forward-project.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc
+// * configuring/configuring-log-forwarding.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-collector-log-forward-project_{context}"]
@@ -15,71 +15,53 @@ To configure forwarding application logs from a project, you must create a `Clus
 * You must have a logging server that is configured to receive the logging data using the specified protocol or format.
 
 .Procedure
-
+
 . Create or edit a YAML file that defines the `ClusterLogForwarder` CR:
 +
 .Example `ClusterLogForwarder` CR
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: instance <1>
-  namespace: openshift-logging <2>
+  name: <name>
+  namespace: <namespace>
 spec:
+  serviceAccount:
+    name: <service_account_name>
   outputs:
-  - name: fluentd-server-secure <3>
-    type: fluentdForward <4>
-    url: 'tls://fluentdserver.security.example.com:24224' <5>
-    secret: <6>
-      name: fluentd-secret
-  - name: fluentd-server-insecure
-    type: fluentdForward
-    url: 'tcp://fluentdserver.home.example.com:24224'
-  inputs: <7>
-  - name: my-app-logs
-    application:
-      namespaces:
-      - my-project <8>
+  - name: <output_name>
+    type: <output_type>
+  inputs:
+  - name: my-app-logs #<1>
+    type: application #<2>
+    application:
+      includes: #<3>
+      - namespace: my-project
+  filters:
+  - name: my-project-labels
+    type: openshiftLabels
+    openshiftLabels: #<4>
+      project: my-project
+  - name: cluster-labels
+    type: openshiftLabels
+    openshiftLabels:
+      clusterId: C1234
   pipelines:
-  - name: forward-to-fluentd-insecure <9>
-    inputRefs: <10>
-    - my-app-logs
-    outputRefs: <11>
-    - fluentd-server-insecure
-    labels:
-      project: "my-project" <12>
-  - name: forward-to-fluentd-secure <13>
-    inputRefs:
-    - application <14>
-    - audit
-    - infrastructure
-    outputRefs:
-    - fluentd-server-secure
-    - default
-    labels:
-      clusterId: "C1234"
+  - name: <pipeline_name> #<5>
+    inputRefs:
+    - my-app-logs
+    outputRefs:
+    - <output_name>
+    filterRefs:
+    - my-project-labels
+    - cluster-labels
 ----
-<1> The name of the `ClusterLogForwarder` CR must be `instance`.
-<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
-<3> The name of the output.
-<4> The output type: `elasticsearch`, `fluentdForward`, `syslog`, or `kafka`.
-<5> The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
-<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and have *tls.crt*, *tls.key*, and *ca-bundle.crt* keys that each point to the certificates they represent.
-<7> The configuration for an input to filter application logs from the specified projects.
-<8> If no namespace is specified, logs are collected from all namespaces.
-<9> The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named `forward-to-fluentd-insecure` forwards logs from an input named `my-app-logs` to an output named `fluentd-server-insecure`.
-<10> A list of inputs.
-<11> The name of the output to use.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Configuration for a pipeline to send logs to other log aggregators.
-+
-* Optional: Specify a name for the pipeline.
-* Specify which log types to forward by using the pipeline: `application,` `infrastructure`, or `audit`.
-* Specify the name of the output to use when forwarding logs with this pipeline.
-* Optional: Specify the `default` output to forward logs to the default log store.
-* Optional: String. One or more labels to add to the logs.
-<14> Note that application logs from all namespaces are collected when using this configuration.
+<1> Specify the name for the input.
+<2> Specify the type as `application` to collect logs from applications.
+<3> Specify the set of namespaces and containers to include when collecting logs.
+<4> Specify the labels to be applied to log records passing through this pipeline. These labels appear in the `openshift.labels` map in the log record.
+<5> Specify a name for the pipeline.
 
 . Apply the `ClusterLogForwarder` CR by running the following command:
 +
diff --git a/modules/logging-forward-splunk.adoc b/modules/logging-forward-splunk.adoc
index 557073718864..69a43e012344 100644
--- a/modules/logging-forward-splunk.adoc
+++ b/modules/logging-forward-splunk.adoc
@@ -1,22 +1,17 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc
+// * configuring/configuring-log-forwarding.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="logging-forward-splunk_{context}"]
 = Forwarding logs to Splunk
 
-You can forward logs to the link:https://docs.splunk.com/Documentation/Splunk/9.0.0/Data/UsetheHTTPEventCollector[Splunk HTTP Event Collector (HEC)] in addition to, or instead of, the internal default {ocp-product-title} log store.
+You can forward logs to the Splunk HTTP Event Collector (HEC).
 
-[NOTE]
-====
-Using this feature with Fluentd is not supported.
-====
 
 .Prerequisites
 
-* {clo} 5.6 or later
-* A `ClusterLogging` instance with `vector` specified as the collector
-* Base64 encoded Splunk HEC token
+* You have installed the {clo}.
+* You have obtained a Base64-encoded Splunk HEC token.
 
 .Procedure
 
@@ -26,39 +21,55 @@ Using this feature with Fluentd is not supported.
 ----
 $ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<hec_token>
 ----
-+
+
 . Create or edit the `ClusterLogForwarder` Custom Resource (CR) using the template below:
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <1>
-  namespace: <2>
+  name: <name>
+  namespace: openshift-logging
 spec:
-  serviceAccountName: <3>
+  serviceAccount:
+    name: <service_account_name> #<1>
   outputs:
-  - name: splunk-receiver <4>
-    secret:
-      name: vector-splunk-secret <5>
-    type: splunk <6>
-    url: <7>
-  pipelines: <8>
-  - inputRefs:
+  - name: splunk-receiver #<2>
+    type: splunk #<3>
+    splunk:
+      url: '<splunk_hec_url>' #<4>
+      authentication:
+        token:
+          secretName: vector-splunk-secret #<5>
+          key: hecToken
+      index: '{.log_type||"undefined"}' #<6>
+      source: '{.log_source||"undefined"}' #<7>
+      indexedFields: ['.log_type', '.log_source'] #<8>
+      payloadKey: '.kubernetes' #<9>
+      tuning:
+        compression: gzip #<10>
+  pipelines:
+  - name: my-logs
+    inputRefs: #<11>
     - application
     - infrastructure
-    - name: <9>
     outputRefs:
-    - splunk-receiver <10>
+    - splunk-receiver #<12>
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> Specify a name for the output.
-<5> Specify the name of the secret that contains your HEC token.
-<6> Specify the output type as `splunk`.
-<7> Specify the URL (including port) of your Splunk HEC.
-<8> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
-<9> Optional: Specify a name for the pipeline.
-<10> Specify the name of the output to use when forwarding logs with this pipeline.
+<1> The name of your service account.
+<2> Specify a name for the output.
+<3> Specify the output type as `splunk`.
+<4> Specify the URL, including port, of your Splunk HEC.
+<5> Specify the name of the secret that contains your HEC token.
+<6> Specify the name of the index to send events to. If you do not specify an index, the default index that is defined in the Splunk server configuration is used. This field is optional.
+<7> Specify the source of events to be sent to this sink. You can configure dynamic per-event values. This field is optional.
+<8> Specify the fields to be added to the Splunk index. This field is optional.
+<9> Specify the record field to be used as the payload. This field is optional.
+<10> Specify the compression configuration, which can be either `gzip` or `none`. The default value is `none`. This field is optional.
+<11> Specify the input names.
+<12> Specify the name of the output to use when forwarding logs with this pipeline.
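+
+Optionally, before you configure forwarding, you can check that the HEC endpoint accepts events. The following command is only a sketch; it assumes that `<splunk_hec_url>` and `<hec_token>` are the same URL and token values that are used in the previous steps:
+
+[source,terminal]
+----
+$ curl -k "<splunk_hec_url>/services/collector/event" \
+  -H "Authorization: Splunk <hec_token>" \
+  -d '{"event": "test message"}'
+----
+
+If the token is valid, Splunk returns a JSON response such as `{"text": "Success", "code": 0}`.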
+
+[role="_additional-resources"]
+.Additional resources
+* link:https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector[Set up and use HTTP Event Collector in Splunk Web] in the Splunk documentation
diff --git a/modules/logging-forwarding-azure.adoc b/modules/logging-forwarding-azure.adoc
index ef642021dd04..54e3c344a1e8 100644
--- a/modules/logging-forwarding-azure.adoc
+++ b/modules/logging-forwarding-azure.adoc
@@ -4,19 +4,23 @@
 :_mod-docs-content-type: PROCEDURE
 [id="logging-forwarding-azure_{context}"]
 = Forwarding to Azure Monitor Logs
-With {logging} 5.9 and later, you can forward logs to link:https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs[Azure Monitor Logs] in addition to, or instead of, the default log store. This functionality is provided by the link:https://vector.dev/docs/reference/configuration/sinks/azure_monitor_logs/[Vector Azure Monitor Logs sink].
+
+You can forward logs to link:https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs[Azure Monitor Logs]. This functionality is provided by the link:https://vector.dev/docs/reference/configuration/sinks/azure_monitor_logs/[Vector Azure Monitor Logs sink].
 
 .Prerequisites
 
-* You are familiar with how to administer and create a `ClusterLogging` custom resource (CR) instance.
-* You are familiar with how to administer and create a `ClusterLogForwarder` CR instance.
-* You understand the `ClusterLogForwarder` CR specifications.
 * You have basic familiarity with Azure services.
 * You have an Azure account configured for Azure Portal or Azure CLI access.
 * You have obtained your Azure Monitor Logs primary or the secondary security key.
 * You have determined which log types to forward.
+* You have installed the {oc-first}.
+* You have installed the {clo}.
+* You have administrator permissions.
+
 
-To enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API:
+.Procedure
+
+. To enable log forwarding to Azure Monitor Logs by using the HTTP Data Collector API, create a secret with your shared key:
 
 [source,yaml]
 ----
@@ -32,7 +36,7 @@ data:
 ----
 <1> Must contain a primary or secondary key for the link:https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview[Log Analytics workspace] making the request.
 
-To obtain a link:https://learn.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key[shared key], you can use this command in Azure CLI:
+. To obtain a link:https://learn.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key[shared key], you can use the following command in Azure PowerShell:
 
 [source,text]
 ----
 Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"
 ----
 
-Create or edit your `ClusterLogForwarder` CR using the template matching your log selection.
+. Create or edit your `ClusterLogForwarder` CR using the template matching your log selection.
 
 .Forward all logs
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogForwarder"
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
 metadata:
-  name: instance
+  name: <name>
   namespace: openshift-logging
 spec:
+  serviceAccount:
+    name: <service_account_name> #<1>
   outputs:
   - name: azure-monitor
     type: azureMonitor
     azureMonitor:
-      customerId: my-customer-id # <1>
-      logType: my_log_type # <2>
-      secret:
-        name: my-secret
-  pipelines:
-  - name: app-pipeline
-    inputRefs:
-    - application
-    outputRefs:
-    - azure-monitor
-----
-<1> Unique identifier for the Log Analytics workspace. Required field.
-<2> link:https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-collector-api?tabs=powershell#record-type-and-properties[Azure record type] of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
-
-.Forward application and infrastructure logs
-[source,yaml]
-----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogForwarder"
-metadata:
-  name: instance
-  namespace: openshift-logging
-spec:
-  outputs:
-  - name: azure-monitor-app
-    type: azureMonitor
-    azureMonitor:
-      customerId: my-customer-id
-      logType: application_log # <1>
-      secret:
-        name: my-secret
-  - name: azure-monitor-infra
-    type: azureMonitor
-    azureMonitor:
-      customerId: my-customer-id
-      logType: infra_log # <1>
-      secret:
-        name: my-secret
+      customerId: my-customer-id #<2>
+      logType: my_log_type #<3>
+      authentication:
+        sharedKey:
+          secretName: my-secret
+          key: shared_key
   pipelines:
   - name: app-pipeline
     inputRefs:
     - application
     outputRefs:
-    - azure-monitor-app
-  - name: infra-pipeline
-    inputRefs:
-    - infrastructure
-    outputRefs:
-    - azure-monitor-infra
+    - azure-monitor
 ----
-<1> link:https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-collector-api?tabs=powershell#record-type-and-properties[Azure record type] of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
-
-.Advanced configuration options
-[source,yaml]
-----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogForwarder"
-metadata:
-  name: instance
-  namespace: openshift-logging
-spec:
-  outputs:
-  - name: azure-monitor
-    type: azureMonitor
-    azureMonitor:
-      customerId: my-customer-id
-      logType: my_log_type
-      azureResourceId: "/subscriptions/111111111" # <1>
-      host: "ods.opinsights.azure.com" # <2>
-      secret:
-        name: my-secret
-  pipelines:
-  - name: app-pipeline
-    inputRefs:
-    - application
-    outputRefs:
-    - azure-monitor
-----
-<1> Resource ID of the Azure resource the data should be associated with. Optional field.
-<2> Alternative host for dedicated Azure regions. Optional field. Default value is `ods.opinsights.azure.com`. Default value for Azure Government is `ods.opinsights.azure.us`.
+<1> The name of your service account.
+<2> Unique identifier for the Log Analytics workspace. Required field.
+<3> Record type of the data being submitted. It can contain only letters, numbers, and underscores (`_`), and cannot exceed 100 characters. For more information, see link:https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-collector-api?tabs=powershell#record-type-and-properties[Azure record type] in the Microsoft Azure documentation.
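+
+. Apply the `ClusterLogForwarder` CR by running the following command. The filename `azure-monitor-clf.yaml` is only an assumed example name for the file that contains the CR you created in the previous step:
++
+[source,terminal]
+----
+$ oc apply -f azure-monitor-clf.yaml
+----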