diff --git a/packages/crowdstrike/_dev/build/docs/README.md b/packages/crowdstrike/_dev/build/docs/README.md index f029a3906fa..d45e788e7e0 100644 --- a/packages/crowdstrike/_dev/build/docs/README.md +++ b/packages/crowdstrike/_dev/build/docs/README.md @@ -1,107 +1,79 @@ # CrowdStrike Integration -The [CrowdStrike](https://www.crowdstrike.com/) integration allows you to easily connect your CrowdStrike Falcon platform to Elastic for seamless onboarding of alerts and telemetry from CrowdStrike Falcon and Falcon Data Replicator. Elastic Security can leverage this data for security analytics including correlation, visualization and incident response. It provides support using four different modes for integrating CrowdStrike to the Elastic: +## Overview -1. **Falcon SIEM Connector**: This is a pre-built integration designed to connect CrowdStrike Falcon with Security Information and Event Management (SIEM) systems. It streamlines the flow of security data from CrowdStrike Falcon to the SIEM, providing a standardized and structured way of feeding information into the SIEM platform. It includes the following datasets for receiving logs: +The [CrowdStrike](https://www.crowdstrike.com/) integration allows you to easily connect your CrowdStrike Falcon platform to Elastic for seamless onboarding of alerts and telemetry from CrowdStrike Falcon and Falcon Data Replicator. Elastic Security can leverage this data for security analytics, including correlation, visualization, and incident response. -- `falcon` dataset: consists of endpoint data and Falcon platform audit data forwarded from Falcon SIEM Connector. +### Compatibility - **Log File Format and Location** +This integration is compatible with CrowdStrike Falcon SIEM-Connector-v2.0, REST API, and CrowdStrike Event Streams API. +For REST API support, this module has been tested against the **CrowdStrike API Version v1/v2**. - The CrowdStrike integration only supports JSON output format from the SIEM Connector. +### How it works - - Log files are written to multiple rotated output files based on the `output_path` setting in the `cs.falconhoseclient.cfg` file. - - The default output location for the Falcon SIEM Connector is `/var/log/crowdstrike/falconhoseclient/output`. - - By default, files named `output*` in `/var/log/crowdstrike/falconhoseclient` directory contain valid JSON event data and should be used as the source for ingestion. +The integration collects and ingests events from multiple CrowdStrike Falcon data sources into Elasticsearch for security analysis and visualization. - >Note: Files with names like `cs.falconhoseclient-*.log` in the same directory are primarily used for logging internal operations of the Falcon SIEM Connector and are not intended to be consumed by this integration. +![CrowdStrike Integration Flowchart](../img/crowdstrike-flowchart.png) -2. **CrowdStrike REST API**: This provides a programmatic interface to interact with the CrowdStrike Falcon platform. It allows users to perform various operations such as querying information about unified alerts and hosts/devices. It includes the following datasets for receiving logs: +1. **Falcon SIEM Connector**: -- `alert` dataset: It is typically used to retrieve detailed information about unified alerts generated by the CrowdStrike Falcon platform, via Falcon Intelligence Alert API - `/alerts/combined/alerts/v1`. + The Falcon SIEM Connector is a pre-built integration designed to connect CrowdStrike Falcon with Security Information and Event Management (SIEM) systems. 
It streamlines the flow of security data from CrowdStrike Falcon to the SIEM, providing a standardized and structured way of feeding information into the SIEM platform. The SIEM Connector collects event stream data and sends it to your SIEM. -- `host` dataset: It retrieves all the hosts/devices in your environment providing information such as device metadata, configuration, and status generated by the CrowdStrike Falcon platform, via Falcon Intelligence Host/Device API - `/devices/combined/devices/v1`. It is more focused to provide the management and monitoring information of devices such as login details, status, policies, configuration etc. + Events received from the SIEM Connector are indexed into the `falcon` dataset in Elasticsearch. -- `vulnerability` dataset: It retrieves all the vulnerabilities in your environment, providing information such as severity, status, confidence levels, remediation guidance, and affected hosts, as detected by the CrowdStrike Falcon platform, via the Falcon Spotlight Vulnerability API - `/spotlight/combined/vulnerabilities/v1`. +2. **CrowdStrike Event Streams API**: -3. **Falcon Data Replicator**: This Collect events in near real time from your endpoints and cloud workloads, identities and data. CrowdStrike Falcon Data Replicator (FDR) enables you with actionable insights to improve SOC performance. FDR contains near real-time data collected by the Falcon platform's single, lightweight agent. It includes the following datasets for receiving logs: + The Event Streams API continuously streams security logs from CrowdStrike Falcon, including authentication activity, cloud security posture management (CSPM), firewall logs, user activity, and XDR data. It captures real-time security events like user logins, cloud environment changes, network traffic, and advanced threat detections. The streaming integration provides continuous monitoring and analysis for proactive threat detection. It enhances visibility into user behavior, network security, and overall system health. This setup enables faster response capabilities to emerging security incidents. -- `fdr` dataset: consists of logs forwarded using the [Falcon Data Replicator](https://github.com/CrowdStrike/FDR). + Events retrieved from the Event Streams API are indexed into the `falcon` dataset in Elasticsearch. -4. **CrowdStrike Event Stream**: This streams security logs from CrowdStrike Event Stream, including authentication activity, cloud security posture management (CSPM), firewall logs, user activity, and XDR data. It captures real-time security events like user logins, cloud environment changes, network traffic, and advanced threat detections. The streaming integration provides continuous monitoring and analysis for proactive threat detection. It enhances visibility into user behavior, network security, and overall system health. This setup enables faster response capabilities to emerging security incidents. It includes the following datasets for receiving logs: +3. **CrowdStrike REST API**: -- `falcon` dataset: consists of streaming data forwarded from CrowdStrike Event Stream. + This provides a programmatic interface to interact with the CrowdStrike Falcon platform. It allows users to perform various operations such as querying information about unified alerts, hosts/devices, and vulnerabilities. 
+ + It includes the following datasets for receiving logs: + - `alert` dataset: It is typically used to retrieve detailed information about unified alerts generated by the CrowdStrike Falcon platform, via Falcon Intelligence Alert API - `/alerts/combined/alerts/v1`. -## Compatibility + - `host` dataset: It retrieves all the hosts/devices in your environment, providing information such as device metadata, configuration, and status generated by the CrowdStrike Falcon platform, via Falcon Intelligence Host/Device API - `/devices/combined/devices/v1`. It focuses on providing management and monitoring information for devices, such as login details, status, policies, and configuration. -This integration is compatible with CrowdStrike Falcon SIEM-Connector-v2.0, REST API, and CrowdStrike Event Streaming. -For Rest API support, this module has been tested against the **CrowdStrike API Version v1/v2**. + - `vulnerability` dataset: It retrieves all the vulnerabilities in your environment, providing information such as severity, status, confidence levels, remediation guidance, and affected hosts, as detected by the CrowdStrike Falcon platform, via the Falcon Spotlight Vulnerability API - `/spotlight/combined/vulnerabilities/v1`. -## Requirements +4. **Falcon Data Replicator (FDR)**: -### Agentless enabled integration -Agentless integrations allow you to collect data without having to manage Elastic Agent in your cloud. They make manual agent deployment unnecessary, so you can focus on your data instead of the agent that collects it. For more information, refer to [Agentless integrations](https://www.elastic.co/guide/en/serverless/current/security-agentless-integrations.html) and the [Agentless integrations FAQ](https://www.elastic.co/guide/en/serverless/current/agentless-integration-troubleshooting.html). + The FDR feed consists of regular transfers of data (data dumps), rather than an ongoing stream, containing data collected from your endpoints, cloud workloads, and identities by the Falcon platform’s lightweight agent. CrowdStrike Falcon Data Replicator (FDR) provides actionable insights to improve SOC performance. Because FDR is not an ongoing stream of data, it is not suitable for real-time alerting. -Agentless deployments are only supported in Elastic Serverless and Elastic Cloud environments. This functionality is in beta and is subject to change. Beta features are not subject to the support SLA of official GA features. + Logs received from the Falcon Data Replicator are indexed into the `fdr` dataset in Elasticsearch. -### Agent based installation +## What data does this integration collect? -Elastic Agent must be installed. For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). -You can install only one Elastic Agent per host. -Elastic Agent is required to stream data from the GCP Pub/Sub or REST API and ship the data to Elastic, where the events will then be processed via the integration's ingest pipelines. +This integration collects: +- **Sensor events** — Endpoint telemetry generated by the Falcon sensor installed on your hosts. +- **Cloud events** — Non-sensor events generated in the CrowdStrike cloud, such as detection summaries, cloud security posture (CSPM) findings, and other platform activity. +- **Detections and automated leads** — Unified detection events and automated threat leads triggered in the Falcon console. 
+ +- **Host inventory** — Information about all registered hosts and devices, including configuration, policy, and operational details. +- **Vulnerability data** — Insights into detected vulnerabilities with severity, affected assets, and remediation details. -## Setup +## What do I need to use this integration? -### Collect data from CrowdStrike REST API +This section describes the requirements and configuration details for each supported data source. -The following parameters from your CrowdStrike instance are required: +### Collect data via CrowdStrike Falcon SIEM Connector -1. Client ID -2. Client Secret -3. Token url -4. API Endpoint url -5. Required scopes for each data stream : +To collect data using the Falcon SIEM Connector, you need the file path where the connector stores event data received from the Event Streams. +This is the same as the `output_path` setting in the `cs.falconhoseclient.cfg` configuration file. - | Data Stream | Scope | - | ------------- | ------------- | - | Alert | read:alert | - | Host | read:host | - | Vulnerability | read:vulnerability | +The integration supports only JSON output format from the Falcon SIEM Connector. Other formats such as Syslog and CEF are not supported. -### Collect data from CrowdStrike Event Stream +Additionally, this integration collects logs only through the file system. Ingestion via a Syslog server is not supported. -The following parameters from your CrowdStrike instance are required: +:::{note} +The log files are written to multiple rotated output files based on the `output_path` setting in the `cs.falconhoseclient.cfg` file. The default output location for the Falcon SIEM Connector is `/var/log/crowdstrike/falconhoseclient/output`. +By default, files named `output*` in the `/var/log/crowdstrike/falconhoseclient` directory contain valid JSON event data and should be used as the source for ingestion. -1. Client ID -2. Client Secret -3. Token URL -4. API Endpoint URL -5. CrowdStrike App ID -6. Required scopes for event stream: +Files with names like `cs.falconhoseclient-*.log` in the same directory are primarily used for logging internal operations of the Falcon SIEM Connector and are not intended to be consumed by this integration. +::: - | Data Stream | Scope | - | ------------- | ------------------- | - | Event Stream | read: Event streams | - -## Logs - -### Alert - -This is the `Alert` dataset. - -#### Example - -{{event "alert"}} - -{{fields "alert"}} - -### Falcon - -Contains endpoint data and CrowdStrike Falcon platform audit data forwarded from Falcon SIEM Connector. - -#### Falcon SIEM Connector configuration file - -By default, the configuration file located at `/opt/crowdstrike/etc/cs.falconhoseclient.cfg` provides configuration options related to the events collected by Falcon SIEM Connector. +By default, the configuration file for the Falcon SIEM Connector is located at `/opt/crowdstrike/etc/cs.falconhoseclient.cfg`, which provides configuration options related to the events collected by Falcon SIEM Connector. Parts of the configuration file called `EventTypeCollection` and `EventSubTypeCollection` provides a list of event types that the connector should collect. @@ -124,11 +96,42 @@ Current supported event types are: - XDR Detection events - Scheduled Report Notification events -{{fields "falcon"}} +### Collect data via CrowdStrike Event Streams -{{event "falcon"}} +The following parameters from your CrowdStrike instance are required: -### FDR +1. Client ID +2. Client Secret +3. Token URL +4. API Endpoint URL +5. CrowdStrike App ID +6. Required scopes for event streams: + + | Data Stream | Scope | + | ------------- | ------------------- | + | Event Stream | read: Event streams | + +:::{note} +You can use the Falcon SIEM Connector as an alternative to the Event Streams API. +:::
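+
+The sketch below (illustrative only, and not part of the integration) shows how the Client ID, Client Secret, and Token URL parameters relate. The token URL shown is an assumption based on the US-1 `api.crowdstrike.com` cloud; substitute the Token URL for your region.
+
+```python
+import requests
+
+# Exchange the OAuth2 client credentials for a bearer token (Token URL).
+resp = requests.post(
+    "https://api.crowdstrike.com/oauth2/token",  # assumed US-1 token URL
+    data={"client_id": "<CLIENT_ID>", "client_secret": "<CLIENT_SECRET>"},
+)
+resp.raise_for_status()
+token = resp.json()["access_token"]
+
+# Later requests to the API Endpoint URL send the token as a bearer header.
+headers = {"Authorization": f"Bearer {token}"}
+```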
+ +### Collect data via CrowdStrike REST API + +The following parameters from your CrowdStrike instance are required: + +1. Client ID +2. Client Secret +3. Token URL +4. API Endpoint URL +5. Required scopes for each data stream: + + | Data Stream | Scope | + | ------------- | ------------- | + | Alert | read:alert | + | Host | read:host | + | Vulnerability | read:vulnerability | + +### Collect data via CrowdStrike Falcon Data Replicator (FDR) The CrowdStrike Falcon Data Replicator (FDR) allows CrowdStrike users to replicate FDR data from CrowdStrike managed S3 buckets. CrowdStrike writes notification events to a CrowdStrike managed SQS queue when new data is available in S3. This integration can be used in two ways. It can consume SQS notifications directly from the CrowdStrike managed SQS queue or it can be used in conjunction with the FDR tool that replicates the data to a self-managed S3 bucket and the integration can read from there. In both cases SQS messages are deleted after they are processed. This allows you to operate more than one Elastic Agent with this integration if needed and not have duplicate events, but it means you cannot ingest the data a second time. @@ -145,8 +148,7 @@ Agent with this integration if needed and not have duplicate events, but it mean This is the simplest way to setup the integration, and also the default. -You need to set the integration up with the SQS queue URL provided by Crowdstrike FDR. -Ensure the `Is FDR queue` option is enabled. +You need to set the integration up with the SQS queue URL provided by CrowdStrike FDR. #### Use with FDR tool and data replicated to a self-managed S3 bucket @@ -159,9 +161,10 @@ You need to follow the steps below: - Configure your S3 bucket to send object created notifications to your SQS queue. - Follow the [FDR tool](https://github.com/CrowdStrike/FDR) instructions to replicate data to your own S3 bucket. - Configure the integration to read from your self-managed SQS topic. -- Disable the `Is FDR queue` option in the integration. -> NOTE: While the FDR tool can replicate the files from S3 to your local file system, this integration cannot read those files because they are gzip compressed, and the log file input does not support reading compressed files. +:::{note} +While the FDR tool can replicate the files from S3 to your local file system, this integration cannot read those files because they are gzip compressed, and the log file input does not support reading compressed files. +::: #### Configuration for the S3 input @@ -254,9 +257,30 @@ and/or `session_token`. Please see[Create Shared Credentials File](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/create-shared-credentials-file.html) for more details. -#### Troubleshooting +## How do I deploy this integration? + +1. In Kibana, go to **Management > Integrations**. +2. In the "Search for integrations" search bar, type **CrowdStrike**. +3. Click the **CrowdStrike** integration from the search results. +4. Click the **Add CrowdStrike** button to add the integration. +5. Configure the integration. +6. Click **Save and Continue** to save the integration. + +### Agentless enabled integration + +Agentless integrations allow you to collect data without having to manage Elastic Agent in your cloud. They make manual agent deployment unnecessary, so you can focus on your data instead of the agent that collects it. For more information, refer to [Agentless integrations](https://www.elastic.co/guide/en/serverless/current/security-agentless-integrations.html) and the [Agentless integrations FAQ](https://www.elastic.co/guide/en/serverless/current/agentless-integration-troubleshooting.html). + +Agentless deployments are only supported in Elastic Serverless and Elastic Cloud environments. 
This functionality is in beta and is subject to change. Beta features are not subject to the support SLA of official GA features. + +### Agent based installation + +Elastic Agent must be installed. For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). +You can install only one Elastic Agent per host. +Elastic Agent is required to stream data from the Event Streams or REST API and ship the data to Elastic, where the events will then be processed via the integration's ingest pipelines. -##### Vulnerability API returns 404 Not found +## Troubleshooting + +### Vulnerability API returns 404 Not found This error may occur for the following reasons: 1. Too many records in the response. @@ -264,14 +288,14 @@ This error may occur for the following reasons: To resolve this, adjust the `Batch Size` setting in the integration to reduce the number of records returned per pagination call. -##### Duplicate Events +### Duplicate Events The option `Enable Data Deduplication` allows you to avoid consuming duplicate events. By default, this option is set to `false`, and so duplicate events may be ingested. When this option is enabled, a [fingerprint processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html) is used to calculate a hash from a set of Crowdstrike fields that uniquely identifies the event. The hash is assigned to the Elasticsearch [`_id`](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-id-field.html) field that makes the document unique, thus avoiding duplicates. If duplicate events are ingested, to help find them, the integration `event.id` field is populated by concatenating a few Crowdstrike fields that uniquely identifies the event. These fields are `id`, `aid`, and `cid` from the Crowdstrike event. The fields are separated with pipe `|`. For example, if your Crowdstrike event contains `id: 123`, `aid: 456`, and `cid: 789` then the `event.id` would be `123|456|789`. -#### Alert severity mapping +### Alert severity mapping The values used in `event.severity` are consistent with Elastic Detection Rules. @@ -284,7 +308,7 @@ The values used in `event.severity` are consistent with Elastic Detection Rules. If the severity name is not available from the original document, it is determined from the numeric severity value according to the following table. -| Crowdstrike `severity` | Severity Name | +| Crowdstrike Severity | Severity Name | |------------------------|:-------------:| | 0 - 19 | info | | 20 - 39 | low | @@ -292,6 +316,32 @@ If the severity name is not determin | 40 - 59 | medium | | 60 - 79 | high | | 80 - 100 | critical |
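+
+For illustration, the two tables above can be restated in a short sketch. This is not the integration's ingest pipeline code; it only expresses the same mapping.
+
+```python
+# Severity name -> event.severity, per the first table above.
+NAME_TO_EVENT_SEVERITY = {
+    "low": 21, "info": 21, "informational": 21,
+    "medium": 47,
+    "high": 73,
+    "critical": 99,
+}
+
+def severity_name(score: int) -> str:
+    """Severity name for a numeric CrowdStrike severity (0-100), per the second table."""
+    if score <= 19:
+        return "info"
+    if score <= 39:
+        return "low"
+    if score <= 59:
+        return "medium"
+    if score <= 79:
+        return "high"
+    return "critical"
+```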
+## Logs + +### Alert + +This is the `alert` dataset. + +#### Example + +{{event "alert"}} + +{{fields "alert"}} + +### Falcon + +This is the `falcon` dataset. + +#### Example + +{{fields "falcon"}} + +{{event "falcon"}} + +### FDR + +This is the `fdr` dataset. + #### Example {{fields "fdr"}} @@ -300,7 +350,7 @@ ### Host -This is the `Host` dataset. +This is the `host` dataset. #### Example @@ -310,7 +360,7 @@ ### Vulnerability -This is the `Vulnerability` dataset. +This is the `vulnerability` dataset. #### Example diff --git a/packages/crowdstrike/changelog.yml b/packages/crowdstrike/changelog.yml index 9b37d6f7238..a14b7756af8 100644 --- a/packages/crowdstrike/changelog.yml +++ b/packages/crowdstrike/changelog.yml @@ -1,4 +1,9 @@ # newer versions go on top +- version: "2.8.1" + changes: + - description: Update the CrowdStrike Integration documentation. + type: enhancement + link: https://github.com/elastic/integrations/pull/15927 - version: "2.8.0" changes: - description: Add support for HTTP proxy configuration for Event Streams. Add support for proxy header configuration for CrowdStrike APIs. diff --git a/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-default.expected b/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-default.expected index d040fc86f21..71a83cd255c 100644 --- a/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-default.expected +++ b/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-default.expected @@ -9,7 +9,6 @@ inputs: - allow_deprecated_use: true data_stream: dataset: crowdstrike.falcon - type: logs exclude_files: - \.gz$ multiline.match: after diff --git a/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-streaming.expected b/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-streaming.expected index 47e9fbc7edd..76892c8579b 100644 --- a/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-streaming.expected +++ b/packages/crowdstrike/data_stream/falcon/_dev/test/policy/test-streaming.expected @@ -13,7 +13,6 @@ inputs: crowdstrike_app_id: test_app_id data_stream: dataset: crowdstrike.falcon - type: logs processors: null program: | state.response.decode_json().as(body, { diff --git a/packages/crowdstrike/data_stream/falcon/manifest.yml b/packages/crowdstrike/data_stream/falcon/manifest.yml index 02925459151..44c99016710 100644 --- a/packages/crowdstrike/data_stream/falcon/manifest.yml +++ b/packages/crowdstrike/data_stream/falcon/manifest.yml @@ -1,5 +1,5 @@ type: logs -title: Crowdstrike falcon logs +title: CrowdStrike Falcon logs streams: - input: logfile enabled: false @@ -40,12 +40,12 @@ streams: Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. template_path: log.yml.hbs - title: Crowdstrike falcon logs (log) - description: Collect Crowdstrike falcon logs using log input + title: CrowdStrike Falcon events + description: Collect CrowdStrike Falcon events through the Falcon SIEM Connector. - input: streaming template_path: streaming.yml.hbs - title: CrowdStrike Falcon Logs - description: Collect Falcon logs from CrowdStrike Event Stream. + title: CrowdStrike Falcon events + description: Collect CrowdStrike Falcon events using the Event Streams API. enabled: false vars: - name: url @@ -80,7 +80,7 @@ streams: - name: app_id type: text title: App ID - description: App ID for the CrowdStrike. + description: App ID is an alphanumeric string that identifies an event stream. App IDs can have a maximum of 32 characters and must be unique to each active event stream. 
multi: false required: true show_user: true diff --git a/packages/crowdstrike/data_stream/fdr/manifest.yml b/packages/crowdstrike/data_stream/fdr/manifest.yml index ae80783f57c..5e8f0306626 100644 --- a/packages/crowdstrike/data_stream/fdr/manifest.yml +++ b/packages/crowdstrike/data_stream/fdr/manifest.yml @@ -11,7 +11,7 @@ streams: - input: aws-s3 template_path: aws-s3.yml.hbs title: Falcon Data Replicator logs - description: Collect Falcon Data Replicator logs using s3 input + description: Collect Falcon Data Replicator logs using AWS S3 and AWS SQS. enabled: false vars: - name: access_key_id @@ -226,7 +226,7 @@ streams: `event.timezone` and `log.offset`. - input: logfile title: Falcon Data Replicator logs - description: Collect Falcon Data Replicator logs using a log file + description: Collect Falcon Data Replicator logs through the file system. enabled: false vars: - name: paths diff --git a/packages/crowdstrike/docs/README.md b/packages/crowdstrike/docs/README.md index f6a5ed62366..536de6c9c7b 100644 --- a/packages/crowdstrike/docs/README.md +++ b/packages/crowdstrike/docs/README.md @@ -1,58 +1,121 @@ # CrowdStrike Integration -The [CrowdStrike](https://www.crowdstrike.com/) integration allows you to easily connect your CrowdStrike Falcon platform to Elastic for seamless onboarding of alerts and telemetry from CrowdStrike Falcon and Falcon Data Replicator. Elastic Security can leverage this data for security analytics including correlation, visualization and incident response. It provides support using four different modes for integrating CrowdStrike to the Elastic: +## Overview -1. **Falcon SIEM Connector**: This is a pre-built integration designed to connect CrowdStrike Falcon with Security Information and Event Management (SIEM) systems. It streamlines the flow of security data from CrowdStrike Falcon to the SIEM, providing a standardized and structured way of feeding information into the SIEM platform. It includes the following datasets for receiving logs: +The [CrowdStrike](https://www.crowdstrike.com/) integration allows you to easily connect your CrowdStrike Falcon platform to Elastic for seamless onboarding of alerts and telemetry from CrowdStrike Falcon and Falcon Data Replicator. Elastic Security can leverage this data for security analytics, including correlation, visualization, and incident response. -- `falcon` dataset: consists of endpoint data and Falcon platform audit data forwarded from Falcon SIEM Connector. +### Compatibility - **Log File Format and Location** +This integration is compatible with CrowdStrike Falcon SIEM-Connector-v2.0, REST API, and CrowdStrike Event Streams API. +For REST API support, this module has been tested against the **CrowdStrike API Version v1/v2**. - The CrowdStrike integration only supports JSON output format from the SIEM Connector. +### How it works - - Log files are written to multiple rotated output files based on the `output_path` setting in the `cs.falconhoseclient.cfg` file. - - The default output location for the Falcon SIEM Connector is `/var/log/crowdstrike/falconhoseclient/output`. - - By default, files named `output*` in `/var/log/crowdstrike/falconhoseclient` directory contain valid JSON event data and should be used as the source for ingestion. +The integration collects and ingests events from multiple CrowdStrike Falcon data sources into Elasticsearch for security analysis and visualization. 
- >Note: Files with names like `cs.falconhoseclient-*.log` in the same directory are primarily used for logging internal operations of the Falcon SIEM Connector and are not intended to be consumed by this integration. +![CrowdStrike Integration Flowchart](../img/crowdstrike-flowchart.png) -2. **CrowdStrike REST API**: This provides a programmatic interface to interact with the CrowdStrike Falcon platform. It allows users to perform various operations such as querying information about unified alerts and hosts/devices. It includes the following datasets for receiving logs: +1. **Falcon SIEM Connector**: -- `alert` dataset: It is typically used to retrieve detailed information about unified alerts generated by the CrowdStrike Falcon platform, via Falcon Intelligence Alert API - `/alerts/combined/alerts/v1`. + The Falcon SIEM Connector is a pre-built integration designed to connect CrowdStrike Falcon with Security Information and Event Management (SIEM) systems. It streamlines the flow of security data from CrowdStrike Falcon to the SIEM, providing a standardized and structured way of feeding information into the SIEM platform. The SIEM Connector collects event stream data and sends it to your SIEM. -- `host` dataset: It retrieves all the hosts/devices in your environment providing information such as device metadata, configuration, and status generated by the CrowdStrike Falcon platform, via Falcon Intelligence Host/Device API - `/devices/combined/devices/v1`. It is more focused to provide the management and monitoring information of devices such as login details, status, policies, configuration etc. + Events received from the SIEM Connector are indexed into the `falcon` dataset in Elasticsearch. -- `vulnerability` dataset: It retrieves all the vulnerabilities in your environment, providing information such as severity, status, confidence levels, remediation guidance, and affected hosts, as detected by the CrowdStrike Falcon platform, via the Falcon Spotlight Vulnerability API - `/spotlight/combined/vulnerabilities/v1`. +2. **CrowdStrike Event Streams API**: -3. **Falcon Data Replicator**: This Collect events in near real time from your endpoints and cloud workloads, identities and data. CrowdStrike Falcon Data Replicator (FDR) enables you with actionable insights to improve SOC performance. FDR contains near real-time data collected by the Falcon platform's single, lightweight agent. It includes the following datasets for receiving logs: + The Event Streams API continuously streams security logs from CrowdStrike Falcon, including authentication activity, cloud security posture management (CSPM), firewall logs, user activity, and XDR data. It captures real-time security events like user logins, cloud environment changes, network traffic, and advanced threat detections. The streaming integration provides continuous monitoring and analysis for proactive threat detection. It enhances visibility into user behavior, network security, and overall system health. This setup enables faster response capabilities to emerging security incidents. -- `fdr` dataset: consists of logs forwarded using the [Falcon Data Replicator](https://github.com/CrowdStrike/FDR). + Events retrieved from the Event Streams API are indexed into the `falcon` dataset in Elasticsearch. -4. **CrowdStrike Event Stream**: This streams security logs from CrowdStrike Event Stream, including authentication activity, cloud security posture management (CSPM), firewall logs, user activity, and XDR data. 
It captures real-time security events like user logins, cloud environment changes, network traffic, and advanced threat detections. The streaming integration provides continuous monitoring and analysis for proactive threat detection. It enhances visibility into user behavior, network security, and overall system health. This setup enables faster response capabilities to emerging security incidents. It includes the following datasets for receiving logs: +3. **CrowdStrike REST API**: -- `falcon` dataset: consists of streaming data forwarded from CrowdStrike Event Stream. + This provides a programmatic interface to interact with the CrowdStrike Falcon platform. It allows users to perform various operations such as querying information about unified alerts, hosts/devices, and vulnerabilities. + + It includes the following datasets for receiving logs: + - `alert` dataset: It is typically used to retrieve detailed information about unified alerts generated by the CrowdStrike Falcon platform, via Falcon Intelligence Alert API - `/alerts/combined/alerts/v1`. -## Compatibility + - `host` dataset: It retrieves all the hosts/devices in your environment, providing information such as device metadata, configuration, and status generated by the CrowdStrike Falcon platform, via Falcon Intelligence Host/Device API - `/devices/combined/devices/v1`. It focuses on providing management and monitoring information for devices, such as login details, status, policies, and configuration. -This integration is compatible with CrowdStrike Falcon SIEM-Connector-v2.0, REST API, and CrowdStrike Event Streaming. -For Rest API support, this module has been tested against the **CrowdStrike API Version v1/v2**. + - `vulnerability` dataset: It retrieves all the vulnerabilities in your environment, providing information such as severity, status, confidence levels, remediation guidance, and affected hosts, as detected by the CrowdStrike Falcon platform, via the Falcon Spotlight Vulnerability API - `/spotlight/combined/vulnerabilities/v1`. -## Requirements +4. **Falcon Data Replicator (FDR)**: -### Agentless enabled integration -Agentless integrations allow you to collect data without having to manage Elastic Agent in your cloud. They make manual agent deployment unnecessary, so you can focus on your data instead of the agent that collects it. For more information, refer to [Agentless integrations](https://www.elastic.co/guide/en/serverless/current/security-agentless-integrations.html) and the [Agentless integrations FAQ](https://www.elastic.co/guide/en/serverless/current/agentless-integration-troubleshooting.html). + The FDR feed consists of regular transfers of data (data dumps), rather than an ongoing stream, containing data collected from your endpoints, cloud workloads, and identities by the Falcon platform's lightweight agent. CrowdStrike Falcon Data Replicator (FDR) provides actionable insights to improve SOC performance. Because FDR is not an ongoing stream of data, it is not suitable for real-time alerting. -Agentless deployments are only supported in Elastic Serverless and Elastic Cloud environments. This functionality is in beta and is subject to change. Beta features are not subject to the support SLA of official GA features. + Logs received from the Falcon Data Replicator are indexed into the `fdr` dataset in Elasticsearch. -### Agent based installation +## What data does this integration collect? -Elastic Agent must be installed. 
For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). -You can install only one Elastic Agent per host. -Elastic Agent is required to stream data from the GCP Pub/Sub or REST API and ship the data to Elastic, where the events will then be processed via the integration's ingest pipelines. +This integration collects: +- **Sensor events** — Endpoint telemetry generated by the Falcon sensor installed on your hosts. +- **Cloud events** — Non-sensor events generated in the CrowdStrike cloud, such as detection summaries, cloud security posture (CSPM) findings, and other platform activity. +- **Detections and automated leads** — Unified detection events and automated threat leads triggered in the Falcon console. +- **Host inventory** — Information about all registered hosts and devices, including configuration, policy, and operational details. +- **Vulnerability data** — Insights into detected vulnerabilities with severity, affected assets, and remediation details. + +## What do I need to use this integration? + +This section describes the requirements and configuration details for each supported data source. + +### Collect data via CrowdStrike Falcon SIEM Connector + +To collect data using the Falcon SIEM Connector, you need the file path where the connector stores event data received from the Event Streams. +This is the same as the `output_path` setting in the `cs.falconhoseclient.cfg` configuration file. + +The integration supports only JSON output format from the Falcon SIEM Connector. Other formats such as Syslog and CEF are not supported. + +Additionally, this integration collects logs only through the file system. Ingestion via a Syslog server is not supported. + +:::{note} +The log files are written to multiple rotated output files based on the `output_path` setting in the `cs.falconhoseclient.cfg` file. The default output location for the Falcon SIEM Connector is `/var/log/crowdstrike/falconhoseclient/output`. +By default, files named `output*` in the `/var/log/crowdstrike/falconhoseclient` directory contain valid JSON event data and should be used as the source for ingestion. + +Files with names like `cs.falconhoseclient-*.log` in the same directory are primarily used for logging internal operations of the Falcon SIEM Connector and are not intended to be consumed by this integration. +::: + +By default, the configuration file for the Falcon SIEM Connector is located at `/opt/crowdstrike/etc/cs.falconhoseclient.cfg`, which provides configuration options related to the events collected by Falcon SIEM Connector. + +Parts of the configuration file called `EventTypeCollection` and `EventSubTypeCollection` provide a list of event types that the connector should collect. + +Currently supported event types are: +- DataProtectionDetectionSummaryEvent +- DetectionSummaryEvent +- EppDetectionSummaryEvent +- IncidentSummaryEvent +- UserActivityAuditEvent +- AuthActivityAuditEvent +- FirewallMatchEvent +- RemoteResponseSessionStartEvent +- RemoteResponseSessionEndEvent +- CSPM Streaming events +- CSPM Search events +- IDP Incidents +- IDP Summary events +- Mobile Detection events +- Recon Notification events +- XDR Detection events +- Scheduled Report Notification events + +### Collect data via CrowdStrike Event Streams + +The following parameters from your CrowdStrike instance are required: + +1. Client ID +2. Client Secret +3. Token URL +4. API Endpoint URL +5. CrowdStrike App ID +6. Required scopes for event streams: + + | Data Stream | Scope | + | ------------- | ------------------- | + | Event Stream | read: Event streams | -## Setup +:::{note} +You can use the Falcon SIEM Connector as an alternative to the Event Streams API. +:::
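+
+The sketch below (illustrative only, and not part of the integration) shows how the Client ID, Client Secret, and Token URL parameters relate. The token URL shown is an assumption based on the US-1 `api.crowdstrike.com` cloud; substitute the Token URL for your region.
+
+```python
+import requests
+
+# Exchange the OAuth2 client credentials for a bearer token (Token URL).
+resp = requests.post(
+    "https://api.crowdstrike.com/oauth2/token",  # assumed US-1 token URL
+    data={"client_id": "<CLIENT_ID>", "client_secret": "<CLIENT_SECRET>"},
+)
+resp.raise_for_status()
+token = resp.json()["access_token"]
+
+# Later requests to the API Endpoint URL send the token as a bearer header.
+headers = {"Authorization": f"Bearer {token}"}
+```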
-### Collect data from CrowdStrike REST API +### Collect data via CrowdStrike REST API The following parameters from your CrowdStrike instance are required: @@ -60,7 +123,7 @@ 2. Client Secret 3. Token url 4. API Endpoint url -5. Required scopes for each data stream : +5. Required scopes for each data stream: | Data Stream | Scope | | ------------- | ------------- | | Alert | read:alert | | Host | read:host | | Vulnerability | read:vulnerability | -### Collect data from CrowdStrike Event Stream +### Collect data via CrowdStrike Falcon Data Replicator (FDR) -The following parameters from your CrowdStrike instance are required: +The CrowdStrike Falcon Data Replicator (FDR) allows CrowdStrike users to replicate FDR data from CrowdStrike +managed S3 buckets. CrowdStrike writes notification events to a CrowdStrike managed SQS queue when new data is +available in S3. -1. Client ID -2. Client Secret -3. Token URL -4. API Endpoint URL -5. CrowdStrike App ID -6. Required scopes for event stream: +This integration can be used in two ways. It can consume SQS notifications directly from the CrowdStrike managed +SQS queue or it can be used in conjunction with the FDR tool that replicates the data to a self-managed S3 bucket +and the integration can read from there. - | Data Stream | Scope | - | ------------- | ------------------- | - | Event Stream | read: Event streams | +In both cases SQS messages are deleted after they are processed. This allows you to operate more than one Elastic +Agent with this integration if needed and not have duplicate events, but it means you cannot ingest the data a second time. + +#### Use with CrowdStrike managed S3/SQS + +This is the simplest way to set up the integration, and also the default. + +You need to set the integration up with the SQS queue URL provided by CrowdStrike FDR. + +#### Use with FDR tool and data replicated to a self-managed S3 bucket + +This option can be used if you want to archive the raw CrowdStrike data. + +You need to follow the steps below: + +- Create an S3 bucket to receive the logs. +- Create an SQS queue. +- Configure your S3 bucket to send object created notifications to your SQS queue. +- Follow the [FDR tool](https://github.com/CrowdStrike/FDR) instructions to replicate data to your own S3 bucket. +- Configure the integration to read from your self-managed SQS queue. + +:::{note} +While the FDR tool can replicate the files from S3 to your local file system, this integration cannot read those files because they are gzip compressed, and the log file input does not support reading compressed files. +::: + +#### Configuration for the S3 input + +AWS credentials are required for running this integration if you want to use the S3 input. + +##### Configuration parameters +* `access_key_id`: first part of access key. +* `secret_access_key`: second part of access key. +* `session_token`: required when using temporary security credentials. +* `credential_profile_name`: profile name in shared credentials file. +* `shared_credential_file`: directory of the shared credentials file. +* `endpoint`: URL of the entry point for an AWS web service. +* `role_arn`: AWS IAM Role to assume.
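+
+For orientation, the sketch below illustrates the flow that the AWS S3 input automates: poll SQS for an FDR notification, download the gzip-compressed objects it references from S3, then delete the message. It is illustrative only; the queue URL and region are placeholders, and the notification fields (`bucket`, `files[].path`) are assumptions based on the FDR notification format, not code from this integration.
+
+```python
+import gzip
+import json
+
+import boto3  # assumes AWS credentials are available, as described below
+
+queue_url = "https://sqs.us-west-1.amazonaws.com/123456789012/fdr-queue"  # placeholder
+sqs = boto3.client("sqs", region_name="us-west-1")
+s3 = boto3.client("s3", region_name="us-west-1")
+
+resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
+for msg in resp.get("Messages", []):
+    notification = json.loads(msg["Body"])
+    for f in notification.get("files", []):
+        obj = s3.get_object(Bucket=notification["bucket"], Key=f["path"])
+        # FDR objects are gzip-compressed, newline-delimited JSON events.
+        for line in gzip.decompress(obj["Body"].read()).splitlines():
+            event = json.loads(line)
+    # Deleting the message is what prevents a second ingestion of the same data.
+    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
+```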
+ +##### Credential Types +There are three types of AWS credentials that can be used: + +- access keys, +- temporary security credentials, and +- IAM role ARN. + +##### Access keys + +`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the two parts of access keys. +They are long-term credentials for an IAM user, or the AWS account root user. +Please see [AWS Access Keys and Secret Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) +for more details. + +##### Temporary security credentials + +Temporary security credentials have a limited lifetime and consist of an +access key ID, a secret access key, and a security token, which is typically returned +from `GetSessionToken`. + +MFA-enabled IAM users need to submit an MFA code +while calling `GetSessionToken`. `default_region` identifies the AWS Region +whose servers you want to send your first API request to by default. + +This is typically the Region closest to you, but it can be any Region. Please see +[Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) +for more details. + +The `sts get-session-token` AWS CLI command can be used to generate temporary credentials. +For example, with MFA enabled: +```sh +aws> sts get-session-token --serial-number arn:aws:iam::1234:mfa/your-email@example.com --duration-seconds 129600 --token-code 123456 +``` + +Because temporary security credentials are short term, after they expire, the +user needs to generate new ones and manually update the package configuration in +order to continue collecting data. + +This will cause data loss if the configuration is not updated with new credentials before the old ones expire. + +##### IAM role ARN + +An IAM role is an IAM identity that you can create in your account that has +specific permissions that determine what the identity can and cannot do in AWS. + +A role does not have standard long-term credentials such as a password or access +keys associated with it. Instead, when you assume a role, it provides you with +temporary security credentials for your role session. +IAM role Amazon Resource Name (ARN) can be used to specify which AWS IAM role to assume to generate +temporary credentials. + +Please see [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more details. + +##### Supported Formats +1. Use access keys: Access keys include `access_key_id`, `secret_access_key` +and/or `session_token`. +2. Use `role_arn`: `role_arn` is used to specify which AWS IAM role to assume + for generating temporary credentials. + If `role_arn` is given, the package will check if access keys are given. + If not, the package will check for credential profile name. + If neither is given, the default credential profile will be used. + + Please make sure credentials are given under either a credential profile or + access keys. +3. Use `credential_profile_name` and/or `shared_credential_file`: + If `access_key_id`, `secret_access_key` and `role_arn` are all not given, then + the package will check for `credential_profile_name`. + If you use different credentials for different tools or applications, you can use profiles to + configure multiple access keys in the same configuration file. + If there is no `credential_profile_name` given, the default profile will be used. + `shared_credential_file` is optional to specify the directory of your shared + credentials file. + If it's empty, the default directory will be used. + On Windows, the shared credentials file is at `C:\Users\\.aws\credentials`. + On Linux, macOS, or Unix, the file is located at `~/.aws/credentials`. + Please see [Create Shared Credentials File](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/create-shared-credentials-file.html) + for more details.
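+
+The precedence described above can be summarized with a short sketch (not the package's actual implementation):
+
+```python
+def resolve_aws_credentials(cfg: dict) -> dict:
+    """Mirror the order described in Supported Formats."""
+    # 1. Explicit access keys win, optionally with a session token.
+    if cfg.get("access_key_id") and cfg.get("secret_access_key"):
+        keys = ("access_key_id", "secret_access_key", "session_token")
+        return {k: cfg[k] for k in keys if cfg.get(k)}
+    # 2. Otherwise assume role_arn, sourcing base credentials from access
+    #    keys or a credential profile (the default profile if none is named).
+    if cfg.get("role_arn"):
+        return {"role_arn": cfg["role_arn"],
+                "profile": cfg.get("credential_profile_name", "default")}
+    # 3. Finally, fall back to a shared credentials profile.
+    return {"profile": cfg.get("credential_profile_name", "default"),
+            "file": cfg.get("shared_credential_file", "~/.aws/credentials")}
+```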
+ +## How do I deploy this integration? + +1. In Kibana, go to **Management > Integrations**. +2. In the "Search for integrations" search bar, type **CrowdStrike**. +3. Click the **CrowdStrike** integration from the search results. +4. Click the **Add CrowdStrike** button to add the integration. +5. Configure the integration. +6. Click **Save and Continue** to save the integration. + +### Agentless enabled integration + +Agentless integrations allow you to collect data without having to manage Elastic Agent in your cloud. They make manual agent deployment unnecessary, so you can focus on your data instead of the agent that collects it. For more information, refer to [Agentless integrations](https://www.elastic.co/guide/en/serverless/current/security-agentless-integrations.html) and the [Agentless integrations FAQ](https://www.elastic.co/guide/en/serverless/current/agentless-integration-troubleshooting.html). + +Agentless deployments are only supported in Elastic Serverless and Elastic Cloud environments. This functionality is in beta and is subject to change. Beta features are not subject to the support SLA of official GA features. + +### Agent based installation + +Elastic Agent must be installed. For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). +You can install only one Elastic Agent per host. +Elastic Agent is required to stream data from the Event Streams or REST API and ship the data to Elastic, where the events will then be processed via the integration's ingest pipelines. + +## Troubleshooting + +### Vulnerability API returns 404 Not found + +This error may occur for the following reasons: +1. Too many records in the response. +2. The pagination token has expired. Tokens expire 120 seconds after a call is made. + +To resolve this, adjust the `Batch Size` setting in the integration to reduce the number of records returned per pagination call. + +### Duplicate Events + +The option `Enable Data Deduplication` allows you to avoid consuming duplicate events. By default, this option is set to `false`, and so duplicate events may be ingested. When this option is enabled, a [fingerprint processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html) is used to calculate a hash from a set of Crowdstrike fields that uniquely identify the event. The hash is assigned to the Elasticsearch [`_id`](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-id-field.html) field that makes the document unique, thus avoiding duplicates. + +If duplicate events are ingested, to help find them, the integration `event.id` field is populated by concatenating a few Crowdstrike fields that uniquely identify the event. These fields are `id`, `aid`, and `cid` from the Crowdstrike event. The fields are separated with a pipe `|`. +For example, if your Crowdstrike event contains `id: 123`, `aid: 456`, and `cid: 789`, then the `event.id` would be `123|456|789`. + +### Alert severity mapping + +The values used in `event.severity` are consistent with Elastic Detection Rules. 
+ +| Severity Name | `event.severity` | +|----------------------------|:----------------:| +| Low, Info or Informational | 21 | +| Medium | 47 | +| High | 73 | +| Critical | 99 | + +If the severity name is not available from the original document, it is determined from the numeric severity value according to the following table. + +| Crowdstrike Severity | Severity Name | +|------------------------|:-------------:| +| 0 - 19 | info | +| 20 - 39 | low | +| 40 - 59 | medium | +| 60 - 79 | high | +| 80 - 100 | critical | ## Logs ### Alert -This is the `Alert` dataset. +This is the `alert` dataset. #### Example @@ -754,32 +987,9 @@ An example event for `alert` looks as following: ### Falcon -Contains endpoint data and CrowdStrike Falcon platform audit data forwarded from Falcon SIEM Connector. +This is the `falcon` dataset. -#### Falcon SIEM Connector configuration file - -By default, the configuration file located at `/opt/crowdstrike/etc/cs.falconhoseclient.cfg` provides configuration options related to the events collected by Falcon SIEM Connector. - -Parts of the configuration file called `EventTypeCollection` and `EventSubTypeCollection` provides a list of event types that the connector should collect. - -Current supported event types are: -- DataProtectionDetectionSummaryEvent -- DetectionSummaryEvent -- EppDetectionSummaryEvent -- IncidentSummaryEvent -- UserActivityAuditEvent -- AuthActivityAuditEvent -- FirewallMatchEvent -- RemoteResponseSessionStartEvent -- RemoteResponseSessionEndEvent -- CSPM Streaming events -- CSPM Search events -- IDP Incidents -- IDP Summary events -- Mobile Detection events -- Recon Notification events -- XDR Detection events -- Scheduled Report Notification events +#### Example **Exported fields** @@ -1263,167 +1473,7 @@ An example event for `falcon` looks as following: ### FDR -The CrowdStrike Falcon Data Replicator (FDR) allows CrowdStrike users to replicate FDR data from CrowdStrike -managed S3 buckets. CrowdStrike writes notification events to a CrowdStrike managed SQS queue when new data is -available in S3. - -This integration can be used in two ways. It can consume SQS notifications directly from the CrowdStrike managed -SQS queue or it can be used in conjunction with the FDR tool that replicates the data to a self-managed S3 bucket -and the integration can read from there. - -In both cases SQS messages are deleted after they are processed. This allows you to operate more than one Elastic -Agent with this integration if needed and not have duplicate events, but it means you cannot ingest the data a second time. - -#### Use with CrowdStrike managed S3/SQS - -This is the simplest way to setup the integration, and also the default. - -You need to set the integration up with the SQS queue URL provided by Crowdstrike FDR. -Ensure the `Is FDR queue` option is enabled. - -#### Use with FDR tool and data replicated to a self-managed S3 bucket - -This option can be used if you want to archive the raw CrowdStrike data. - -You need to follow the steps below: - -- Create a S3 bucket to receive the logs. -- Create a SQS queue. -- Configure your S3 bucket to send object created notifications to your SQS queue. -- Follow the [FDR tool](https://github.com/CrowdStrike/FDR) instructions to replicate data to your own S3 bucket. -- Configure the integration to read from your self-managed SQS topic. -- Disable the `Is FDR queue` option in the integration. 
- -> NOTE: While the FDR tool can replicate the files from S3 to your local file system, this integration cannot read those files because they are gzip compressed, and the log file input does not support reading compressed files. - -#### Configuration for the S3 input - -AWS credentials are required for running this integration if you want to use the S3 input. - -##### Configuration parameters -* `access_key_id`: first part of access key. -* `secret_access_key`: second part of access key. -* `session_token`: required when using temporary security credentials. -* `credential_profile_name`: profile name in shared credentials file. -* `shared_credential_file`: directory of the shared credentials file. -* `endpoint`: URL of the entry point for an AWS web service. -* `role_arn`: AWS IAM Role to assume. - -##### Credential Types -There are three types of AWS credentials can be used: - -- access keys, -- temporary security credentials, and -- IAM role ARN. - -##### Access keys - -`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the two parts of access keys. -They are long-term credentials for an IAM user, or the AWS account root user. -Please see [AWS Access Keys and Secret Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) -for more details. - -##### Temporary security credentials - -Temporary security credentials has a limited lifetime and consists of an -access key ID, a secret access key, and a security token which typically returned -from `GetSessionToken`. - -MFA-enabled IAM users would need to submit an MFA code -while calling `GetSessionToken`. `default_region` identifies the AWS Region -whose servers you want to send your first API request to by default. - -This is typically the Region closest to you, but it can be any Region. Please see -[Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) -for more details. - -`sts get-session-token` AWS CLI can be used to generate temporary credentials. -For example. with MFA-enabled: -```js -aws> sts get-session-token --serial-number arn:aws:iam::1234:mfa/your-email@example.com --duration-seconds 129600 --token-code 123456 -``` - -Because temporary security credentials are short term, after they expire, the -user needs to generate new ones and manually update the package configuration in -order to continue collecting `aws` metrics. - -This will cause data loss if the configuration is not updated with new credentials before the old ones expire. - -##### IAM role ARN - -An IAM role is an IAM identity that you can create in your account that has -specific permissions that determine what the identity can and cannot do in AWS. - -A role does not have standard long-term credentials such as a password or access -keys associated with it. Instead, when you assume a role, it provides you with -temporary security credentials for your role session. -IAM role Amazon Resource Name (ARN) can be used to specify which AWS IAM role to assume to generate -temporary credentials. - -Please see [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more details. - -##### Supported Formats -1. Use access keys: Access keys include `access_key_id`, `secret_access_key` -and/or `session_token`. -2. Use `role_arn`: `role_arn` is used to specify which AWS IAM role to assume - for generating temporary credentials. - If `role_arn` is given, the package will check if access keys are given. 
- If not, the package will check for credential profile name. - If neither is given, default credential profile will be used. - - Please make sure credentials are given under either a credential profile or - access keys. -3. Use `credential_profile_name` and/or `shared_credential_file`: - If `access_key_id`, `secret_access_key` and `role_arn` are all not given, then - the package will check for `credential_profile_name`. - If you use different credentials for different tools or applications, you can use profiles to - configure multiple access keys in the same configuration file. - If there is no `credential_profile_name` given, the default profile will be used. - `shared_credential_file` is optional to specify the directory of your shared - credentials file. - If it's empty, the default directory will be used. - In Windows, shared credentials file is at `C:\Users\\.aws\credentials`. - For Linux, macOS or Unix, the file locates at `~/.aws/credentials`. - Please see[Create Shared Credentials File](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/create-shared-credentials-file.html) - for more details. - -#### Troubleshooting - -##### Vulnerability API returns 404 Not found - -This error may occur for the following reasons: -1. Too many records in the response. -2. The pagination token has expired. Tokens expire 120 seconds after a call is made. - -To resolve this, adjust the `Batch Size` setting in the integration to reduce the number of records returned per pagination call. - -##### Duplicate Events - -The option `Enable Data Deduplication` allows you to avoid consuming duplicate events. By default, this option is set to `false`, and so duplicate events may be ingested. When this option is enabled, a [fingerprint processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html) is used to calculate a hash from a set of Crowdstrike fields that uniquely identifies the event. The hash is assigned to the Elasticsearch [`_id`](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-id-field.html) field that makes the document unique, thus avoiding duplicates. - -If duplicate events are ingested, to help find them, the integration `event.id` field is populated by concatenating a few Crowdstrike fields that uniquely identifies the event. These fields are `id`, `aid`, and `cid` from the Crowdstrike event. The fields are separated with pipe `|`. -For example, if your Crowdstrike event contains `id: 123`, `aid: 456`, and `cid: 789` then the `event.id` would be `123|456|789`. - -#### Alert severity mapping - -The values used in `event.severity` are consistent with Elastic Detection Rules. - -| Severity Name | `event.severity` | -|----------------------------|:----------------:| -| Low, Info or Informational | 21 | -| Medium | 47 | -| High | 73 | -| Critical | 99 | - -If the severity name is not available from the original document, it is determined from the numeric severity value according to the following table. - -| Crowdstrike `severity` | Severity Name | -|------------------------|:-------------:| -| 0 - 19 | info | -| 20 - 39 | low | -| 40 - 59 | medium | -| 60 - 79 | high | -| 80 - 100 | critical | +This is the `fdr` dataset. #### Example @@ -2682,7 +2732,7 @@ An example event for `fdr` looks as following: ### Host -This is the `Host` dataset. +This is the `host` dataset. #### Example @@ -3025,7 +3075,7 @@ An example event for `host` looks as following: ### Vulnerability -This is the `Vulnerability` dataset. 
+This is the `vulnerability` dataset. #### Example diff --git a/packages/crowdstrike/img/crowdstrike-flowchart.png b/packages/crowdstrike/img/crowdstrike-flowchart.png new file mode 100644 index 00000000000..f98e78a2b8e Binary files /dev/null and b/packages/crowdstrike/img/crowdstrike-flowchart.png differ diff --git a/packages/crowdstrike/manifest.yml b/packages/crowdstrike/manifest.yml index e358b5d5f16..277543fc816 100644 --- a/packages/crowdstrike/manifest.yml +++ b/packages/crowdstrike/manifest.yml @@ -1,7 +1,7 @@ name: crowdstrike title: CrowdStrike -version: "2.8.0" -description: Collect logs from Crowdstrike with Elastic Agent. +version: "2.8.1" +description: Collect logs from CrowdStrike with Elastic Agent. type: integration format_version: "3.4.0" categories: [security, edr_xdr] @@ -49,7 +49,7 @@ screenshots: policy_templates: - name: crowdstrike title: CrowdStrike - description: Collect logs from CrowdStrike Falcon and FDR + description: Collect logs from CrowdStrike Falcon deployment_modes: default: enabled: true @@ -60,14 +60,14 @@ policy_templates: team: security-service-integrations inputs: - type: logfile - title: "Collect CrowdStrike Falcon and FDR logs (input: logfile)" - description: "Collecting logs from CrowdStrike Falcon and FDR (input: logfile)" + title: Collect Falcon events and Falcon Data Replicator logs through the file system + description: Collecting logs from Falcon SIEM Connector and Falcon Data Replicator through the file system. - type: aws-s3 - title: "Collect CrowdStrike Falcon Data Replicator logs (input: aws-s3)" - description: "Collecting logs from CrowdStrike Falcon Data Replicator (input: aws-s3)" + title: Collect Falcon Data Replicator logs using AWS S3 + description: Collecting logs from Falcon Data Replicator using AWS S3. - type: streaming - title: Collect CrowdStrike Falcon Logs via Event Stream - description: Collecting CrowdStrike Falcon Logs via Event Stream. + title: Collect CrowdStrike Falcon logs using the Event Streams API + description: Collecting CrowdStrike Falcon logs using the Event Streams API. vars: - name: proxy_url type: text @@ -84,8 +84,8 @@ policy_templates: show_user: false description: This specifies the headers to be sent to the proxy server. - type: cel - title: Collect CrowdStrike logs via API - description: Collecting CrowdStrike logs via API. + title: Collect CrowdStrike Falcon Alerts, Hosts, and Vulnerabilities + description: Collect CrowdStrike Falcon Alerts, Hosts, and Vulnerabilities. vars: - name: client_id type: text