Fix incident alerts and artifacts not being populated (#33558)
* reproduce test case

* fix

* fix rl

* adding a unit test that fails using the new incident format

* adding the fix

* removing logs

* update

* update

* update

* update

* [Marketplace Contribution] NetskopeV2 - Content Pack Update (#33549)

* [Marketplace Contribution] NetskopeV2 - Content Pack Update (#33527)

* "contribution update to pack 'NetskopeV2'"

* Update 1_0_3.md

* remove empty display

* Remove duplicate API Key parameter in table

---------

Co-authored-by: Randy Baldwin <32545292+randomizerxd@users.noreply.github.com>

* Update Packs/NetskopeV2/ReleaseNotes/1_0_3.md

---------

Co-authored-by: xsoar-bot <67315154+xsoar-bot@users.noreply.github.com>
Co-authored-by: Randy Baldwin <32545292+randomizerxd@users.noreply.github.com>
Co-authored-by: merit-maita <49760643+merit-maita@users.noreply.github.com>

* Ciac 985 qradar (#33239)

* Add ID argument to QRadar_V3 qradar_log_sources_list

* remove redundant parentheses

* Add qradar-event-collectors-list command to QRadar_V3

* Add wincollect-destinations-list command to QRadar_V3

* Add qradar-disconnected-log-collectors-list command to QRadar_V3

* Fix command description on qradar-disconnected-log-collectors-list

* Start building log-source-types command in QRadar_v3

* Build log-source-types-list command on QRadar_v3

* Build log-source-extensions-list command on QRadar_v3

* Build log-source-languages-list command on QRadar_v3

* Build log-source-groups-list command on QRadar_v3

* Remove unnecessary field from log-source-types HR on QRadar_V3

* Add qradar-log-source-protocol-types command to QRadar_V3

* Add qradar-log-source-delete command to QRadar_V3

* Add qradar-log-source-create command to QRadar_V3

* Clean qradar-log-source-create command

* add qradar-log-source-update command to QRadar_v3 and make some bug fixes to old commands

* start writing tests

* checkout

* Address CR

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: Judah Schwartz <JudahSchwartz@users.noreply.github.com>

* Address CR

* Add commands to playbook and fix bugs

* Fix playbook

* Manually merge master tpb

* merge in master

* checkout

* fix pre commit errors

* address pre-commit issues

* address pre-commit issues

* checkout

* checkout

* Bump pack from version QRadar to 2.4.52.

* checkout

* Remove map_raw_to_labels parameter from qradar settings

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* add timeout to qradar events polling

* Bump pack from version QRadar to 2.4.53.

* checkout

* raise qradar timeout

* checkout

* remove timeout parameter from qradar-search-retrieve-events command

* make qradar-log-source-delete not crash when deleting non-existing id

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* address doc review

* address doc review

* restore pre-commit and update command examples

* address lint issues

* address pre-commit errors

* address pre-commit errors

* address Judah's CR

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* Update Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* address doc review

* fix RN

* regenerate docs

* Update Packs/QRadar/ReleaseNotes/2_4_53.md

Co-authored-by: Dan Tavori <38749041+dantavori@users.noreply.github.com>

---------

Co-authored-by: Judah Schwartz <JudahSchwartz@users.noreply.github.com>
Co-authored-by: Content Bot <bot@demisto.com>
Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>
Co-authored-by: Dan Tavori <38749041+dantavori@users.noreply.github.com>

* Update docker pcap (#33450)

* updated docker image

* rn

* Bump pack from version CommonScripts to 1.14.21.

* Bump pack from version CommonScripts to 1.14.22.

---------

Co-authored-by: Content Bot <bot@demisto.com>

* added validations to validation_config file (#33493)

* added validations to validation_config file

* fixes

* test

* changes

* fixes

* remove BA100

* adding back support_multithreading (#33542)

* adding back support_multithreading

* generate container id and add debug logs and RN

* fix UT

* RN

* add DEMISTO_SDK_GRAPH_FORCE_CREATE to validate in bucket upload (#33563)

* add DEMISTO_SDK_GRAPH_FORCE_CREATE to validate in bucket upload

* trigger build

* remove tmp file from repo (#33582)


force merge: accidental file added

* SplunkPy: documentation updates (#33565)

* update doc

* RN

* Apply suggestions from code review

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

---------

Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>

* [pre-commit] - skip some hooks on nightly (#33578)

* [pre-commit] - skip validate-deleted-files in nightly

* Empty-Commit

* init (#33577)

* Scheduled Task Sanitize (#33368)

* XSUP-34767 - add utf8bom to csv header when needed (#33567)

* XSUP-34767 - add utf8bom to csv header when needed

* [MicrosoftGraphIdentityandAccess] update permissions (#33564)

* update scopes

* Revert "update scopes"

This reverts commit b250caf.

* update scopes

* pre commit

* update desc

* Aws ec2 create vpc endpoint (#33517)

* code, readme, tests

* code, readme, tests, rn

* fix

* pre-commit

* fix

* fix

* demo and pre commit

* known words

* CR

* CR

* test fix

* pre commit

* gitlab pre-commit not mandatory (#33594)

force merge: making pre-commit not mandatory

* [MicrosoftCloudAppSecurity] Fix the fetch in XSOAR 8 (#33588)

* [MicrosoftCloudAppSecurity] Fix the fetch in XSOAR 8

* Update Packs/MicrosoftCloudAppSecurity/ReleaseNotes/2_1_58.md

Co-authored-by: Binat Ziser <89336697+bziser@users.noreply.github.com>

---------

Co-authored-by: Binat Ziser <89336697+bziser@users.noreply.github.com>

* [Mail Sender (New)] Fix for EML Files with ASCII Encoding Error (#33417)

* fix

* test PB

* rn

* Update docker

* fix tpb

* Empty-Commit

* fix tpb

* pre-commit path validations (#33589)

* add validate-content-paths hook

* fix name

* no need for nightly

* remove test file

* use three-dot-diff (#33599)

* removed DO105 (#33605)

* RedCanary: fix detection without relationship (#33593)

* fix wrong code

* fix test name

* fix pre commit

* Update README.md (#33543) (#33574)

Added note indicating why integration doesn't support REST API token.

Co-authored-by: gbouzar <113393855+gbouzar@users.noreply.github.com>

* poetry files (#33606)

Co-authored-by: Content Bot <bot@demisto.com>

* update

* update

* deleting file

* adjustments

* adjustments

* adding logs

* fix lambda

* removing logs

* removing logs

* fix unit test

* cr fixes

* cr fixes

* mypy fixes

* Update Packs/CortexXDR/ReleaseNotes/6_1_27.md

Co-authored-by: EyalPintzov <91007713+eyalpalo@users.noreply.github.com>

* Update Packs/CortexXDR/ReleaseNotes/6_1_27.md

Co-authored-by: EyalPintzov <91007713+eyalpalo@users.noreply.github.com>

* adding unit test

* unit test that reproduces XSUP-35253

* fixing bug

* Update CortexXDRIR_test.py

removing last unit test

* fixing bug

* fixing bug

* pre commit

---------

Co-authored-by: content-bot <55035720+content-bot@users.noreply.github.com>
Co-authored-by: xsoar-bot <67315154+xsoar-bot@users.noreply.github.com>
Co-authored-by: Randy Baldwin <32545292+randomizerxd@users.noreply.github.com>
Co-authored-by: merit-maita <49760643+merit-maita@users.noreply.github.com>
Co-authored-by: Tal Zichlinsky <35036457+talzich@users.noreply.github.com>
Co-authored-by: Judah Schwartz <JudahSchwartz@users.noreply.github.com>
Co-authored-by: Content Bot <bot@demisto.com>
Co-authored-by: ShirleyDenkberg <62508050+ShirleyDenkberg@users.noreply.github.com>
Co-authored-by: Dan Tavori <38749041+dantavori@users.noreply.github.com>
Co-authored-by: Yuval Cohen <86777474+yucohen@users.noreply.github.com>
Co-authored-by: Yuval Hayun <70104171+YuvHayun@users.noreply.github.com>
Co-authored-by: JudithB <132264628+jbabazadeh@users.noreply.github.com>
Co-authored-by: ilaner <88267954+ilaner@users.noreply.github.com>
Co-authored-by: Israel Lappe <79846863+ilappe@users.noreply.github.com>
Co-authored-by: Guy Afik <53861351+GuyAfik@users.noreply.github.com>
Co-authored-by: Jacob Levy <129657918+jlevypaloalto@users.noreply.github.com>
Co-authored-by: tkatzir <tkatzir@paloaltonetworks.com>
Co-authored-by: David Binyamin <47333909+davidbinyamin@users.noreply.github.com>
Co-authored-by: michal-dagan <109464765+michal-dagan@users.noreply.github.com>
Co-authored-by: EyalPintzov <91007713+eyalpalo@users.noreply.github.com>
Co-authored-by: Menachem Weinfeld <90556466+mmhw@users.noreply.github.com>
Co-authored-by: Binat Ziser <89336697+bziser@users.noreply.github.com>
Co-authored-by: Shmuel Kroizer <69422117+shmuel44@users.noreply.github.com>
Co-authored-by: dorschw <81086590+dorschw@users.noreply.github.com>
Co-authored-by: gbouzar <113393855+gbouzar@users.noreply.github.com>
1 parent 9656b61 commit db16ec4
Showing 6 changed files with 99 additions and 121 deletions.
99 changes: 58 additions & 41 deletions Packs/CortexXDR/Integrations/CortexXDRIR/CortexXDRIR.py
@@ -17,7 +17,7 @@
INTEGRATION_CONTEXT_BRAND = 'PaloAltoNetworksXDR'
XDR_INCIDENT_TYPE_NAME = 'Cortex XDR Incident Schema'
INTEGRATION_NAME = 'Cortex XDR - IR'
ALERTS_LIMIT_PER_INCIDENTS = -1
ALERTS_LIMIT_PER_INCIDENTS: int = -1
FIELDS_TO_EXCLUDE = [
'network_artifacts',
'file_artifacts'
@@ -522,6 +522,35 @@ def get_last_mirrored_in_time(args):
return last_mirrored_in_timestamp


def sort_incident_data(raw_incident):
"""
Sorts and processes the raw incident data into a cleaned incident dict.
Parameters:
- raw_incident (dict): The raw incident data as provided by the API.
Returns:
- dict: A dictionary containing the processed incident data with:
- organized alerts.
- file artifacts.
- network artifacts.
"""
incident = raw_incident.get('incident', {})
raw_alerts = raw_incident.get('alerts', {}).get('data', None)
file_artifacts = raw_incident.get('file_artifacts', {}).get('data')
network_artifacts = raw_incident.get('network_artifacts', {}).get('data')
context_alerts = clear_trailing_whitespace(raw_alerts)
if context_alerts:
for alert in context_alerts:
alert['host_ip_list'] = alert.get('host_ip').split(',') if alert.get('host_ip') else []
incident.update({
'alerts': context_alerts,
'file_artifacts': file_artifacts,
'network_artifacts': network_artifacts
})
return incident


def get_incident_extra_data_command(client, args):
global ALERTS_LIMIT_PER_INCIDENTS
incident_id = args.get('incident_id')
@@ -542,29 +571,20 @@ def get_incident_extra_data_command(client, args):
if isinstance(raw_incident, list):
raw_incident = raw_incident[0]
if raw_incident.get('incident', {}).get('alert_count') > ALERTS_LIMIT_PER_INCIDENTS:
demisto.debug(f'for incident:{incident_id} using the old call since "\
"alert_count:{raw_incident.get("incident", {}).get("alert_count")} >" \
"limit:{ALERTS_LIMIT_PER_INCIDENTS}')
raw_incident = client.get_incident_extra_data(incident_id, alerts_limit)
incident = raw_incident.get('incident', {})
incident_id = incident.get('incident_id')
raw_alerts = raw_incident.get('alerts', {}).get('data', None)
readable_output = [tableToMarkdown(f'Incident {incident_id}', incident, removeNull=True)]
file_artifacts = raw_incident.get('file_artifacts', {}).get('data')
network_artifacts = raw_incident.get('network_artifacts', {}).get('data')
context_alerts = clear_trailing_whitespace(raw_alerts)
if context_alerts:
for alert in context_alerts:
alert['host_ip_list'] = alert.get('host_ip').split(',') if alert.get('host_ip') else []
if len(context_alerts) > 0:
readable_output.append(tableToMarkdown('Alerts', context_alerts,
headers=[key for key in context_alerts[0]
if key != 'host_ip'], removeNull=True))
readable_output.append(tableToMarkdown('Network Artifacts', network_artifacts, removeNull=True))
readable_output.append(tableToMarkdown('File Artifacts', file_artifacts, removeNull=True))
demisto.debug(f"in get_incident_extra_data_command {incident_id=} {raw_incident=}")
readable_output = [tableToMarkdown(f'Incident {incident_id}', raw_incident.get('incident'), removeNull=True)]
incident = sort_incident_data(raw_incident)
if incident_alerts := incident.get('alerts'):
readable_output.append(tableToMarkdown('Alerts', incident_alerts,
headers=[key for key in incident_alerts[0]
if key != 'host_ip'], removeNull=True))
readable_output.append(tableToMarkdown('Network Artifacts', incident.get('network_artifacts'), removeNull=True))
readable_output.append(tableToMarkdown('File Artifacts', incident.get('file_artifacts'), removeNull=True))

incident.update({
'alerts': context_alerts,
'file_artifacts': file_artifacts,
'network_artifacts': network_artifacts
})
account_context_output = assign_params(
Username=incident.get('users', '')
)
@@ -578,7 +598,6 @@ def get_incident_extra_data_command(client, args):
alert_context['ID'] = endpoint_id
if alert_context:
endpoint_context_output.append(alert_context)

context_output = {f'{INTEGRATION_CONTEXT_BRAND}.Incident(val.incident_id==obj.incident_id)': incident}
if account_context_output:
context_output['Account(val.Username==obj.Username)'] = account_context_output
@@ -695,7 +714,6 @@ def sort_all_list_incident_fields(incident_data):

if incident_data.get('incident_sources', []):
incident_data['incident_sources'] = sorted(incident_data.get('incident_sources', []))

format_sublists = not argToBoolean(demisto.params().get('dont_format_sublists', False))
if incident_data.get('alerts', []):
incident_data['alerts'] = sort_by_key(incident_data.get('alerts', []), main_key='alert_id', fallback_key='name')
@@ -972,17 +990,19 @@ def fetch_incidents(client, first_fetch_time, integration_instance, last_run: di
incidents = []
if incidents_from_previous_run:
raw_incidents = incidents_from_previous_run
ALERTS_LIMIT_PER_INCIDENTS = last_run.get('alerts_limit_per_incident', -1) if isinstance(last_run, dict) else -1
else:
if statuses:
raw_incidents = []
for status in statuses:
raw_incidents.append(client.get_multiple_incidents_extra_data(
gte_creation_time_milliseconds=last_fetch,
status=status,
limit=max_fetch, starred=starred,
starred_incidents_fetch_window=starred_incidents_fetch_window,
fields_to_exclude=fields_to_exclude))
raw_incidents = sorted(raw_incidents, key=lambda inc: inc['incident']['creation_time'])
raw_incident_status = client.get_multiple_incidents_extra_data(
gte_creation_time_milliseconds=last_fetch,
status=status,
limit=max_fetch, starred=starred,
starred_incidents_fetch_window=starred_incidents_fetch_window,
fields_to_exclude=fields_to_exclude)
raw_incidents.extend(raw_incident_status)
raw_incidents = sorted(raw_incidents, key=lambda inc: inc.get('incident', {}).get('creation_time'))
else:
raw_incidents = client.get_multiple_incidents_extra_data(
gte_creation_time_milliseconds=last_fetch, limit=max_fetch,
@@ -1000,16 +1020,15 @@ def fetch_incidents(client, first_fetch_time, integration_instance, last_run: di
count_incidents = 0

for raw_incident in raw_incidents:
incident_data: dict[str, Any] = raw_incident.get('incident') or raw_incident
incident_data: dict[str, Any] = sort_incident_data(raw_incident) if raw_incident.get('incident') else raw_incident
incident_id = incident_data.get('incident_id')
alert_count = arg_to_number(incident_data.get('alert_count')) or 0
if alert_count > ALERTS_LIMIT_PER_INCIDENTS:
incident_data = client.get_incident_extra_data(client, {"incident_id": incident_id,
"alerts_limit": 1000})[0].get('incident')\
or {}

demisto.debug(f'for incident:{incident_id} using the old call since alert_count:{alert_count} > '
f'limit:{ALERTS_LIMIT_PER_INCIDENTS}')
raw_incident_ = client.get_incident_extra_data(incident_id=incident_id)
incident_data = sort_incident_data(raw_incident_)
sort_all_list_incident_fields(incident_data)

incident_data['mirror_direction'] = MIRROR_DIRECTION.get(demisto.params().get('mirror_direction', 'None'),
None)
incident_data['mirror_instance'] = integration_instance
Expand All @@ -1021,14 +1040,11 @@ def fetch_incidents(client, first_fetch_time, integration_instance, last_run: di
'occurred': occurred,
'rawJSON': json.dumps(incident_data),
}

if demisto.params().get('sync_owners') and incident_data.get('assigned_user_mail'):
incident['owner'] = demisto.findUser(email=incident_data.get('assigned_user_mail')).get('username')

# Update last run and add incident if the incident is newer than last fetch
if incident_data['creation_time'] > last_fetch:
if incident_data.get('creation_time', 0) > last_fetch:
last_fetch = incident_data['creation_time']

incidents.append(incident)
non_created_incidents.remove(raw_incident)

@@ -1045,6 +1061,7 @@ def fetch_incidents(client, first_fetch_time, integration_instance, last_run: di

if non_created_incidents:
next_run['incidents_from_previous_run'] = non_created_incidents
next_run['alerts_limit_per_incident'] = ALERTS_LIMIT_PER_INCIDENTS # type: ignore[assignment]
else:
next_run['incidents_from_previous_run'] = []

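For context, a minimal usage sketch of the new `sort_incident_data` helper added above. The payload is hypothetical (shaped like one entry of a `get_multiple_incidents_extra_data` response), and it assumes `clear_trailing_whitespace` returns the alert list essentially unchanged:

```python
from CortexXDRIR import sort_incident_data

# Hypothetical API entry: 'incident' plus sibling alert/artifact envelopes.
raw_incident = {
    'incident': {'incident_id': '2', 'alert_count': 1},
    'alerts': {'total_count': 1, 'data': [{'alert_id': '1', 'host_ip': '1.1.1.1,2.2.2.2'}]},
    'file_artifacts': {'total_count': 1, 'data': [{'type': 'HASH'}]},
    'network_artifacts': {'total_count': 0, 'data': []},
}

incident = sort_incident_data(raw_incident)

# The nested 'data' lists are lifted onto the incident dict itself, and each
# alert gains a host_ip_list split from its comma-separated host_ip field.
assert incident['incident_id'] == '2'
assert incident['alerts'][0]['host_ip_list'] == ['1.1.1.1', '2.2.2.2']
assert incident['file_artifacts'][0]['type'] == 'HASH'
assert incident['network_artifacts'] == []
```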
27 changes: 24 additions & 3 deletions Packs/CortexXDR/Integrations/CortexXDRIR/CortexXDRIR_test.py
@@ -94,8 +94,8 @@ def test_fetch_incidents_filtered_by_status(requests_mock, mocker):
client = Client(
base_url=f'{XDR_URL}/public_api/v1', verify=False, timeout=120, proxy=False)
incident_extra_data_under_investigation = load_test_data('./test_data/get_incident_extra_data_host_id_array.json')\
.get('reply', {}).get('incidents')[0]
incident_extra_data_new = load_test_data('./test_data/get_incident_extra_data_new_status.json').get('reply')
.get('reply', {}).get('incidents')
incident_extra_data_new = load_test_data('./test_data/get_incident_extra_data_new_status.json').get('reply').get('incidents')
mocker.patch.object(Client, 'get_multiple_incidents_extra_data', side_effect=[incident_extra_data_under_investigation,
incident_extra_data_new])
mocker.patch("CortexXDRIR.ALERTS_LIMIT_PER_INCIDENTS", new=50)
Expand Down Expand Up @@ -724,7 +724,7 @@ def test_fetch_incidents_extra_data(requests_mock, mocker):
"""
from CortexXDRIR import fetch_incidents, Client
raw_multiple_extra_data = load_test_data('./test_data/get_multiple_incidents_extra_data.json')
raw_all_alerts_incident_2 = load_test_data('./test_data/get_extra_data_all_alerts.json').get('reply', {}).get('incidents', [])
raw_all_alerts_incident_2 = load_test_data('./test_data/get_extra_data_all_alerts.json').get('reply', {})

client = Client(
base_url=f'{XDR_URL}/public_api/v1', verify=False, timeout=10, proxy=False)
@@ -1267,3 +1267,24 @@ def test_get_incident_extra_data_incident_not_exist(mocker):
with pytest.raises(DemistoException) as e:
_, outputs, raw_incident = get_incident_extra_data_command(client, args)
assert str(e.value) == 'Incident 1 is not found'


def test_sort_all_incident_data_fields_fetch_case_get_multiple_incidents_extra_data_format(mocker):
"""
Given:
- A raw incident in the get_multiple_incidents_extra_data format, where alerts and artifacts
are not nested inside the incident data.
When:
- Running sort_incident_data and then sort_all_list_incident_fields.
Then:
- Verify that alerts and artifacts are populated.
"""
from CortexXDRIR import sort_incident_data, sort_all_list_incident_fields
incident_case_get_multiple_incidents_extra_data = load_test_data('./test_data/get_multiple_incidents_extra_data.json')\
.get('reply').get('incidents')[0]
incident_data = sort_incident_data(incident_case_get_multiple_incidents_extra_data)
sort_all_list_incident_fields(incident_data)
assert incident_data.get('alerts')
assert incident_data.get('incident_sources') == ['XDR Agent']
assert incident_data.get('status') == 'new'
assert len(incident_data.get('file_artifacts')) == 1
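For orientation, the fallback path these tests exercise, condensed from the `fetch_incidents` hunk above (not a verbatim excerpt; the surrounding loop and setup are elided):

```python
# Inside the per-incident fetch loop: if the batched endpoint reports more
# alerts than the server-side per-incident limit, re-fetch that single
# incident via the older endpoint, then normalize it the same way.
alert_count = arg_to_number(incident_data.get('alert_count')) or 0
if alert_count > ALERTS_LIMIT_PER_INCIDENTS:
    raw_incident_ = client.get_incident_extra_data(incident_id=incident_id)
    incident_data = sort_incident_data(raw_incident_)
sort_all_list_incident_fields(incident_data)
```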
@@ -1,10 +1,6 @@
{
"reply": {
"total_count": 2,
"result_count": 2,
"alerts_limit_per_incident": 2,
"incidents": [
{

"incident": {
"incident_id": "2",
"incident_name": null,
@@ -589,76 +585,7 @@
}
]
}
}
],
"alerts": {
"total_count": 1,
"data": [
{
"alert_id": "1",
"detection_timestamp": 1575806904222,
"source": "XDR Agent",
"severity": "medium",
"name": "Local Analysis Malware",
"category": "Malware",
"action": "BLOCKED",
"action_pretty": "Prevented (Blocked)",
"description": "Suspicious executable detected",
"host_ip": "1.1.1.1.",
"host_name": "AAAAAA",
"user_name": "Administrator",
"event_type": "Process Execution",
"actor_process_image_name": "wildfire-test-pe-file.exe",
"actor_process_command_line": "address",
"actor_process_signature_status": "N/A",
"actor_process_signature_vendor": "N/A",
"causality_actor_process_image_name": null,
"causality_actor_process_command_line": null,
"causality_actor_process_signature_status": "N/A",
"causality_actor_process_signature_vendor": "N/A",
"causality_actor_causality_id": null,
"action_process_image_name": null,
"action_process_image_command_line": null,
"action_process_image_sha256": null,
"action_process_signature_status": "N/A",
"action_process_signature_vendor": "N/A",
"action_file_path": null,
"action_file_md5": null,
"action_file_sha256": null,
"action_registry_data": null,
"action_registry_full_key": null,
"action_local_ip": null,
"action_local_port": null,
"action_remote_ip": null,
"action_remote_port": null,
"action_external_hostname": null,
"fw_app_id": null,
"is_whitelisted": "No",
"starred": false
}
]
}
},


"network_artifacts": {
"total_count": 0,
"data": []
},
"file_artifacts": {
"total_count": 1,
"data": [
{
"type": "HASH",
"alert_count": 1,
"is_manual": false,
"is_malicious": false,
"is_process": true,
"file_name": "file_name_2.exe",
"file_sha256": "file_sha256_2",
"file_signature_status": "SIGNATURE_UNAVAILABLE",
"file_signature_vendor_name": null,
"file_wildfire_verdict": "UNKNOWN"
}
]
}
}
}
@@ -1,5 +1,8 @@
{
"reply": {
"incidents":
[
{
"incident": {
"incident_id": "2",
"creation_time": 1575806909185,
Expand Down Expand Up @@ -92,4 +95,6 @@
]
}
}
]
}
}
8 changes: 8 additions & 0 deletions Packs/CortexXDR/ReleaseNotes/6_1_27.md
@@ -0,0 +1,8 @@

#### Integrations

##### Palo Alto Networks Cortex XDR - Investigation and Response

- Fixed an issue where ***alerts***, ***network_artifacts***, and ***file_artifacts*** were not being populated on new incidents when bringing them into XSOAR.
- Fixed an issue where the alert count was reset after a docker timeout.
- Fixed a bug where the response from the API was not parsed correctly.
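The second note corresponds to the new `last_run` round-trip in `fetch_incidents` (see the diff above); in outline, with surrounding code elided:

```python
# Interrupted run (e.g., docker timeout): persist the server-provided limit
# together with the leftover incidents...
next_run['incidents_from_previous_run'] = non_created_incidents
next_run['alerts_limit_per_incident'] = ALERTS_LIMIT_PER_INCIDENTS

# ...so the next run restores it instead of resetting to the -1 default.
ALERTS_LIMIT_PER_INCIDENTS = last_run.get('alerts_limit_per_incident', -1) if isinstance(last_run, dict) else -1
```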
2 changes: 1 addition & 1 deletion Packs/CortexXDR/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Cortex XDR by Palo Alto Networks",
"description": "Automates Cortex XDR incident response, and includes custom Cortex XDR incident views and layouts to aid analyst investigations.",
"support": "xsoar",
"currentVersion": "6.1.26",
"currentVersion": "6.1.27",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
