Integration Name
Check Point [checkpoint]
Dataset Name
checkpoint.firewall
Integration Version
1.42.0
Agent Version
syslog
Agent Output Type
logstash
Elasticsearch Version
8.18.7
OS Version and Architecture
Linux, Kubernetes
Software/API Version
No response
Error Message
The ingestion pipeline is failing at this stage:
{
"convert": {
"ignore_missing": false,
"field": "checkpoint.packets",
"type": "long"
}
}
The timestamp processor that follows this stage is therefore never executed, and we see this error in our Logstash pipeline:
:response=>{"create"=>{"status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:4086] failed to parse: data stream timestamp field [@timestamp] is missing", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"data stream timestamp field [@timestamp] is missing"}}}}}
Event Original
The following is an anonymized sample log:
{
"original": "<134>1 2025-03-31T09:45:12Z FW-RANDOM CheckPoint 98765 - [action:"Drop"; flags:"409600"; ifdir:"inbound"; logid:"0"; loguid:"{0xa1b2c3d4,0xe5f67890,0x12345678,0xabcdef12}"; origin:"192.168.45.23"; originsicname:"CN=FWSC-RANDOM,O=TMUK_SDMZ_RANDOM.oam.example.com.abcd12"; sequencenum:"112"; time:"1743414312"; version:"5"; __policy_id_tag:"product=VPN-1 & FireWall-1[db_tag={F1234567-89AB-CDEF-0123-456789ABCDEF};mgmt=TMUK_SDMZ_RANDOM;date=1742477474;policy_name=Standard\]"; drop_reason:"matched optimized drop"; layer_name:"Network"; layer_uuid:"123e4567-e89b-12d3-a456-426614174000"; match_id:"8"; parent_rule:"0"; rule_action:"Drop"; rule_uid:"abcdef12-3456-7890-abcd-ef1234567890"; packet_amount:"5"; packets:" <203.0.113.45,54321,198.51.100.23,12345,6;bond1.3> <203.0.113.46,23456,198.51.100.24,34567,6;bond1.3> <203.0.113.47,45678,198.51.100.25,56789,6;bond1.3> <203.0.113.48,67890,198.51.100.26,78901,6;bond1.3> <203.0.113.49,89012,198.51.100.27,90123,6;bond1.3>"; product:"VPN-1 & FireWall-1"; proto:"6"]"
}
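For context, here is an illustrative sketch (not taken from the integration source) of why the convert stage throws: in the sample above, `packets` carries a string of connection tuples, while the numeric count actually lives in `packet_amount`, so converting `checkpoint.packets` to `long` can never succeed.

```python
# Hypothetical sketch: mimic the ingest "convert" processor with type long
# on the two fields from the anonymized sample log.
packets_value = '<203.0.113.45,54321,198.51.100.23,12345,6;bond1.3>'  # tuple string
packet_amount = "5"  # numeric count as a string

def convert_to_long(value: str) -> int:
    """Rough stand-in for the convert processor's long conversion."""
    return int(value)

print(convert_to_long(packet_amount))  # succeeds
try:
    convert_to_long(packets_value)     # always fails: not a number
except ValueError:
    print("convert fails; pipeline aborts before @timestamp is set")
```

Because the failure aborts the pipeline, `@timestamp` is never populated, which matches the `data stream timestamp field [@timestamp] is missing` rejection above.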
What did you do?
Data flow: source -> syslog agent -> Logstash -> Kafka -> Logstash -> Elasticsearch
As a workaround, we enabled "ignore failure" on that particular processor.
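Concretely, the workaround is a sketch like the following (the processor definition from the error message above, with `ignore_failure` added; this is our local patch, not the integration's shipped pipeline):

```json
{
  "convert": {
    "field": "checkpoint.packets",
    "type": "long",
    "ignore_missing": false,
    "ignore_failure": true
  }
}
```

With `ignore_failure` set, the conversion error no longer aborts the pipeline, so the date processor still runs and `@timestamp` is populated. A cleaner fix might be for the integration to parse the tuple list rather than convert it to `long`, but that is a suggestion, not something we have verified.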
What did you see?
Refer to the screenshot.
What did you expect to see?
No error.message on the ingested documents.
Anything else?
No response