
[Juniper SRX] Support SRX system logs pattern #6183

Merged: 21 commits into elastic:main on Jun 21, 2023

Conversation

@kcreddy (Contributor) commented May 12, 2023

What does this PR do?

The current Juniper SRX package doesn't support system logs. This PR adds support for SRX system logs. Samples were requested from a customer and anonymized.

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.

How to test this PR locally

Related issues

Screenshots

@kcreddy self-assigned this May 12, 2023
@elasticmachine commented May 12, 2023

💚 Build Succeeded


Build stats

  • Start Time: 2023-06-20T16:20:57.735+0000

  • Duration: 16 min 8 sec

Test stats 🧪

Test Results
  Failed:   0
  Passed:  14
  Skipped:  0
  Total:   14

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@elasticmachine commented May 12, 2023

🌐 Coverage report

Name          Metrics % (covered/total)  Diff
Packages      100.0%  (1/1)              💚
Files         100.0%  (8/8)              💚 13.636
Classes       100.0%  (8/8)              💚 13.636
Methods       100.0%  (59/59)            💚 16.667
Lines         65.674% (1831/2788)        👎 -19.261
Conditionals  100.0%  (0/0)              💚

@kcreddy changed the title from "[Juniper SRX] Add System logs pattern" to "[Juniper SRX] Support SRX system logs pattern" May 12, 2023
@kcreddy marked this pull request as ready for review May 17, 2023 07:40
@kcreddy requested a review from a team as a code owner May 17, 2023 07:40
@elasticmachine commented

Pinging @elastic/security-external-integrations (Team:Security-External Integrations)

Comment on lines 350 to 351
- _temp_.to_be_parsed
- _temp_
Member

You would only need _temp_ here, as it removes the whole object
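
For reference, a minimal sketch of the suggested change, assuming these lines are the field list of an ingest pipeline remove processor (removing the parent _temp_ object also drops nested keys such as _temp_.to_be_parsed):

- remove:
    field: _temp_
    ignore_missing: true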

Contributor Author

Updated as suggested

  type: keyword
- name: argument1
  type: keyword
- name: index1
Member

There are quite a few new fields here, many of which seem related to ECS. If these fields are converted or moved to ECS, they should not require their own mappings. Do we know if any of these might be obsolete?

Contributor Author

Removed a few fields that could map to ECS and added processors to extract them.
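
As an illustration only (not the exact processors in this PR, and the field name here is hypothetical), moving a vendor-specific field onto its ECS equivalent with a rename processor avoids the need for a custom mapping:

# Hypothetical example: relocate a custom field to ECS instead of mapping it.
- rename:
    field: juniper.srx.source_address
    target_field: source.ip
    ignore_missing: true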

- juniper.srx.firewall
ignore_missing: true

on_failure:
Member

Since we are tagging so many of the processors, let's ensure that we also set the proper event.kind. Also, since we append to error.message, this needs to be an append instead of a set.

Could you add these lines both to this ingest pipeline and at the end of the others?

on_failure:
- append:
    field: error.message
    value: |-
        Processor "{{ _ingest.on_failure_processor_type }}" with tag "{{ _ingest.on_failure_processor_tag }}" in pipeline "{{ _ingest.on_failure_pipeline }}" failed with message "{{ _ingest.on_failure_message }}"
- set:
    field: event.kind
    value: pipeline_error

Contributor Author

Updated as suggested

@P1llus (Member) commented May 17, 2023

After talking a bit more with Krishna, there are a few concerns with the testdata:

  1. It does not seem they all follow the same format. The last two lines in the test data use the structured-data configuration, which is what we prefer (see the examples after this list). We need to confirm whether the other messages are also in this format.
  2. If it turns out the sample data is in multiple different formats, we should restrict support to structured-data only and update the provided samples accordingly. The log format outside structured-data is too inconsistent: we would need to constantly update the grok patterns for every combination of log messages, which is far wider than the current samples and would not be maintainable over time.
  3. If the supplied samples are already in structured-data format, or if it is not possible to configure the device this way for some reason (which I think is possible), then at worst the logic should be moved into the system pipeline itself, so that all the other data does not have to pass through the same heavy grok or dissect processors in the future.
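
To illustrate the distinction, two hypothetical, abbreviated samples (not taken from the customer data): with structured-data logging enabled, a Junos message carries self-describing key="value" pairs in an RFC 5424 SD-element, while the traditional format is free-form text that needs a dedicated grok/dissect pattern per message type:

<14>1 2023-05-12T10:15:30.000Z srx-fw RT_FLOW - RT_FLOW_SESSION_CREATE [junos@2636.1.1.1.2.129 source-address="10.0.0.1" destination-address="10.0.0.2"]

May 12 10:15:30 srx-fw RT_FLOW: RT_FLOW_SESSION_CREATE: session created 10.0.0.1/1234->10.0.0.2/443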

@kcreddy (Contributor Author) commented May 18, 2023

Moved the logic to the system pipeline and added dissect patterns for a few sample messages.
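
A minimal sketch of what one such dissect processor could look like, assuming a hypothetical message shape (the actual patterns in the PR differ):

# Hypothetical pattern for a message like:
#   "sshd[1234]: Accepted password for admin from 10.0.0.5"
- dissect:
    field: message
    pattern: '%{process.name}[%{process.pid}]: %{_temp_.event_text}'
    ignore_missing: true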

@andrewkroh requested a review from a team May 18, 2023 19:28
@elastic deleted a comment from cai-elastic May 22, 2023
@P1llus (Member) commented May 22, 2023

The PR itself looks good, but there are still some concerns around the data format inconsistencies, and unless we manage to resolve all of those questions we should hold off on merging.

Discussions on Slack led us to plan a custom build of the integration that certain key users can test with. Together with finalizing the open questions around the format, that should result in a new data stream being merged, so let's wait a bit longer.

@narph (Contributor) commented May 25, 2023

@kcreddy, @P1llus should we close this PR for now?


@P1llus (Member) commented May 25, 2023

> @kcreddy, @P1llus should we close this PR for now?

We are just waiting with the merge until the tests have been performed. We can either close it and create a new one later, or leave it open; either works for me!

@kcreddy (Contributor Author) commented May 29, 2023

> We are just waiting with the merge until the tests have been performed. We can either close it and create a new one later, or leave it open; either works for me!

Added new samples to the tests. No grok failures, but most of the logs need new grok/dissect patterns to extract their fields.

@kcreddy merged commit 5248499 into elastic:main on Jun 21, 2023
4 checks passed
@elasticmachine commented

Package juniper_srx - 1.13.0 containing this change is available at https://epr.elastic.co/search?package=juniper_srx

Successfully merging this pull request may close these issues.

[Juniper SRX] Support additional message patterns