
[tenable_io] Initial Release for the Tenable.io #4816

Merged (4 commits, Jan 27, 2023)

Conversation

vinit-chauhan
Contributor

What does this PR do?

  • Generated the skeleton of the Tenable.io integration package.
  • Added data streams.
  • Added data collection logic for all the data streams.
  • Added the ingest pipeline for all the data streams.
  • Mapped fields according to the ECS schema and added Fields metadata in the appropriate yml files.
  • Added dashboards and visualizations.
  • Added test for pipeline for all the data streams.
  • Added system test cases for all the data streams.

Integration release checklist

This checklist is intended for integrations maintainers to ensure consistency
when creating or updating a Package, Module or Dataset for an Integration.

All changes

  • Change follows the contributing guidelines
  • Supported versions of the monitoring target are documented
  • Supported operating systems are documented (if applicable)
  • Integration or System tests exist
  • Documentation exists
  • Fields follow ECS and naming conventions
  • At least a manual test with ES / Kibana / Agent has been performed.
  • Required Kibana version set to: ^8.6.0

New Package

  • Screenshot of the "Add Integration" page on Fleet added

Dashboards changes

  • Dashboards exist
  • Screenshots added or updated
  • Datastream filters added to visualizations

Log dataset changes

  • Pipeline tests exist (if applicable)
  • Generated output for at least 1 log file exists
  • Sample event (sample_event.json) exists

How to test this PR locally

  • Clone integrations repo.
  • Install elastic-package locally.
  • Start elastic stack using elastic-package.
  • Move to integrations/packages/tenable_io directory.
  • Run the following command to run tests.

elastic-package test
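The steps above can be sketched as a shell session. This is a sketch, not part of the PR: the repository URL and install method are assumptions based on the standard elastic-package workflow, and the commands require Go, Docker, and network access.

```shell
# Clone the integrations repo and enter the package directory.
git clone https://github.com/elastic/integrations.git
cd integrations/packages/tenable_io

# Install the elastic-package tool (assumes Go is installed).
go install github.com/elastic/elastic-package@latest

# Start a local Elastic stack, then run the package's tests.
elastic-package stack up -d
elastic-package test
```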

Related issues

Screenshots

(6 screenshots attached)

@elasticmachine

Pinging @elastic/security-external-integrations (Team:Security-External Integrations)

@elasticmachine

elasticmachine commented Dec 13, 2022

💚 Build Succeeded



Build stats

  • Start Time: 2023-01-03T12:06:05.588+0000

  • Duration: 14 min 20 sec

Test stats 🧪

Test Results
  • Failed: 0
  • Passed: 18
  • Skipped: 0
  • Total: 18

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@elasticmachine

elasticmachine commented Dec 13, 2022

🌐 Coverage report

Name          Metrics % (covered/total)   Diff
Packages      100.0%  (3/3)               💚
Files         100.0%  (3/3)               💚
Classes       100.0%  (3/3)               💚
Methods       95.455% (42/44)             👎 -1.212
Lines         91.798% (2104/2292)         👎 -8.202
Conditionals  100.0%  (0/0)               💚

@jamiehynds

@vinit-elastic can we ensure the integration ships as Beta, not Tech Preview (as per the screenshots)?

@vinit-chauhan
Contributor Author

Hey @jamiehynds, we took the screenshot of the Tenable.io integration on stack version 8.6.0-SNAPSHOT, which is why the Technical Preview tag appears. We have also checked with the latest released stack version, 8.5.3, where we do get the Beta tag. Sharing the screenshot here.

(screenshot from 8.5.3 attached)

return false;
}
dropEmptyFields(ctx);
- append:
Member


This one is a nice usage of the new kind type 👍

- append:
    field: error.message
    value: '{{{ _ingest.on_failure_message }}}'
- append:
Member


Do we need it in multiple places?

Contributor Author


We have added on_failure handlers at the potential failure points in the pipeline, such as the convert and date processors.

Contributor


@vinit-elastic what Marius meant was event.kind was already set to pipeline_error earlier in the pipeline.
Do we need to again append inside on_failure?

Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, we have handled the scenario where the pipeline fails due to an unexpected situation in processors (rename, script, etc.). In that case the earlier pipeline_error processor is skipped, control passes to the root-level on_failure, and the pipeline error is appended as expected.

Contributor


Makes sense! But how about the other way around? I mean is error.message populated even without any failures in the pipeline? If not, then we could remove the earlier processor and just let the on_failure set event.kind to pipeline_error.

Contributor Author

vinit-chauhan commented Jan 4, 2023


The answer to your question is no; we do not set error.message in any case other than a pipeline error in this integration.
However, we have added on_failure handlers to all the convert processors, and a failure there is caught by the processor-level on_failure, not by the one on the root. Moreover, we do not want to add unnecessary processing by setting event.kind in every on_failure. Hence, we added one processor at the bottom that checks whether any error messages are present and, if so, appends pipeline_error to event.kind, since the root on_failure won't run when errors are caught by the on_failure handlers attached to the processors.
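The pattern described here can be sketched as follows. This is an illustrative sketch, not the integration's actual pipeline: the convert processor's field name is hypothetical; only error.message, event.kind, and the on_failure structure come from the discussion.

```yaml
processors:
  # Processor-level handler: a convert failure is caught here,
  # so the root-level on_failure below will NOT run for it.
  - convert:
      field: tenable_io.asset.some_field   # hypothetical field
      type: long
      ignore_missing: true
      on_failure:
        - append:
            field: error.message
            value: '{{{ _ingest.on_failure_message }}}'
  # Final check at the bottom of the pipeline: if any handler above
  # recorded an error, mark the event as a pipeline error.
  - append:
      field: event.kind
      value: pipeline_error
      if: ctx.error?.message != null
      allow_duplicates: false
on_failure:
  # Root-level handler for failures not caught by a processor handler.
  - append:
      field: error.message
      value: '{{{ _ingest.on_failure_message }}}'
  - append:
      field: event.kind
      value: pipeline_error
      allow_duplicates: false
```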

Contributor


Cool, makes sense! Thanks!

description: Collect asset logs from Tenable.io.
template_path: httpjson.yml.hbs
vars:
  - name: interval
Member


This one seems to be missing proxy?

Contributor Author


We have added a proxy at the package level. Here's the link to that line:

Comment on lines +37 to +86
- name: access_key
  type: password
  title: Access Key
  description: Access key for the Tenable.io API.
  multi: false
  required: true
  show_user: true
- name: secret_key
  type: password
  title: Secret Key
  description: Secret key for the Tenable.io API.
  multi: false
  required: true
  show_user: true
- name: proxy_url
  type: text
  title: Proxy URL
  multi: false
  required: false
  show_user: false
  description: URL to proxy connections in the form of http[s]://<user>:<password>@<server name/ip>:<port>. Please ensure your username and password are in URL encoded format.
- name: ssl
  type: yaml
  title: SSL Configuration
  description: i.e. certificate_authorities, supported_protocols, verification_mode etc.
  multi: false
  required: false
  show_user: false
  default: |
    #certificate_authorities:
    #  - |
    #    -----BEGIN CERTIFICATE-----
    #    MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
    #    ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
    #    MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
    #    BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
    #    fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
    #    94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
    #    /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
    #    PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
    #    CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
    #    BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
    #    8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
    #    874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
    #    3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
    #    H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
    #    8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
    #    yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
    #    sxSmbIUfc2SGJGCJD4I=
    #    -----END CERTIFICATE-----
Member


Might be a slight nitpick, but there have been several occasions in which users needed different setups per type of data stream, so I am a bit hesitant to leave these at the top of the integration.
However, we can leave them there for now and modify them if necessary.

Contributor Author


Hey @P1llus, the authorization parameters have been added at the package level. If we move the SSL and proxy parameters to the data stream level, then we must move the authorization parameters to the same level as well. Additionally, we are following the same pattern as all the previous connectors. However, if you think changes are required, we can change it.


@infosecwatchman

I know this is still in beta, but is there a way to add this integration myself to my Elastic Stack, without screwing up the integration when it comes out via the normal release?

@jamiehynds

@infosecwatchman glad to hear you're interested in using this integration. We're just wrapping up some final reviews, so it will be easier to wait until it's publicly available.

@vinit-elastic @kcreddy are there any loose ends on the PR review or are we ready to merge?

@infosecwatchman

Great! Do I have to upgrade my stack once this is merged?

@jamiehynds

Great! Do I have to upgrade my stack once this is merged?

Possibly. The integration will require v8.4 or later.

Contributor

kcreddy left a comment


LGTM 👍🏼

@kcreddy
Contributor

kcreddy commented Jan 27, 2023

@infosecwatchman glad to hear you're interested in using this integration. We're just wrapping up some final reviews, so it will be easier to wait until it's publicly available.

@vinit-elastic @kcreddy are there any loose ends on the PR review or are we ready to merge?

I think I was just waiting for @P1llus review to be resolved as well. Looks like we are good. I will merge it now.

@kcreddy kcreddy merged commit 90bb74f into elastic:main Jan 27, 2023
@elasticmachine

Package tenable_io - 0.1.0 containing this change is available at https://epr.elastic.co/search?package=tenable_io

@infosecwatchman

How long does it take for new packages to be loaded into the repository?

@jamiehynds

Hey @infosecwatchman - the integration is now available, however it requires the latest version of the stack, 8.6.

(screenshot attached)

@infosecwatchman

What's a good way to go about troubleshooting an integration? For some reason, I have not been collecting data at the regular intervals as defined. The Tenable asset dataset has not been updated since it was created (i.e. after the initial data capture).
(screenshots attached)

@dhruvil-elastic

dhruvil-elastic commented Feb 13, 2023

What's a good way to go about troubleshooting an integration. For some reason, I have not been recollecting data on the regular intervals as defined. The tenable asset dataset has not been updated since it was created (ie. after the initial data capture).

Hello @infosecwatchman,

The integration is designed so that on the first call all asset events are ingested; after that, only those assets whose 'updated_at' field has changed, or newly added assets, are ingested.
Debug logs will be helpful in confirming whether the integration is working as intended.

Moreover, we are working on an enhancement for a user who faced an issue with a very large export job. Can you please confirm whether this is the same case as yours? If not, would you mind raising a new issue on GitHub?
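The incremental pattern described above can be sketched as an httpjson input that keeps a cursor on updated_at. This is a sketch, not the integration's actual configuration: the endpoint URL, interval, and request body shape are assumptions; only the updated_at-based incremental behavior comes from the discussion.

```yaml
- type: httpjson
  interval: 24h
  request.url: https://cloud.tenable.com/assets/export   # assumed endpoint
  request.method: POST
  request.body:
    filters:
      # Only request assets modified since the last successful run.
      updated_at: '[[ .cursor.last_updated ]]'
  cursor:
    last_updated:
      # Remember the updated_at of the last event seen in the response.
      value: '[[ .last_event.updated_at ]]'
```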

Thank you.

Labels: enhancement (New feature or request), New Integration
Successfully merging this pull request may close these issues: Tenable.io
7 participants