Currently, the Tenable.io integration uses the `last_found` parameter to export only vulnerabilities found since the time of the previous export request. Tenable appears to set the `last_found` value to the time that the scan finished, but after a scan finishes, the results still need to be processed. The vulnerabilities aren't included in any export requests until Tenable has finished publishing the results. This means that data can be missed if Tenable is still processing the results of a scan at the time of an export request.
For example, suppose you are exporting data every hour and a scan finishes at 11:55, so the vulnerabilities have a `last_found` value of 11:55. If Tenable takes 10 minutes to publish the results, then an export request at 12:00 won't include the vulnerabilities from that scan. But an export request at 13:00 also won't include the vulnerabilities, even though they were published at 12:10, because it is only looking for vulnerabilities with a `last_found` value after 12:00.
If instead the integration set the `last_found` parameter to a few minutes before the time of the previous request, it could get data about vulnerabilities that would otherwise be missed. In the example, setting the `last_found` parameter to 10 minutes before the previous request would mean that the 13:00 export request would be looking for vulnerabilities found after 11:50, which would include the results of the 11:55 scan. The ingest pipeline sets the document `_id` to a hash of `event.original`, so this shouldn't create any duplicate documents.
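The proposed change can be sketched as follows. This is a minimal illustration, not the integration's actual code: the function names and the 10-minute `LOOKBACK` value are hypothetical, and the `_id` helper just mirrors the idea of hashing `event.original`.

```python
import hashlib
from datetime import datetime, timedelta, timezone

# Illustrative lookback window; the issue suggests "a few minutes".
LOOKBACK = timedelta(minutes=10)

def next_since(previous_request_time: datetime) -> int:
    """Return the last_found filter for the next export request:
    the previous request time minus a lookback window, as a Unix
    timestamp.  Results that were still being published at the time
    of the previous request then fall inside the next window."""
    return int((previous_request_time - LOOKBACK).timestamp())

def document_id(event_original: str) -> str:
    """Stable document _id derived from the raw event, in the spirit
    of the ingest pipeline's hash of event.original, so a vulnerability
    exported twice maps to the same document instead of a duplicate."""
    return hashlib.sha256(event_original.encode("utf-8")).hexdigest()

# Example from the issue: the previous request ran at 12:00, so the
# 13:00 request asks for vulnerabilities found after 11:50 instead of
# after 12:00, catching the 11:55 scan published at 12:10.
prev = datetime(2023, 1, 1, 12, 0, tzinfo=timezone.utc)
since = next_since(prev)
```

Because re-exported events hash to the same `_id`, widening the window trades a little redundant fetching for completeness without duplicating documents.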
@LaZyDK Right now we are getting all of the data, but I think that's because something changed in the Tenable API. It seems to be returning data on all vulnerabilities regardless of what time you set as the `since` parameter, so we keep getting data on the same vulnerabilities. That does fix the problem of missing data, but it means we are storing multiple duplicate events for the same vulnerabilities. The pipeline you shared in #7671 would solve that problem.
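The deduplication described above can be illustrated with a short sketch. This is an assumption-laden stand-in for the pipeline in #7671 (whose exact processors aren't shown here): it keys documents on a hash of the raw event, so repeated full exports of the same vulnerability collapse to a single document.

```python
import hashlib

def dedup_by_original(raw_events):
    """Collapse repeated exports of the same vulnerability by keying
    each document on a hash of its raw JSON (the event.original text).
    A later copy of the same event overwrites the earlier one instead
    of creating a duplicate document."""
    index = {}
    for raw in raw_events:
        doc_id = hashlib.sha256(raw.encode("utf-8")).hexdigest()
        index[doc_id] = raw  # same event -> same _id -> one document
    return index

# Two exports that both return the same vulnerability yield one document.
docs = dedup_by_original([
    '{"plugin_id": 12345, "asset": "host-a"}',
    '{"plugin_id": 12345, "asset": "host-a"}',
    '{"plugin_id": 67890, "asset": "host-b"}',
])
```

This mirrors how Elasticsearch treats an explicit `_id`: indexing a document with an existing `_id` replaces it rather than adding a new one.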