[Fortinet] Could not index event to Elasticsearch. [sentdelta] value out of range #11
I have noticed that once in a while an event won't get ingested because the [sentdelta] field contains a value such as 18446744073706429550, which is larger than the long type allows in Elastic. Is there a way to handle big integers in Elastic? Otherwise I might just make this a text field, or change the mapping settings to let it index anyway. How has anyone else managed this?

Comments
Looking further, it seems this shouldn't be a valid value anyway, so I think something went amiss when the log was shipped, or the pipeline mangled it. From what I can tell, there are no references to [sentdelta] in the pipeline. To be continued.
I am trying to get it now. Unfortunately, the failing event never makes it into Elastic because of the error, and since we have a lot of data in the pipeline it might be hard to isolate. I can try to find this data in the raw log that is shipped. I don't see anywhere in the pipeline where this field would be parsed. I will post what I find.
sentdelta is defined in the template; it is not parsed specifically, but the kv filter should handle it. The value 18446744073706429550 is around 18,446.74 petabytes. I have also received duration values of around 100+ years, so I think it is an issue with the Fortigate itself, probably the FortiOS version. (Notably, that value sits just below 2^64, which looks like a small negative delta wrapped around an unsigned 64-bit counter.)
It works everywhere else; it's just that when it has that bugged value, it doesn't match the long type, so the event doesn't get indexed. Fortinet will need to fix it. Otherwise, I could add some basic logic: if the deltas have 20+ digits, remove that field and tag the log with an error, or add a new text field with the value, even though the value is useless (see the sketch below).
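A minimal sketch of that guard on the Logstash side, assuming the deltas arrive as string fields from the kv filter; the `_invalid` field names and the tag are hypothetical placeholders:

```
filter {
  # Values with 20+ digits exceed Elasticsearch's signed long range,
  # so park them in a separate field and tag the event instead of losing it.
  if [sentdelta] =~ /^\d{20,}$/ {
    mutate {
      rename  => { "sentdelta" => "sentdelta_invalid" }
      add_tag => [ "fortinet_delta_out_of_range" ]
    }
  }
  if [rcvddelta] =~ /^\d{20,}$/ {
    mutate {
      rename  => { "rcvddelta" => "rcvddelta_invalid" }
      add_tag => [ "fortinet_delta_out_of_range" ]
    }
  }
}
```

Dynamic mapping would typically index the renamed field as text, so the raw value is kept for inspection without tripping the long mapping on the original field.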
sentdelta and rcvddelta only occur on logid=20, which is excluded from all analysis, so you might as well drop logid=20 completely anyway (see the sketch below).
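A sketch of that drop, assuming logid has been parsed into its own field; FortiOS log IDs are often zero-padded, so the exact comparison values are assumptions to verify against the raw logs:

```
filter {
  # logid=20 carries the broken delta counters and is excluded from
  # analysis anyway, so drop the whole event.
  if [logid] == "20" or [logid] == "0000000020" {
    drop { }
  }
}
```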
Closing this issue, as this event should either be dropped or the field mapped to a string type, depending on an organization's use case (a sketch of the string mapping approach follows).
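For the string route, a sketch of the Logstash side, assuming the index template is also updated to map these fields as keyword or text instead of long (otherwise Elasticsearch would still coerce the string back to a number and reject out-of-range values):

```
filter {
  # Ship the deltas as strings; with a keyword/text mapping in the template,
  # out-of-range values like 18446744073706429550 index without error.
  mutate {
    convert => {
      "sentdelta" => "string"
      "rcvddelta" => "string"
    }
  }
}
```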