From past experience with recog, using the API against a large dataset gets slow, mostly because the only option is a linear scan: every signature regex is run against every item being matched, so each new signature adds linear time to the run.
This might break the more advanced matchers, but if there were a way to compile the regexes into a single combined matcher (Boyer-Moore-style prefiltering, or a single-pass engine like an IDS or DPI device would use) that doesn't have to start over and run multiple passes, it would probably speed things up a good amount.
https://github.com/rapid7/recog/blob/master/lib/recog/matcher.rb#L26
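As a rough illustration of the idea (not recog's actual API), many regexes can be folded into one alternation with named groups, so each input is scanned once and the group that captured identifies the signature. The signature patterns below are hypothetical stand-ins, and this sketch ignores the per-signature parameter extraction that recog's real matchers do:

```ruby
# Hypothetical signature set; names and patterns are illustrative only.
signatures = {
  apache: /Apache\/(\d[\d.]*)/,
  nginx:  /nginx\/(\d[\d.]*)/,
  iis:    /Microsoft-IIS\/(\d[\d.]*)/,
}

# Wrap each pattern in a uniquely named group and union them, so a
# single match tells us which signature fired without re-scanning.
combined = Regexp.union(
  signatures.map { |name, re| /(?<#{name}>#{re.source})/ }
)

def identify(combined, banner)
  m = combined.match(banner) or return nil
  # Whichever named group captured is the signature that matched.
  m.names.find { |n| m[n] }&.to_sym
end

puts identify(combined, "Server: nginx/1.25.3")  # => nginx
```

A real fix would be subtler: the regex engine still backtracks within the alternation, so the big wins come from engines built for multi-pattern matching (Aho-Corasick prefilters, or DFA-based libraries such as RE2 or Hyperscan), which is closer to what IDS/DPI devices do.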