Backports/704/v1 #10437
Conversation
Issue: 6755

When NetmapOpen encounters an error opening the netmap device, it'll retry a bit. When the retry limit is reached, it'll shut down Suricata. This commit ensures that the device list lock is not held when closing all open devices before terminating Suricata.

(cherry picked from commit 364adee)
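A minimal sketch of the locking rule this commit enforces; all names here are hypothetical, not the actual Suricata sources. If the close-all helper takes the device-list lock itself, the caller must release that lock before invoking it, or the shutdown path deadlocks on its own mutex:

```c
/* Hypothetical sketch, not Suricata code: release the list lock before
 * calling a cleanup helper that acquires the same lock internally. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t device_list_lock = PTHREAD_MUTEX_INITIALIZER;

static int TryOpenDevice(void)
{
    return -1; /* stand-in: pretend the netmap device never opens */
}

static void CloseAllDevices(void)
{
    /* This helper locks the list itself, so callers must NOT already
     * hold device_list_lock, or this call deadlocks against them. */
    pthread_mutex_lock(&device_list_lock);
    /* ... walk the device list and close each entry ... */
    pthread_mutex_unlock(&device_list_lock);
}

static int OpenDeviceWithRetry(int max_retries)
{
    pthread_mutex_lock(&device_list_lock);
    for (int i = 0; i < max_retries; i++) {
        if (TryOpenDevice() == 0) {
            pthread_mutex_unlock(&device_list_lock);
            return 0;
        }
    }
    /* Retry limit reached: drop the lock BEFORE cleanup. The fix in this
     * commit amounts to making sure this unlock happens first. */
    pthread_mutex_unlock(&device_list_lock);
    CloseAllDevices();
    fprintf(stderr, "netmap open failed, shutting down\n");
    exit(EXIT_FAILURE);
}

int main(void)
{
    return OpenDeviceWithRetry(3);
}
```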
Ensure that the mutex protecting the condition variable is held before signaling it. This ensures that the thread(s) awaiting the signal are notified.

Issue: 6569

(cherry picked from commit 2a1a70b)
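For context, a minimal pthread sketch of the pattern this commit enforces; the names are illustrative, not Suricata's. Signaling while holding the mutex that guards the shared state guarantees a waiter either is already blocked in pthread_cond_wait() or will observe the updated state before it waits:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t ctrl_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ctrl_cond = PTHREAD_COND_INITIALIZER;
static bool work_ready = false;

void signal_work_ready(void)
{
    pthread_mutex_lock(&ctrl_mutex);  /* hold the mutex...            */
    work_ready = true;                /* ...update the shared state... */
    pthread_cond_signal(&ctrl_cond);  /* ...and signal under the lock  */
    pthread_mutex_unlock(&ctrl_mutex);
}

void wait_for_work(void)
{
    pthread_mutex_lock(&ctrl_mutex);
    while (!work_ready)               /* loop guards against spurious wakeups */
        pthread_cond_wait(&ctrl_cond, &ctrl_mutex);
    pthread_mutex_unlock(&ctrl_mutex);
}
```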
The runtime complexity of insertion sort here is approximately O((h*n)^2), where h is the size of the HOME_NET and n is the number of IP-only rules that use the HOME_NET. Replacing this with qsort significantly improves rule load time when a large HOME_NET is used in combination with a moderate number of IP-only rules.

(cherry picked from commit 17f9d7a)
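As a rough illustration of the idea behind the change (the data layout and names are invented here, not taken from Suricata's rule loader): collecting the entries first and sorting once with the C library's qsort() costs O(m log m) over m entries, versus the quadratic cost of insertion-sorting each entry on arrival.

```c
/* Illustrative sketch: sort a batch of IPv4 addresses once with qsort(). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int CompareAddr(const void *a, const void *b)
{
    uint32_t x = *(const uint32_t *)a;
    uint32_t y = *(const uint32_t *)b;
    return (x > y) - (x < y); /* avoids overflow that x - y could cause */
}

int main(void)
{
    uint32_t addrs[] = { 0x0a000005, 0x0a000001, 0xc0a80001, 0x0a000003 };
    size_t n = sizeof(addrs) / sizeof(addrs[0]);

    qsort(addrs, n, sizeof(addrs[0]), CompareAddr);

    for (size_t i = 0; i < n; i++)
        printf("0x%08x\n", addrs[i]);
    return 0;
}
```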
When an interface with dots in its name is used, per-worker stats are nested by the dot-separated components of the interface name due to the usage of OutputStats2Json(). Prevent this by using OutputStats2Json() on a per-thread specific object and setting this object into the threads object using json_object_set_new(), which won't do the dot expansion.

This was tested by creating an interface with dots in the name and checking the stats:

```
ip link add name a.b.c type dummy
```

With Suricata 7.0.2, sniffing on the a.b.c interface results in the following worker stats format:

```json
"threads": {
  "W#01-a": {
    "b": {
      "c": {
        "capture": {
          "kernel_packets": 0,
```

After this fix, the output looks as follows:

```json
"threads": {
  "W#01-a.b.c": {
    "capture": {
      "kernel_packets": 0,
```

Ticket: OISF#6732

(cherry picked from commit b8b8aa6)
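A minimal jansson sketch of the mechanism described above; OutputStats2Json() is the Suricata helper named in the commit, everything else here is illustrative. json_object_set_new() stores the key verbatim, so a dotted thread name stays a single flat key instead of being split into nested objects:

```c
/* Sketch: a dotted key is used as-is, no "a" -> "b" -> "c" nesting.
 * Build with: cc stats_key.c -ljansson */
#include <jansson.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    json_t *threads = json_object();
    json_t *worker = json_object();
    json_t *capture = json_object();

    json_object_set_new(capture, "kernel_packets", json_integer(0));
    json_object_set_new(worker, "capture", capture);
    /* The dotted thread name is stored verbatim as one key. */
    json_object_set_new(threads, "W#01-a.b.c", worker);

    char *out = json_dumps(threads, JSON_INDENT(2));
    printf("%s\n", out); /* {"W#01-a.b.c": {"capture": {"kernel_packets": 0}}} */
    free(out);
    json_decref(threads);
    return 0;
}
```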
The main purpose is to validate that the "30" of bond0.30 isn't expanded into a nested object during serialization.

(cherry picked from commit 08db0f3)
A deadlock could occur at startup, where a loader thread would get stuck on its condition variable while the main thread was polling the loaders' task results. The vector to the deadlock is as follows:

```
main                              loader

DetectEngineMultiTenantSetup
-DetectLoaderSetupLoadTenant
--DetectLoaderQueueTask
---lock loader
---add task
---unlock loader
                                  lock loader
                                  check/exec tasks
                                  unlock loader
---wake up threads
                                  lock ctrl mutx
                                  cond wait ctrl
                                  unlock ctrl
-DetectLoadersSync
--lock loader
--check tasks
--unlock loader
```

Between the main thread unlocking the loader and waking up the threads, it is possible that the loader has already moved ahead but not yet entered its conditional wait. The main thread sends its condition signal, but since the loader isn't yet waiting on it, the signal is ignored. Then, when the loader does enter its conditional wait, the signal is not sent again.

This patch updates the logic to send signals much more often. It also makes sure that the signal is sent under lock, as the API requires.

Bug: OISF#6768.

Co-authored-by: Shivani Bhardwaj <shivani@oisf.net>
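This is the same signal-under-lock pattern as the earlier sketch, extended here to a runnable two-thread demo of the lost-wakeup race described above; the names are invented, not the Suricata loader API. Because pthread_cond_signal() wakes only threads that are already waiting (a signal is never queued), the fix pairs the signal with shared state updated under the same mutex, so a loader that arrives late still sees the state and never blocks:

```c
/* Hypothetical demo of the lost-wakeup fix. Build with: cc demo.c -pthread */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t ctrl = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ctrl_cond = PTHREAD_COND_INITIALIZER;
static bool tasks_queued = false;

/* main thread: queue work, then wake the loader *under the lock* */
static void queue_tasks(void)
{
    pthread_mutex_lock(&ctrl);
    tasks_queued = true;
    /* The waiter holds ctrl around its predicate check, so it is either
     * already blocked in pthread_cond_wait() and gets this signal, or it
     * has not locked ctrl yet and will see tasks_queued == true. */
    pthread_cond_signal(&ctrl_cond);
    pthread_mutex_unlock(&ctrl);
}

/* loader thread: even if the signal fired before we got here, the
 * predicate check under the lock prevents sleeping forever */
static void *loader(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&ctrl);
    while (!tasks_queued)
        pthread_cond_wait(&ctrl_cond, &ctrl);
    pthread_mutex_unlock(&ctrl);
    puts("loader: picked up tasks");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, loader, NULL);
    queue_tasks();
    pthread_join(t, NULL);
    return 0;
}
```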
Codecov Report

Additional details and impacted files:

```diff
@@             Coverage Diff              @@
##           main-7.0.x    #10437      +/-  ##
==============================================
+ Coverage       82.44%    82.47%   +0.02%
==============================================
  Files             975       976       +1
  Lines          274925    275027     +102
==============================================
+ Hits           226672    226832     +160
+ Misses          48253     48195      -58
```

Flags with carried forward coverage won't be shown.
Staging looks OK.
All commits seem to be there 👌🏽
Information:

- ERROR: QA failed on SURI_TLPW2_autofp_suri_time.
- ERROR: QA failed on SURI_TLPR1_suri_time.

Pipeline 18530
Merged in #10442, thanks!
https://redmine.openinfosecfoundation.org/issues/6768
https://redmine.openinfosecfoundation.org/issues/6756
https://redmine.openinfosecfoundation.org/issues/6746
https://redmine.openinfosecfoundation.org/issues/6784
https://redmine.openinfosecfoundation.org/issues/6764
https://redmine.openinfosecfoundation.org/issues/6785