Performance for Vulnerability Detection module in clustered environments #5313
Comments
Automatic

The Minimum Activity and High Activity performance tests failed due to a "no space left" error, reported in https://github.com/wazuh/wazuh-jenkins/issues/6475. Only the Medium Activity performance tests finished successfully.
Medium Activity 🔴

Build: https://ci.wazuh.info/job/CLUSTER-Workload_benchmarks_metrics/511/

Logs 🔴
- Master 🟡
- Worker 1 🔴
- Worker 2 🟡
- Indexer 1 🟢 No warnings or errors
- Indexer 2 🟢 No warnings or errors

Metrics 🔴
- Master 🟢
- Worker 1 🔴
- Worker 2 🔴
- Indexer 1 🟢 No abnormal behavior detected
- Indexer 2 🟢 No abnormal behavior detected

Statistics 🟢

Vulnerabilities State 🟢
The vulnerability generator module, used by the simulate-agents script, transmits 100 vulnerable packages to the manager and then confirms their removal. This behavior shows up as wave-like curves in the graphs, reaching a peak on each repetition after all vulnerabilities are processed. The plot shows that the indexer connector does not match the ideal expected curve, although the simulator itself is performing as intended. It would be highly beneficial to add testing methods that verify whether the final number of vulnerabilities matches expectations at specific points during the test.

Alerts 🟢
We expect the alerts generated by the workers and the manager to match the indexed alert counts; however, there is a discrepancy. Given the high activity levels, some variance between written and indexed alerts is expected, but it would be advantageous to add testing methods that gradually mitigate this, stabilizing the environment over time.

Evidence collection 🔴
The following errors have been detected regarding the evidence-collection capabilities of the pipeline:
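The comment above suggests two kinds of checks: verifying the indexed vulnerability count at specific points in a simulator cycle, and tolerating only a small gap between written and indexed alerts. A minimal sketch of both, assuming counts have already been sampled from the indexer (function names, thresholds, and the 2% tolerance are illustrative assumptions, not the real test suite's interface):

```python
def vuln_cycle_ok(samples, expected_peak=100, baseline=0):
    """Check that indexed-vulnerability counts sampled over one simulator
    cycle rise to the expected peak (100 packages sent) and return to the
    baseline once their removal is confirmed."""
    return max(samples) == expected_peak and samples[-1] == baseline


def alerts_within_tolerance(written, indexed, rel_tol=0.02):
    """Accept a small relative gap between alerts written by the managers
    and alerts indexed, as some variance is expected under high activity."""
    if written == 0:
        return indexed == 0
    return abs(written - indexed) / written <= rel_tol
```

Checks like these could run at fixed checkpoints during the test, turning the visual inspection of the graphs into pass/fail assertions.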
Following a discussion with @juliamagan, we have decided not to replicate the failed Minimum Activity and High Activity performance tests. Instead, these tests will be re-launched in RC2.
Good job, but the graphs of the Indexer 1 metrics cannot be displayed, perhaps due to an error when writing the comment.
LGTM
Description
This issue is dedicated to conducting a thorough performance analysis of two proposed development approaches:
The objective is to perform performance tests and compare the results of both approaches. This comparative analysis will provide a comprehensive understanding of the potential impact on the product.
Test environment
Note
The load balancer is located on the master node.
- 23058 Development Packages
- 22867 Development Packages
Test Cases
Testing
Automatic
Methodology
Utilizing the CLUSTER-Workload_benchmarks_metrics pipeline to execute specified test cases automatically. Results will be manually analyzed and shared with the development team for validation adjustments.
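As a rough illustration of how a parametrized pipeline run like this could be triggered remotely, a minimal sketch using Jenkins' standard buildWithParameters endpoint (the base URL and parameter names here are assumptions for illustration, not the actual interface of the CLUSTER-Workload_benchmarks_metrics job; authentication via an API token would be supplied separately):

```python
from urllib.parse import urlencode


def build_trigger_url(base_url, job_name, params):
    """Construct the URL for Jenkins' buildWithParameters remote-trigger
    endpoint from a base URL, a job name, and a dict of job parameters."""
    return f"{base_url}/job/{job_name}/buildWithParameters?{urlencode(params)}"


# Hypothetical invocation; the real job's parameter names may differ.
url = build_trigger_url(
    "https://ci.example.org",
    "CLUSTER-Workload_benchmarks_metrics",
    {"ACTIVITY_LEVEL": "medium"},
)
```

A POST to such a URL (with valid credentials) is the usual way to script repeated benchmark runs for each test case.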
Test Cases
Manual
Methodology
Customizing the set of vulnerable packages is not feasible in automatic testing. Therefore, manual testing will utilize a larger set of 10,000 vulnerabilities to identify any potential instability in environments with a high vulnerability count. The following Wazuh-QA tools will be employed for manual performance analysis:
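To give a sense of the scale involved, a minimal sketch of generating a deterministic 10,000-entry synthetic vulnerable-package set to feed a simulator (the package name and version scheme is an illustrative assumption, not the actual input format of the Wazuh-QA tools):

```python
import random


def generate_vulnerable_packages(n=10_000, seed=42):
    """Build n synthetic package entries; a fixed seed keeps repeated
    performance runs comparable."""
    rng = random.Random(seed)
    return [
        {
            "name": f"vuln-pkg-{i:05d}",
            "version": f"{rng.randint(0, 9)}.{rng.randint(0, 9)}.{rng.randint(0, 9)}",
        }
        for i in range(n)
    ]


packages = generate_vulnerable_packages()
```

Fixing the seed means every manual run stresses the environment with the same high vulnerability count, so instability can be attributed to the environment rather than to input variation.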
Test Cases
Conclusion 🔴
New Issues
Known issues
Note
Manual performance testing for the Minimum Activity and High Activity cases has not been performed. More information in #5313 (comment)