Support custom workflow benchmark #1898
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@             Coverage Diff              @@
##           unstable    #1898      +/-   ##
============================================
- Coverage     71.06%   70.89%   -0.17%
============================================
  Files           123      123
  Lines         65671    65760      +89
============================================
- Hits          46669    46623      -46
- Misses        19002    19137     +135
Cool idea. It's very customizable, but maybe overkill? For key distribution patterns, it seems more practical to use a normal distribution and add a regular parameter to specify the desired variance or standard deviation. If it is specified, the normal distribution is enabled; otherwise the legacy uniform distribution is used. Key-size distribution and cluster-slot distribution can use similar distributions and parameters. We should think about performance too. Lua is quite slow, and valkey-benchmark already has a hard time generating traffic fast enough; it needs multiple threads to keep valkey-server busy on a single thread.
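(To make the suggestion above concrete, here is a minimal Lua sketch of such a key-distribution switch. The function name and parameters are illustrative only, assumptions for this sketch, and not an existing valkey-benchmark option.)

```lua
-- Minimal sketch (illustrative only): when a standard deviation is given,
-- draw key indices from a normal distribution centred on the middle of the
-- key space; otherwise fall back to the legacy uniform pattern.
local function next_key_index(keyspace, stddev)
  if not stddev then
    return math.random(0, keyspace - 1)            -- legacy uniform pattern
  end
  -- Box-Muller transform: two uniform samples -> one standard normal sample.
  local u1 = 1 - math.random()                     -- (0, 1], avoids log(0)
  local u2 = math.random()
  local z = math.sqrt(-2 * math.log(u1)) * math.cos(2 * math.pi * u2)
  local idx = math.floor(keyspace / 2 + z * stddev)
  -- clamp into the valid key range
  if idx < 0 then idx = 0 end
  if idx >= keyspace then idx = keyspace - 1 end
  return idx
end

-- Example: most keys cluster within one stddev of key 50000.
print(next_key_index(100000, 10000))
```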
@zuiderkwast I chose Lua because Valkey itself relies on Lua. From a performance perspective, on the one hand the cost is mainly in generating the commands, which may not amount to a significant loss compared with the network interaction (depending on the complexity of the workflow). On the other hand, the design is not strongly dependent on Lua; in the future, you can refer to the module pattern or use
At least compared with workload benchmarks implemented in different languages (with even worse performance), it's better for us to unify them.
Interesting idea! I have noticed that the benchmark tool currently lacks scalability, and using Lua is a great approach.
@arthurkiller, cool idea. See https://github.com/asafpamzn/valkey-benchmark-node. I think that both should work; some users will prefer the above and others will prefer to use
So is the main purpose that users can simulate their own application's workload, in order to design benchmarks that are more realistic for their use case?
Yeah, I get it and I think it's a good idea. Currently I need 3-4 threads in valkey-benchmark to keep one single-threaded valkey-server instance busy (running near 100% CPU) locally with pipelining enabled. How much more CPU does it take to generate the same load with Lua? If it's not more than twice as much, then I think it's still fine; you can always run valkey-benchmark with more threads.
This PR implements a custom benchmark method: the ability to generate the workload through Lua. Using Lua to generate requests enables more capabilities:
It can be considered a continuation of this issue.
The first step is to write a workload script (workflow.lua):
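(The actual script from the PR is not reproduced in this excerpt. As a rough illustration only, and assuming the benchmark calls a user-defined entry point to obtain each command, a workflow.lua might look something like the following; the function name `build_request` and the return format are assumptions, not the PR's confirmed interface.)

```lua
-- Hypothetical workflow.lua sketch. The entry-point name and the shape of the
-- returned value are assumptions for illustration; the interface actually
-- introduced by this PR may differ.
local counter = 0

-- Assumed contract: called once per request, returns the next command as an
-- array of strings.
function build_request()
  counter = counter + 1
  if counter % 10 == 0 then
    -- every tenth request reads a random key back
    return { "GET", "key:" .. math.random(1, 100000) }
  end
  -- otherwise write a key with a small fixed-size payload
  return { "SET", "key:" .. math.random(1, 100000), string.rep("x", 64) }
end
```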
Execute it with the following command:
Tip:
Todo: