Not distributing resource load equally with each replica #89
@srinathganesh1 this is probably not related to neo4j-helm; it has to do with how you generate load and what your client connection strategy to Neo4j is. What you describe sounds like whatever is generating the load is sending all queries to the Neo4j leader. If the load is all writes, that's going to happen no matter what you do, because only the leader can service writes. If it's a mixture of reads and writes, then the reads should be getting spread out to the other cluster members. In your client code, or whatever's generating the load, pay particular attention to whether you're using "autocommit transactions" or explicit read/write transactions. Autocommit transactions will generally always go to the leader and will cause what you're describing. I recommend this article to understand what's happening: https://medium.com/neo4j/querying-neo4j-clusters-7d6fde75b5b4
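To make the distinction concrete, here is a minimal sketch of the two patterns, assuming the official Neo4j Python driver (4.x; the connection details and query are illustrative, and older drivers use `bolt+routing://` instead of `neo4j://`):

```python
from neo4j import GraphDatabase

# A routing URI (neo4j://) is needed for the driver to spread reads across
# the cluster; a plain bolt:// URI pins every query to a single member.
driver = GraphDatabase.driver("neo4j://my-cluster:7687",
                              auth=("neo4j", "password"))

def count_nodes(tx):
    return tx.run("MATCH (n) RETURN count(n) AS c").single()["c"]

with driver.session() as session:
    # Autocommit transaction: routed to the leader even though it's a read.
    session.run("MATCH (n) RETURN count(n)")

    # Explicit read transaction: the driver can route this to a follower,
    # which is what spreads load across the cluster.
    # (In 5.x drivers this method is called execute_read.)
    session.read_transaction(count_nodes)
```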
Hi @moxious, I have updated the original post with a snippet of my code (it's Python code that does many read queries in parallel). Next, to rule out any issues with my own code, I used another script to load test; it's written below. My observations from `kubectl top pods`:
There's a conceptual problem here. The graph-workload tool is fine, but when you use `--query`, the tool doesn't know whether you're doing reads or writes. So it chooses to do a `writeTransaction` for you even if it's a read (it doesn't parse Cypher). This in turn means all of your queries get routed to the leader. If you like, you can open an issue on that workload tool repo and I'll fix it when I can. You need the ability to pass a "mode" flag, like this:
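(The exact example didn't survive on this page; the following is a hypothetical invocation where only the `--mode READ` flag is confirmed later in the thread. The command name and remaining flags are assumptions, so check the graph-workload README for the real syntax.)

```sh
# Hypothetical invocation: only --mode READ is confirmed in this thread;
# the command name and other flags are assumptions.
graph-workload -a neo4j://my-neo4j:7687 -u neo4j -p secret \
  --query 'MATCH (n) RETURN count(n)' \
  --mode READ
```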
This would tell the tool to run read transactions, which would spread them out across your cluster and utilize CPU more evenly. Right now I think you're just beating up the leader. So you have a couple of options for tight control over what you want.
I did try out a Python-based test where the READ/WRITE mode is set on the queries. Sample code:
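(The original snippet wasn't preserved on this page; below is a minimal sketch of the pattern being described, assuming a 4.x Python driver where the access mode is set per session.)

```python
import neo4j
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://my-cluster:7687",
                              auth=("neo4j", "password"))

def run_read(query):
    # Opening the session in READ access mode tells the routing driver that
    # this work may go to a follower instead of the leader.
    with driver.session(default_access_mode=neo4j.READ_ACCESS) as session:
        return session.run(query).data()
```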
and with this code, too, I am facing a similar imbalance of load. I will try out the changes from your reply as well.
@srinathganesh1 v0.5.1 of graph-workload is now available and has a `--mode` flag. If you do what you were doing but include `--mode READ` with the latest code, it should distribute reads across all of your followers: https://github.com/moxious/graph-workload/releases/tag/v0.5.1

I'm going to close this for now, as I'm pretty sure this issue is unrelated to Helm and Kubernetes. But I really recommend you read the article I linked to understand what's happening and why: https://medium.com/neo4j/querying-neo4j-clusters-7d6fde75b5b4

Keep in mind: in a 3-node causal cluster, if you send thousands of reads to your cluster, they will typically be distributed among the 2 followers. If you send thousands of writes to your cluster, they'll all go to the leader. This means that if you truly want to balance the CPU of all 3 machines in the cluster, you need a mixed read/write workload, which you can generate by running the workload tool twice concurrently.
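(A sketch of what running the tool twice concurrently might look like; `--mode READ` is confirmed above, while `--mode WRITE`, the command name, and the other flags are assumptions.)

```sh
# Read workload -> spread across the followers.
graph-workload -a neo4j://my-neo4j:7687 -u neo4j -p secret \
  --query 'MATCH (n) RETURN count(n)' --mode READ &

# Write workload -> always the leader. (--mode WRITE is an assumption.)
graph-workload -a neo4j://my-neo4j:7687 -u neo4j -p secret \
  --query 'CREATE (:LoadTest {ts: timestamp()})' --mode WRITE &

wait
```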
ok thank you
I have a Causal Cluster of 3 instances, and I am sending it a bunch of massive queries for load testing.
When I do `kubectl top pods`, I can see the CPU peaking for ONE instance, while the rest of the instances are not taking much load. Later on, one more pod receives CPU load. In summary, each pod is not getting an equal CPU load. Is this known behavior? Is there any way to configure it?
Type of queries: ALL READ QUERIES
Code Snippet:
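(The snippet itself was lost from this page; the following is a hypothetical reconstruction of the `do_read` helper referenced below, assuming the official Neo4j Python driver.)

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://my-cluster:7687",
                              auth=("neo4j", "password"))

def do_read(query):
    with driver.session() as session:
        # session.run() is an autocommit transaction; in a session without
        # an explicit READ access mode it is routed to the leader, which
        # would match one pod taking nearly all of the CPU load.
        return session.run(query).data()
```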
Then I invoke `do_read(query)` with multiple parallel connections (via Python Celery).

Version: latest Helm version