Vttablet is blowing up the memory usage during a sysbench (oomkill) #5511
Comments
Actually, after adding the flag …
Understood; setting the above parameter configures the underlying MySQL with this value:
Now I have ever-growing memory usage on MySQL. Need to figure out why.
Forget about my previous message; these parameters are not related.
@Smana, memory usage on MySQL is generally bounded by the setting of …
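As a rough illustration of what "bounded by the settings" means (all numbers below are hypothetical, not taken from this cluster), mysqld's worst-case footprint is approximately the global buffers plus the per-connection buffers multiplied by the maximum connection count:

```shell
# Back-of-the-envelope mysqld memory budget (all values hypothetical).
# footprint ≈ innodb_buffer_pool_size + max_connections * per-connection buffers
buffer_pool=$((256 * 1024 * 1024))  # 256 MiB buffer pool
per_conn=$((16 * 1024 * 1024))      # ~16 MiB of sort/join/read buffers per connection
max_conn=100
echo $((buffer_pool + max_conn * per_conn))  # worst-case bytes, here 1946157056 (~1.8 GiB)
```

If the container's memory limit is set below this worst case, an OOM kill under load is possible even with a "reasonable" buffer pool size.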
Overview of the Issue
After successfully installing Vitess using the Helm chart, we wanted to run a TPC-C benchmark using sysbench in order to validate that we get acceptable performance.
However, during the sysbench `prepare` phase we're facing ever-growing memory usage by the `vttablet` container until the pod restarts.
Reproduction Steps
Create a kubernetes cluster v1.13.11-gke.14
Configure Helm (v2.14.3), init tiller with the cluster-admin role
Install the etcd operator
warn: I've created a storage class in order to use SSD storage
Run the sysbench `prepare` command
warn: You will have to expose the vtgates; here I used a GCP internal load balancer (the hostname is `vitess`)
After a few seconds we can see that the vttablet memory usage reaches the limit and the pod is restarted.
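For reference, a TPC-C `prepare` invocation of this shape could look like the sketch below. The script name, host, credentials, and sizing are placeholders (they assume Percona's tpcc scripts for sysbench) and are not taken from this report:

```shell
# Hypothetical sysbench TPC-C data load (prepare) against the exposed vtgate.
# Host, credentials, table counts, and thread count are placeholders.
sysbench tpcc.lua \
  --mysql-host=vitess \
  --mysql-port=3306 \
  --mysql-user=sbtest \
  --mysql-password=sbtest \
  --tables=10 \
  --scale=10 \
  --threads=8 \
  prepare

# Watch the tablet pods while prepare runs (label selector is an assumption):
kubectl top pod -l component=vttablet
kubectl get pods -w
```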
I tried to build a new vitess/lite image from the master branch in order to include this change: #5444. But that didn't help.
Operating system and Environment details
Kubernetes cluster running on GKE version v1.13.11-gke.14
Helm version v2.14.3
Sysbench version 1.0.18
Log Fragments
On Slack, @makmanalp suggested I provide dumps in order to get more info for debugging.
Please find them here
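If the dumps are Go pprof heap profiles (assuming vttablet's HTTP status port exposes the standard Go `/debug/pprof` endpoints), they can be inspected along these lines; the pod name, port, and file name are placeholders:

```shell
# Hypothetical heap-profile inspection; pod name and port are placeholders.
kubectl port-forward pod/vttablet-0 15002:15002 &

# Top allocators by in-use space, fetched live from the tablet:
go tool pprof -top http://localhost:15002/debug/pprof/heap

# Or, for an already-saved profile file:
go tool pprof -top heap.prof
```

Comparing two heap profiles taken a few minutes apart (pprof's `-base` flag) is a common way to see which allocation sites are actually growing.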