
Customize default number of free API slots for NDBAPI apps #2

Closed


asaintsever

Another PR to allow customizing the currently hardcoded number of free API slots for NDBAPI apps.

@asaintsever
Author

I confirm the code being submitted is offered under the terms of the OCA, and that I am authorized to contribute it.

@mysql-oca-bot

Hi, thank you for your contribution. Your code has been assigned to an internal queue. Please follow bug http://bugs.mysql.com/bug.php?id=106634 for updates.
Thanks

@lkshminarayanan
Contributor

Hey @asaintsever! We have already added support for this in our upcoming release along with the ability to scale up MySQL Servers without doing a full rolling restart of all management and data nodes if sufficient free API slots are available. Thanks again for your contribution!

@asaintsever
Author

asaintsever commented Mar 9, 2022

Hello @lkshminarayanan

Do you know if your next release will allow setting the resources (CPU, memory, ephemeral-storage requests/limits) on the deployments generated by the operator?

I ask because I have run into evictions of my MySQL pods several times, with the following errors:

4m53s       Warning   EvictionThresholdMet     node/ip-xxx.eu-west-1.compute.internal              Attempting to reclaim ephemeral-storage
4m53s       Warning   Evicted                  pod/xyz-cluster-mysqld-847d69c775-8jrx6                   The node was low on resource: ephemeral-storage. Container mysqld was using 40Ki, which exceeds its request of 0.

Currently, the only workaround is to deploy a global ResourceQuota resource to set requests.ephemeral-storage and limits.ephemeral-storage, but the drawback is that such a setting applies to all pods using emptyDir volumes. I would like to restrict this configuration to the MySQL resources only.
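For reference, a minimal sketch of that workaround, assuming the quota is created in the namespace hosting the NDB cluster (the name, namespace and values below are illustrative only):

```yaml
# Illustrative ResourceQuota: caps aggregate ephemeral-storage requests/limits
# in the namespace and, once active, requires every pod created there to
# declare ephemeral-storage requests/limits. It cannot be scoped to the MySQL
# pods only, which is the drawback mentioned above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota   # hypothetical name
  namespace: mysql-cluster        # assumption: namespace of the NDB cluster
spec:
  hard:
    requests.ephemeral-storage: 2Gi
    limits.ephemeral-storage: 4Gi
```

(In practice such a quota is usually paired with a LimitRange that injects default ephemeral-storage requests/limits into pods that omit them, which is how the values end up on the operator-generated pods.)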

Being able to control resources used by the workloads generated by the operator is considered a good practice. Is it on your roadmap?

@asaintsever
Author

After further investigation, it turned out my issue was simply due to an undersized worker node (not enough storage). But my point about being able to set resources on the generated workloads still stands.

@lkshminarayanan
Contributor

Hi @asaintsever,

Yes, that is something we are currently working on. The plan is to allow setting requests and limits for all types of nodes (i.e. the Management Nodes, Data Nodes and MySQL Servers) via the NdbCluster spec. In their absence, the Operator will try to deduce and set sensible resource requests in the pod spec when creating the workloads.
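For context, a rough sketch of the kind of settings being discussed, i.e. the standard Kubernetes resources block that would end up in the generated pod templates (the container name and values are purely illustrative, not defaults chosen by the Operator):

```yaml
# Fragment of a generated pod template; values are assumptions for illustration.
containers:
  - name: mysqld
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
        ephemeral-storage: 1Gi
      limits:
        cpu: "2"
        memory: 4Gi
        ephemeral-storage: 2Gi
```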

@lkshminarayanan
Contributor

lkshminarayanan commented May 22, 2022

Hi @asaintsever,

A new version, v0.2.0 is now available. We have added support for specifying data node config, nodeSelector, resource/limits , num of free API Slots via the NdbCluster spec. (And a lot more improvements in general). There is a limitation w.r.to the data node config, nodeSelector, resource/limits - you cannot update them once you have created the NdbCluster object. Handling an update to these fields turned out to be a not so trivial task. We are currently working on it and it will most probably be available in the next release (which won't take as long as 0.2.0 did). Do try out v0.2.0 and let us know if you have any issues/request. Thank you!

@asaintsever
Author

asaintsever commented May 27, 2022

Hi @lkshminarayanan!

Thanks for this release. I took some time to browse the code and saw that you ended up with an implementation rather similar to what I proposed in my PRs, which is nice as it eased the migration effort.

I updated one of my charts and ran some basic tests on a small minikube cluster (no time right now to run longer tests on a big EKS cluster). Looks OK so far :-)
