HPA kills new pods instantly after creation #561
Comments
Additional information: I suspect that Argo CD detects a diff in the deployment.yml (`replicas: 1` != `replicas: 3`) and then reverts it. So removing the `replicas` field from the Deployment when HPA is enabled should prevent this.
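If the cause is indeed Argo CD reverting the replica count during sync, one workaround on the Argo CD side is to tell it to ignore that field in the diff. A minimal sketch of an Application spec (the Application name and surrounding structure are illustrative, not taken from this issue):

```yaml
# Hypothetical Argo CD Application snippet: ignoreDifferences stops
# Argo CD from reverting the replica count the HPA manages at runtime.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nextcloud
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
```

This only masks the symptom in Argo CD; removing `replicas` from the chart template when HPA is enabled remains the proper fix.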
@lorrx thanks for the issue and the updates 🙏. If you have found a solution that works both in Argo CD and via Helm directly on a k8s cluster, please feel free to submit a PR to correct the issue.
@lorrx if you can, could you try pointing your Argo CD Application at my branch? Sorry if I'm over-explaining, but just in case you need the info, it would be something like this for your source:

```yaml
repoURL: 'https://github.com/jessebot/nextcloud-helm'
targetRevision: fix/dont-set-replicas-in-pod-if-hpa-enabled
path: charts/nextcloud/
```

If you're using an Argo Project, then you'll also need to add https://github.com/jessebot/nextcloud-helm as an allowed source repo. Let me know if this works for you and we can work on getting that PR merged 🙏 Thanks!
update: I think we're actually good to merge the above PR based on #596 (comment), which would auto-close this Issue. If that happens, and it's still broken, we can absolutely re-open this Issue, or you can open a second one. Either way, happy to help :)
Many thanks for the improvement. I will test the fix, but it might take some time. I will reopen this issue or create a new one if something does not work.
Describe your Issue
When I enable HPA in the Helm chart, a single pod is initially scheduled (which is correct). As soon as I synchronize files, the CPU load of this pod rises above 60%, so the HPA tries to schedule new pods. These are scheduled, but are killed again immediately after container creation. As a result, there is never more than one pod running at a time, although there should be 5.
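For reference, the scaling behavior described above corresponds to an HPA roughly like the following; the target utilization and replica bounds are illustrative, not copied from the chart's values:

```yaml
# Hypothetical HPA matching the described behavior: scale the
# nextcloud Deployment up to 5 replicas at >60% average CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nextcloud
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nextcloud
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```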
Logs and Errors
There are no errors in the logs. The termination seems to be caused by Kubernetes itself.
Describe your Environment
Kubernetes distribution: k3s
Helm Version (or App that manages helm): ArgoCD version v2.10.7+b060053
Helm Chart Version: 4.6.6
values.yaml
Additional context, if any
I found a possible solution for this issue. As mentioned in this StackOverflow article, the `replicas` parameter cannot be used in the Deployment resource if an HPA definition is used.

I am using NFS as persistent storage with the NFS CSI driver. The PVC has RWX access mode.
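A common way Helm charts handle this is to omit `replicas` from the Deployment template whenever HPA is enabled, so the HPA alone controls the replica count. A sketch of what that could look like in the chart's deployment template; the value names (`.Values.hpa.enabled`, `.Values.replicaCount`) and the fullname helper are assumptions about this chart's layout:

```yaml
# deployment.yaml template sketch: only render spec.replicas when
# the HPA is NOT managing the replica count.
# .Values.hpa.enabled and .Values.replicaCount are assumed names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nextcloud.fullname" . }}
spec:
  {{- if not .Values.hpa.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
```

With this guard, a sync from Argo CD no longer sees a `replicas` field in the rendered manifest, so there is nothing for it to revert after the HPA scales the Deployment.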