How to restart crd/pods? #128
Comments
I see this in the log of the operator pod:
If you delete the pod, the updated resource will be applied (technically it is not the CRD per se, but a resource specified by the CRD, a CR). #22 touches on this. As pods get old and cycle, the new definition will take effect. #86 also touches on this, as a policy could be implemented to sort which pods to delete first (the oldest or the newest, idle ones). Hope this helps.
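A minimal sketch of that approach, using the same `<runner-pod-name>` placeholder as the question (the actual pod name comes from `kubectl get pods`):

```sh
# Delete a single runner pod; the operator's reconcile loop
# recreates it from the current CR spec, picking up the new config.
kubectl delete pod <runner-pod-name>

# Watch the replacement pod come up.
kubectl get pods -w
```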
Thanks @davidkarlsen. I could make it work by deleting with
Another option is to just delete the idle runners, which will make the operator pick up the new definition. Deleting the whole pool will of course also end with the runners being redefined, picking up your reconfiguration.
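A sketch of both options, assuming the pool CR is the one defined in the question's `crd.yml` and the pod names are placeholders:

```sh
# Option 1: delete only the idle runner pods; the operator
# recreates them with the updated definition.
kubectl delete pod <idle-runner-pod-1> <idle-runner-pod-2>

# Option 2: recreate the whole pool CR; every runner is torn
# down and redefined with the new configuration.
kubectl delete -f crd.yml
kubectl apply -f crd.yml
```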
Feel free to close if you are happy with the answers.
Hi!

We recently needed to update the container for the runner. After pushing that container and editing our crd.yaml for the runner pool (and applying the change with `kubectl apply -f crd.yml`), the new pod with the updated config wouldn't come up after deleting the previously existing pod with `kubectl delete pod/<runner-pod-name>`.

I then tried to delete the CRD from k8s and re-create it (with `kubectl delete -f crd.yml` and `kubectl create -f crd.yml`), however now I see:

(it used to say 1)

and no pod starts. Any ideas on how I should go about this update process? Thanks!
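If no pod starts after recreating the CR, a generic way to dig further (the operator deployment name below is a placeholder, not confirmed by this project):

```sh
# Inspect the custom resource's status and recent events.
kubectl describe -f crd.yml

# Check the operator's logs for reconcile errors.
kubectl logs deploy/<operator-deployment-name>
```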