
How to restart crd/pods? #128

Closed
pcm32 opened this issue Dec 18, 2020 · 5 comments
pcm32 commented Dec 18, 2020

Hi!

We recently needed to update the container image for the runner. After pushing the new image and editing our crd.yaml for the runner-pool (and applying the change with kubectl apply -f crd.yml), the new pod with the updated config wouldn't come up after I deleted the previously existing pod with kubectl delete pod/<runner-pod-name>.

I then tried to delete the crd from k8s and re-create it (with kubectl delete -f crd.yml and kubectl create -f crd.yml), but now I see:

$ kubectl get GithubActionRunner/runner-pool
NAME          CURRENTPOOLSIZE
runner-pool

(it used to say 1)

and no pod starts. Any ideas on how I should go about this update process? Thanks!
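
For reference, the full sequence I ran was roughly (image and pod names are placeholders):

$ docker push <registry>/<runner-image>:<new-tag>   # push the updated runner container
$ kubectl apply -f crd.yml                          # apply the edited runner-pool resource
$ kubectl delete pod/<runner-pod-name>              # remove the old runner pod
# expected: a replacement pod with the new config; actual: no new pod came up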

pcm32 commented Dec 18, 2020

I see this in the log of the operator pod:

2020-12-18T00:34:01.854Z	INFO	controllers.GithubActionRunner	Reconciling GithubActionRunner	{"githubactionrunner": "default/runner-pool"}
2020-12-18T00:34:02.208Z	INFO	controllers.GithubActionRunner	Pods and runner API not in sync, returning early	{"githubactionrunner": "default/runner-pool"}
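
To see what the two sides disagree about, one can compare the runner pods in the cluster with the runners GitHub has registered (a sketch; the label selector, org name, and token are placeholders, not taken from this operator's docs):

$ kubectl get pods -l <runner-pool-label>    # runner pods the operator manages
$ curl -s -H "Authorization: token $GITHUB_TOKEN" \
    https://api.github.com/orgs/<your-org>/actions/runners    # runners registered on GitHub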

davidkarlsen commented Dec 18, 2020

If you delete the pod, the updated resource will be applied (technically it is not the CRD per se, but a resource specified by the CRD: a CR). #22 touches on this. As pods get old and cycle, the new definition will take effect. #86 also touches on this, as a policy could be implemented to decide which pods to delete first (the oldest or the newest idle ones).
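
In commands, the suggested flow is roughly (file and pod names are the ones from this thread):

$ kubectl apply -f crd.yml               # update the GithubActionRunner CR with the new spec
$ kubectl get pods                       # find the current runner pods
$ kubectl delete pod <runner-pod-name>   # delete an idle runner pod
# the operator reconciles and re-creates the pod from the updated spec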

Hope this helps.

@davidkarlsen davidkarlsen added the question Further information is requested label Dec 18, 2020
@davidkarlsen davidkarlsen self-assigned this Dec 18, 2020

pcm32 commented Dec 18, 2020

Thanks @davidkarlsen. I was able to make it work by deleting with kubectl delete GithubActionRunner/runner-pool, then changing the name of the GithubActionRunner in crd.yaml (say, to runner-pool-2) and re-applying; that worked. But apparently something was left in a stale state associated with the initial name after the deletes: trying with the previous name always produced the empty field (and no pods) for the action runner.
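
As commands, the workaround was (resource and file names as above):

$ kubectl delete GithubActionRunner/runner-pool    # remove the old CR
# edit crd.yaml: change metadata.name from runner-pool to runner-pool-2
$ kubectl apply -f crd.yaml                        # re-create it under the new name
$ kubectl get GithubActionRunner/runner-pool-2     # reports a CURRENTPOOLSIZE again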

davidkarlsen commented

Another option is to just delete the idle runners, which will make the operator pick up the new definition. Deleting the whole pool will of course also result in the runners being re-created and picking up your new configuration.

davidkarlsen commented

Feel free to close if you are happy with the answers.
