Add initializer spec #175
Conversation
Thanks, @13013SwagR 🙂 So one case is with dapr, described in #164, but I'm not fully clear whether dapr usage really depends on a separate spec for the initializer, or whether it was resolved with the container specification fix. I'd like to confirm this first. Sadly, I haven't had time to test this myself yet. Are there other cases? PS: this discussion is probably better suited to an issue than a PR 🙈
After some thinking, I am not sure the custom initializer configuration is required... In conclusion, I think the initializer Job should really be just an init container on the starter Pod, which would solve both my issue and the dapr one for the initializer.
As I described there, "the problem" here is related to how Kubernetes determines that a given Job is completed. Since dapr is a sidecar (i.e. one more container within the same pod) and the annotations for the runners and initializers are the same, the sidecar will be present in both runners and initializers. The big difference is that with runners I'm able to call the dapr shutdown using its API (see one example at dapr), but since the initializers are totally out of my control I can't call shutdown. That means that, from the Kubernetes point of view, the initializer Job as a whole is still pending even when the initializer container has actually executed successfully (because daprd is still running). Furthermore, the operator leverages the Kubernetes status to make sure the Job is completed, see here. Conclusion: the operator will not work when the initializer Job has sidecar containers, because the Job isn't considered complete until all of its containers have exited with code 0.
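The completion semantics described above can be illustrated with a minimal sketch. The `dapr.io/enabled` annotation is the standard dapr injection toggle, but the names and commands here are hypothetical, not taken from the operator:

```yaml
# Sketch: a Job whose pod gets a dapr sidecar injected.
# The init container's logic exits 0, but the injected daprd
# keeps running, so the pod never terminates and the Job
# never reports the Complete condition.
apiVersion: batch/v1
kind: Job
metadata:
  name: k6-initializer          # hypothetical name
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true" # injects the daprd sidecar
    spec:
      restartPolicy: Never
      containers:
        - name: k6
          image: grafana/k6:latest
          command: ["k6", "inspect", "/test/script.js"]
          # exits 0 on success, but the daprd container remains
          # running, keeping the Job in the Active state forever
```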
Thank you for the clarifications, @mcandeia!
This part is unlikely to change even with resolution of #138. So yes, some alternative is clearly needed.
This pod is required for the k6 Cloud output support. It can't be merged with the starter: these are clearly separate stages that must be executed sequentially. I've tested this with Envoy / scuttle: there's actually another bug, which was also reported recently in #179 - I added the fix in PR #182. As for the request to make the initializer job separate, we have three cases so far:
Given that, I think it makes sense to make the initializer job separate at this point in time. A more complex approach would be to run the initializer job only for specific cases, but that requires some more thinking, especially given the limitation with the logs (described above with Envoy).
@13013SwagR, this PR could use a rebase 🙂
Otherwise, the code changes themselves seem fine, but the main drawback of this change is that it forces people to specify the initializer spec where they didn't have to before. This is true for basically all fields: for instance, if one was passing imagePullSecrets in the runner spec, they might now also need to pass it in the initializer spec. Ideally, we should re-use the runner spec when the initializer spec is absent.
I've added a workaround to achieve that: could you please try to incorporate it in your PR? It could use some more testing but by my initial tests, the workaround should work.
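For context, here is a minimal sketch of what a K6 resource with a separate initializer spec might look like under that fallback behaviour. The field names follow the runner/starter pattern of the CRD, but the exact fields and the secret name are assumptions for illustration:

```yaml
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 4
  script:
    configMap:
      name: k6-test
      file: test.js
  runner:
    image: grafana/k6:latest
    imagePullSecrets:
      - name: my-registry-secret   # hypothetical secret name
  # With the proposed fallback, omitting `initializer` entirely
  # would re-use the runner spec, so imagePullSecrets would not
  # need to be duplicated here.
  initializer:
    image: grafana/k6:latest
```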
Hello @13013SwagR, this initializer job issue was recently given a higher internal priority. No pressure, but could you please let me know whether you'll be able to work on this PR in the coming weeks? Thank you!
Hey, sorry for the late response; I will update in the next few days.
Co-authored-by: Olha Yevtushenko <yorugac@users.noreply.github.com>
Hey @yorugac,
Thank you @13013SwagR, for coming back to this!
It looks OK to me; will be merging in 👍
Description
This PR adds independent management of the initializer pod spec to the K6 object spec.
Motivation and Context
The initializer pod has different requirements than the runner pods. In my case, it doesn't need the Istio sidecar, but I would like to have the sidecar on the runners.
fixes #172
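As a sketch of this motivating use case: with a separate initializer spec, the standard Istio injection annotation can be toggled per pod type. The annotation is Istio's real injection switch; the surrounding field layout is an assumption for illustration:

```yaml
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-istio-sample
spec:
  parallelism: 2
  script:
    configMap:
      name: k6-test
      file: test.js
  runner:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"   # runners keep the mesh sidecar
  initializer:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"  # no sidecar, so the Job can complete
```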
How Has This Been Tested?
I only adapted the current unit tests to make them pass; I wanted to gauge the reception of the PR first. I can write a more complete unit test suite if required.
I ran a manual test in my own k8s setup with Istio.
Types of changes