Ensure environment variables map is ordered #29
Conversation
Nice work!
This is good to add regardless: it makes it easier to compare template runs and mostly organizes the resulting variables by purpose (kong.conf naming conventions usually use common prefixes for related functionality).
It doesn't completely address the Job conflict issue, though. That stems from the init-migrations Job having no hooks and thus never getting deleted. This change unblocks upgrades if the configuration in env is unchanged, but any change there will produce a different template result that still blocks. My standard practice when running into this is to manually delete the init-migrations Job.
I forget if there's a reason we don't use a hook here--it seems like we should be able to use post-install, allowing the various wait-for initContainers to hold Kong startup until migrations complete, but maybe there's something I'm missing there. IIRC there was some reason we handle this a bit unusually, but I think that was more about not making this an initContainer itself. @hbagdi do you recall?
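For concreteness, a hook-based setup might look something like this minimal sketch (the Job name, image, and migration command are illustrative, not what the chart ships):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-kong-init-migrations  # hypothetical name
  annotations:
    # Run the Job as a Helm hook rather than as regular chart output.
    "helm.sh/hook": post-install
    # Delete the Job once it succeeds, so the next upgrade can
    # recreate it without hitting the immutable spec.template error.
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: migrations
          image: kong  # illustrative; use the chart's image values
          command: ["kong", "migrations", "up"]
```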
If we indeed cannot make this a hook with a delete policy, we should be able to use the TTL mechanism for finished Jobs (https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#ttl-mechanism-for-finished-jobs) instead. That's marked as alpha, but it exists in the current version and has for a while. We may want to review whether there are outstanding shortcomings in the current implementation.
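If we went the TTL route instead, the sketch would be roughly as follows (again with illustrative names; `ttlSecondsAfterFinished` requires the alpha `TTLAfterFinished` feature gate to be enabled on the cluster):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-kong-init-migrations  # hypothetical name
spec:
  # Garbage-collect the finished Job after this many seconds,
  # freeing the name for the next upgrade's Job.
  ttlSecondsAfterFinished: 60
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: migrations
          image: kong  # illustrative; use the chart's image values
          command: ["kong", "migrations", "up"]
```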
Since we are dealing with maps, there is zero guarantee that the output YAML will be the same between runs. In most cases this is not an issue, but the env map is used by Jobs, where `spec.template` is immutable: if the env list order changes between renders, the upgrade fails. So in the construction of the map we sort the list of keys, making the rendered output deterministic.
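A minimal sketch of the idea in a Helm template (assuming the env vars come from a dict like `.Values.env`; the chart's actual value path and naming may differ):

```yaml
{{- /* Iterate the keys in sorted order so the rendered env
       list is identical between template runs. */}}
env:
{{- range $key := keys .Values.env | sortAlpha }}
  - name: {{ $key }}
    value: {{ index $.Values.env $key | quote }}
{{- end }}
```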
Force-pushed from f84e85d to 3a51963.
Branch rebased.
This has been on the to-do list for a while. We should run init-migrations only on install and delete the Job on success. I can't recall if we had that before and removed it for some reason. Git spelunking would help here.
Related issue: Upgrade fails when `runMigrations: true`

When running a standard `helm upgrade`, it fails with an error message about the Job's immutable `spec.template`. The easy workaround was to set `runMigrations: false`, but I wanted to investigate this and see if we could run the upgrade cleanly without changing values.