
Updating a project's environment variables doesn't actually update those variables within running containers #1728

Closed
cdchris12 opened this issue Mar 17, 2020 · 4 comments · Fixed by #1923

@cdchris12
Contributor

Describe the bug
When an environment variable is created or updated within a project, running pods are not redeployed. This is because the env vars are only changed in the lagoon-env configmap, which the pods merely reference. Since the configmap is not part of the pod spec itself, the pod spec hasn't actually changed and no redeploy is triggered.
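For illustration, here is a minimal sketch (not Lagoon's exact generated manifests) of how a container consumes that configmap: the pod spec only references lagoon-env by name, so changing the configmap's data leaves the pod spec untouched and nothing is rolled.

```yaml
# Sketch only: a container pulling its environment variables from the
# lagoon-env configmap. The pod spec merely references the configmap by name,
# so editing the configmap's data does not change the pod spec and no
# redeploy/rollout is triggered.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      envFrom:
        - configMapRef:
            name: lagoon-env
```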

To Reproduce
Steps to reproduce the behavior:

  1. Create a new environment variable with a global scope
  2. See that the newly created environment variable is not reflected in any of the project's containers.

Expected behavior
When environment variables are updated, containers should be redeployed accordingly to reflect any updates to those variables.

@cdchris12 cdchris12 added the 2-build-deploy Build & Deploy subsystem label Mar 17, 2020
@Schnitzel
Contributor

I don't think we want to automatically redeploy the containers if an environment variable is added, as this could cause two issues:

  1. if you add 10 environment variables it would generate 10 deployments, which is rather weird
  2. If you add an environment variable to a project, we would need to redeploy all environments, which could cause a LOT of issues.

Therefore I would keep the requirement that environment variable changes only take effect on the next deployment (whether triggered via push or manually via API/UI): during that deployment, the system checks whether the environment variables have changed and, if so, triggers a redeploy of all containers.
A possible solution for this would be to inject a hash of the configmap content as an annotation into every deploymentconfig/statefulset/daemonset pod template, which would then automatically trigger a redeploy whenever the hash changes.
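A rough sketch of that idea, assuming the hash is computed during the Lagoon build and written into the pod template (the annotation key below is hypothetical); this is the same pattern Helm charts commonly use with a checksum/config annotation:

```yaml
# Sketch only: injecting a checksum of the lagoon-env configmap content into
# the pod template. If the configmap content changes, the checksum changes,
# the pod template changes, and Kubernetes rolls the pods automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        # hypothetical annotation key; the value would be e.g. a sha256 over
        # the rendered configmap data, computed at deploy time
        lagoon.sh/configMapSha: "<sha256 of lagoon-env data>"
    spec:
      containers:
        - name: app
          image: example/app:latest
          envFrom:
            - configMapRef:
                name: lagoon-env
```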

@AlexSkrypnyk
Contributor

@Schnitzel
Could you please explain how this currently works, i.e. what should be done after adding a variable using the lagoon CLI? Thank you.

@Schnitzel
Contributor

Schnitzel commented Apr 16, 2020

@AlexSkrypnyk
Currently it works like this:

  • adding an environment variable via the lagoon CLI does not trigger a redeployment of the environment; the variable is just stored in the Lagoon API.
  • on every deployment of an environment, Lagoon loads the current environment variables from the Lagoon API and stores them in a configmap in Kubernetes (sketched after this list).
  • pods load the environment variables from this configmap.
  • for a pod to pick up updated environment variables from this configmap, it needs to be restarted.
  • pods are only restarted during a deployment if the underlying docker images have changed (for performance reasons) OR if environment_variables.git_sha: 'true' is set and the git SHA has changed.
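As a rough illustration of the flow above, the configmap Lagoon regenerates on each deployment might look like this (the variable names and values are made up):

```yaml
# Sketch only: the lagoon-env configmap that Lagoon (re)writes on every
# deployment from the variables stored in the Lagoon API. Pods consume it via
# envFrom, so they only see new values after they have been restarted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: lagoon-env
data:
  LAGOON_PROJECT: my-project       # hypothetical
  MY_GLOBAL_VARIABLE: some-value   # hypothetical
```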

Therefore:

  • adding an environment variable via the lagoon CLI will not cause the pods to restart automatically, not immediately and not on the next deployment either.
  • In order for pods to see new environment variables, you need to do one of the following:

a) force a change in the docker images (add a new file, change a file, etc. - just an empty commit might not produce a new docker image, as the .git folder may be excluded via .dockerignore files in the repo)
b) make an empty commit and set environment_variables.git_sha: 'true' in the .lagoon.yml (see https://lagoon.readthedocs.io/en/latest/using_lagoon/lagoon_yml/#environment_variablesgit_sha and the sketch below); this causes Lagoon to restart the pods every time the git SHA changes (which you can force via an empty git commit)
c) ask the amazee.io support team to restart the pods manually for you.

Options a) and b) are the preferred solutions.
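For option b), the .lagoon.yml setting referenced above looks like this:

```yaml
# .lagoon.yml
environment_variables:
  git_sha: 'true'
```

With that set, an empty commit (git commit --allow-empty) changes the git SHA and forces the pods to be restarted on the next deployment.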

@seanhamlin
Contributor

This issue has bitten a large in-region customer, and I would be a fan of seeing this prioritised. I like the idea of storing a hash of the configmap, and then redeploying all pods on the next deploy if the hash has changed.
