Facilitate ConfigMap rollouts / management #22368

Open · bgrant0607 opened this issue Mar 2, 2016 · 214 comments

@bgrant0607 (Member) commented Mar 2, 2016

To do a rolling update of a ConfigMap, the user needs to create a new ConfigMap, update a Deployment to refer to it, and delete the old ConfigMap once no pods are using it. This is similar to the orchestration Deployment does for ReplicaSets.

One solution could be to add a ConfigMap template to Deployment and do the management there.

Another could be to support garbage collection of unused ConfigMaps, which is the hard part. That would be useful for Secrets and maybe other objects, also.

cc @kubernetes/sig-apps-feature-requests
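
For illustration, a minimal sketch of that manual pattern, with a hypothetical app and a version suffix baked into the ConfigMap name; rolling out a config change then means creating my-app-config-v2, pointing the Deployment at it, and deleting -v1 once no pods reference it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-v1          # hypothetical; the suffix changes on every config edit
data:
  config.toml: |
    [server]
    host = "http://example.com"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: example/app:1.0
        volumeMounts:
        - name: config
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: my-app-config-v1  # rewritten to -v2 as part of the rollout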

@bgrant0607 (Member Author) commented Mar 23, 2016

cc @pmorie

@thockin (Member) commented Mar 23, 2016

This is one approach. I still want to write a demo, using the live-update feature of configmap volumes to do rollouts without restarts. It's a little scarier, but I do think it's useful.

bgrant0607 added this to the next-candidate milestone (Mar 23, 2016)

@bgrant0607 (Member Author) commented Mar 23, 2016

@thockin Live update is a different use case than what's discussed here.

@therc (Contributor) commented Mar 23, 2016

I think live updates without restarts might fall under my issue, #20200.

@bgrant0607 (Member Author) commented Mar 25, 2016

@caesarxuchao @lavalamp: We should consider this issue as part of implementing cascading deletion.

@bgrant0607 (Member Author) commented Mar 25, 2016

Ref #9043 re. in-place rolling updates.

@lavalamp (Member) commented Mar 25, 2016

Yeah I think it should be trivial to set a parent for a config map so it automatically gets cleaned up.

(Why not just add a configmap template section to deployment anyway? Seems like a super common thing people will want to do.)
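
Purely to illustrate that idea, a hypothetical sketch; there is no configMapTemplates field in the Deployment API, and the shape below is made up:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  # hypothetical field: config templates stamped out per revision, the same
  # way the pod template is versioned into ReplicaSets today
  configMapTemplates:
  - metadata:
      name: my-app-config          # would get a per-revision suffix, e.g. my-app-config-3
    data:
      mysetting: v1
  template:
    spec:
      containers:
      - name: app
        image: example/app:1.0
        envFrom:
        - configMapRef:
            name: my-app-config    # rewritten to the generated name in each ReplicaSet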

@caesarxuchao (Member) commented Mar 25, 2016

@lavalamp, I guess you mean we can set ReplicaSets as the parents of a ConfigMap, and delete the ConfigMap when all the ReplicaSets are deleted?

@kargakis (Member) commented Mar 30, 2016

Recent discussion:
https://groups.google.com/forum/#!topic/google-containers/-em3So0KBnA

Thinking out loud: In OpenShift we have the concept of triggers. For example when an image tag is referenced by a DeploymentConfig and there is a new image for that tag, we detect it via a controller loop and update the DeploymentConfig by resolving the tag to the full spec of the image (thus triggering a new deployment since it's a template change). Could we possibly do something similar here? A controller loop watches for configmap changes and triggers a new deployment (we would also need to support redeployments of the same thing since there is no actual template change involved - maybe by adding an annotation to the podtemplate?)
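
A common workaround along these lines today is a checksum annotation on the pod template; a sketch, assuming the hash is computed by whatever renders the manifests (CI, a templating tool, etc.): any change to the annotation changes the pod template, so the Deployment rolls out a new ReplicaSet.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # e.g. sha256 of the ConfigMap data; a change here forces a rollout
        checksum/config: "9f86d081884c7d65..."
    spec:
      containers:
      - name: app
        image: example/app:1.0
        volumeMounts:
        - name: config
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: my-app-config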

@bgrant0607 (Member Author) commented Mar 30, 2016

Fundamentally, there need to be multiple ConfigMap objects if we're going to have some pods referring to the new one and others referring to the old one(s), just as with ReplicaSets.

@rata (Member) commented Mar 30, 2016

(I posted the original mail in the thread on the Google group.)

I think triggering a new deployment is the best way, too, because if the config has a syntax error or something similar, the new pods hopefully won't start and the deployment can be rolled back (or, in less common cases I suspect, you could even do a canary deployment of a config change).

But I'm not sure whether the ConfigMap should be updated in place, as you propose, or whether it should be a different one (for kube internals, at least). If you push a config update with a syntax error, a pod is taken down during the deployment, a new one comes up and fails, and now there is no easy way to roll back because the ConfigMap has already been updated. So you probably need to update the ConfigMap again and do another deploy. If it is a different ConfigMap, IIUC, the rollback can be done easily.

@rata (Member) commented Apr 1, 2016

Sorry to bother again, but can this be tagged for milestone v1.3 and, maybe, a lower priority?

@rata (Member) commented Apr 6, 2016

@bgrant0607 ping?

@thockin (Member) commented Apr 6, 2016

What work is needed, if we agree Deployment is the best path?

bgrant0607 modified the milestones: v1.3, next-candidate (Apr 6, 2016)

@bgrant0607 (Member Author) commented Apr 6, 2016

@rata Sorry, I get zillions of notifications every day. Are you volunteering to help with the implementation?

@thockin We need to ensure that the parent/owner on ConfigMap is set to the referencing ReplicaSet(s) when we implement cascading deletion / GC.
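
For concreteness, a sketch of what that ownership could look like once OwnerReferences exist; the names and UID below are made up:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-v2
  ownerReferences:
  - apiVersion: apps/v1                           # ReplicaSets lived in extensions/v1beta1 at the time
    kind: ReplicaSet
    name: my-app-7d4b9cbb9d                       # the generated ReplicaSet that references this ConfigMap
    uid: 00000000-0000-0000-0000-000000000000     # placeholder; the GC matches owners by UID
data:
  mysetting: v2

With that set, the garbage collector could delete the ConfigMap once the last owning ReplicaSet is gone.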

@rata (Member) commented Apr 6, 2016

@bgrant0607 no problem! I can help, yes. Not sure I can get time from my work and I'm quite busy with university, but I'd love to help and probably can find some time. I've never really dealt with kube code (I did a very simple patch only), but I'd love to do it :)

Also, I guess that a ConfigMap can have several owners, right? I think right now it can be used by several RSs, and that should be taken into account when doing the cascading deletion/GC (although maybe that's obvious).

Any pointers on where to start? Is someone willing to help with this?

PS: @bgrant0607 sorry the delay, it was midnight here when I got your answer :)

@pmorie (Member) commented Apr 6, 2016

@rata

But I'm not sure if the configmap should be updated, as you propose, or if it should be a different one (for kube internals, at least).

If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.

Also, I guess that a ConfigMap can have several owners, right?

@bgrant0607 I also have the same Q here -- I think we will need to reference-count configmaps / secrets since they can be referred to from pods owned by multiple different controllers.

I think right now it can be used in several RSs and that should be taken into account when doing the cascade deletion/GC (although maybe is something obvious).

Cascading deletion has its own issues: #23656 and #19054

@caesarxuchao (Member) commented Apr 6, 2016

@rata, I'm working on cascading deletion and am putting together a PR that adds the necessary API, including the "OwnerReferences". I'll cc you there.

@rata (Member) commented Apr 6, 2016

@caesarxuchao thanks!

@rata (Member) commented Apr 6, 2016

If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.

Sure, but I guess the same approach should work for both, right?

I imagine, for example, the "internal" ConfigMaps using a name like -v<kube-version/commit hash>. This way, when you upgrade, the old ConfigMap becomes orphaned and should be deleted, right? Or am I missing something?

I think this can work for both.

Cascading deletion has its own issues: #23656 and #19054

Oh, thanks!

@bgrant0607 (Member Author) commented Apr 10, 2016

@caesarxuchao @rata We'll likely need a custom mechanism in Deployment to ensure that a referenced ConfigMap is owned by the generated ReplicaSets that reference it.

@kfox1111 commented Oct 6, 2020

That's what the KEP proposes. You create a ConfigMap and you mark your Deployment as wanting to snapshot and watch it.

Then, if the ConfigMap ever changes, the Deployment automatically kicks off a new version, just like when you change the Deployment itself.

@conrallendale commented Oct 6, 2020

Hmm, honestly I don't follow the KEP, but are you saying that modifying the ConfigMap would create another ReplicaSet? I don't think that is a good idea. My idea was only to have a "ConfigMapGenerator" object that works only as a "provider". In the Deployment (or DS, or STS), instead of using a ConfigMap you'd use the ConfigMapGenerator. Only when you change the Deployment would the ReplicaSet be created (and the ConfigMap coupled with it). If you change the Deployment again, another ReplicaSet and another ConfigMap would be created. The ConfigMaps would be garbage collected with the ReplicaSets.

@kfox1111 commented Oct 6, 2020

Hmm... if the KEP is unreadable, that's a problem. We should figure out how to fix it.

Let's walk through what the KEP says (or at least what I attempted to say) with a more concrete example. Say I upload a ConfigMap:

...
metadata:
  name: foo
data:
  mysetting: v1

and a Deployment:

...
spec:
  volumes:
  - name: foo
    configMap:
      name: foo
      watch: true
      snapshot: true

I'd get a second configmap:

metadata:
  name: foo-replicaset1-xxxxx (or something)
data:
  mysetting: v1

and I'd get a ReplicaSet with:

...
spec:
  volumes:
  - name: foo
    configMap:
      name: foo-replicaset1-xxxx

If I then updated the foo ConfigMap to have mysetting: v2, the Deployment would notice and create ConfigMap:

metadata:
  name: foo-replicaset2-xxxx
data:
  mysetting: v2

and a new ReplicaSet:

...
spec:
  volumes:
  - name: foo
    configMap:
      name: foo-replicaset2-xxxx

So the foo ConfigMap that the user can edit stays untouched, plus there are 2 immutable ConfigMaps associated with the 2 ReplicaSets. When the Deployment garbage collects a ReplicaSet, it also garbage collects the corresponding ConfigMap.

So as far as the user is concerned, they just make changes to their ConfigMap and it takes effect. They can also roll back a version of the Deployment and it will just work, since each version always refers to its own snapshot.

@jhgoodwin commented Oct 6, 2020

I disagree with adding flags to opt into the behavior most people expected from these objects in the first place.
Users who define a deployment + configmap expect it to update when either changes, so that the current state matches the config. It's unexpected that killing a pod is required to make the system match the current state.

If people want to use such flags to opt out, that's another story entirely, but I suspect no one will use them.

@djupudga commented Oct 6, 2020

I was into this a while back, but now I am not. The reason is that config and deployment are two separate resources, just as a pod is not a deployment. ConfigMaps are related to pods, not deployments. To facilitate this, I believe a new deployment resource type and a new controller would be needed: one that somehow encapsulates deployment and config.

@jhgoodwin commented Oct 6, 2020

If deployments can consume a resource to create things under their management, they should also demand a callback for when that resource changes; otherwise the things they claim to manage are not well managed.

@kfox1111 commented Oct 6, 2020

@jhgoodwin We can't break backwards compatibility. Adding the flags allows backwards compat. Maybe someday, when there is a Deployment/v2, the defaults can be flipped around to be better. But we can't do it in v1.

I believe killing the pod is the best approach in general. Otherwise you run the risk of having random configs across your deployment that you can't track. But you can implement that today with the existing behavior. This feature is all about having a clean, well-orchestrated, well-known state. The existing ConfigMap/Secret machinery doesn't easily enable that.

@lavalamp (Member) commented Oct 6, 2020

I think it'll be less confusing for future civilizations if y'all have this conversation on the KEP, I left my thoughts there :)

@kfox1111 commented Oct 6, 2020

@djupudga so are ReplicaSets. You can do everything that Deployments do with just ReplicaSets. What Deployments add is an orchestration layer around performing a rolling upgrade. I believe it is just an incomplete orchestration layer: it does the right thing as long as you don't have config files. If you do, it is inconsistent when rolling forward/backwards without a bunch of user interaction, which, IMO, is exactly what it was designed to avoid: making users do manual things.

Yes, you could add yet another orchestration layer on top of Deployments to handle it. But then, to teach a user to debug something, you have to explain that FooDep generates ConfigMaps, Secrets and Deployments, which generate ReplicaSets, which generate Pods. Honestly, I kind of prefer how DaemonSets/StatefulSets hide the versioning stuff. I kind of wish Deployments did that too. It's a detail most users shouldn't ever need to see.

@conrallendale commented Oct 13, 2020

I don't know Kubernetes internals, but how is a ConfigMap mounted inside a container? Is a directory created and then referenced on container creation? I've been thinking of something much simpler than what has been discussed here so far: create a field named "configMapRef" under the "volumes" field of the PodSpec, like the one in envFrom. This way, the ConfigMap files would be "copied" into the container on creation, before it starts. The files would be standalone, not linked to the ConfigMap, and consequently would not be read-only. Some logic would be needed for rollback/rollout, however. Would this be possible?

@kfox1111 commented Oct 13, 2020

The problem is that of new vs. old pods. Here's an example:
Say I upload version A of a config file.
Then I upload a Deployment with 3 replicas.
Then I update the ConfigMap and trigger a rolling upgrade. It starts to delete/create new pods with the new config. Then I notice something wrong and issue a rollback of the Deployment.
It will start deleting the new pods and launching pods of the old version, but they will still be pointing at the new ConfigMap. Some of the pods that stuck around will be in the config A state and some in the config B state, even though they are in the same ReplicaSet. There are other ways of reaching this state too, such as node evacuations.

This problem can't be solved at the pod level, as pods come and go. It has to be solved by keeping the config consistent and aligned with a ReplicaSet.

This can be worked around by the user by creating ConfigMaps with unique names, updating the Deployment to match the right ConfigMap name, and garbage collecting unused ConfigMaps, but that is a lot of work. That's what the proposal is about: let that toil be handled by k8s itself, the way it already handles Deployments -> ReplicaSets, rather than the user just using ReplicaSets.

@conrallendale commented Oct 13, 2020

Hmm, so is this what happens with envFrom? In the ReplicaSet there is only a reference to the ConfigMap. I had the misconception that the envFrom config would be converted in the ReplicaSet to explicit env entries.

@conrallendale commented Oct 13, 2020

I've been thinking of volume types like:

A:

configMap:
   name: myConfigMap

B:

configMapRef:
  name: myConfigMap

C:

configMapLiterals:
  config.toml: |
    [server]
    host = "http://example.com"

A is the current case. Deployments would accept all 3 types; ReplicaSets only A and C. On Deployment apply, B would be converted to C in the ReplicaSet.

Just an idea =)

Edit: Obviously, I know that the volumes field belongs to the PodSpec, so the "only A and C" restriction would just be a validation or something like that.

@kfox1111 commented Oct 13, 2020

Hmm, so is this what happens with envFrom? In the ReplicaSet there is only a reference to the ConfigMap. I had the misconception that the envFrom config would be converted in the ReplicaSet to explicit env entries.

envFrom is only used by a pod. Deployment/ReplicaSet don't do anything with it. As far as I know, Deployment/ReplicaSet currently only touch the pod's metadata section a bit.

@conrallendale commented Oct 13, 2020

So this doesn't create a "state", right? I don't see a use case where someone would want this behavior. For volumes, OK, you can have live reload when the config file is updated. But if envs don't have live reload, why keep the reference instead of converting to env literals? By chance, is "no transformation of the PodSpec between Deployments and ReplicaSets" a hard requirement in Kubernetes?

@conrallendale commented Oct 13, 2020

Implementing a "configMapLiterals" or simply "literals" volume type would solve this issue. It would be created like an EmptyDir but with some files defined inline. No ConfigMap would be created, so there would be no need for a garbage collector.

This could be implemented first, and later the "configMapRef" I described before. With configMapRef it would be possible to reference the ConfigMap from many Deployments. Another idea would be not to create a configMapRef in the PodSpec, but a field on DeploymentSpec indicating which ConfigMaps must be converted to literals. Something like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: makito-grafana
spec:
  replicas: 5
  convertConfigMapToLiterals:
   - myConfigMap1
   - myConfigMap3
  [...]

This way, when creating ReplicaSets from Deployments, ConfigMap references in envFrom via configMapRef would be converted to "env" fields, and ConfigMap references in volumes would be converted to the "literals" volume type.

@kfox1111 commented Oct 13, 2020

env literals only work well if you've put a lot of effort inside a container into converting every config option to an environment variable, rather than just passing the whole config through as a file. I prefer the latter: significantly less effort and better compatibility at the same time.

@conrallendale commented Oct 13, 2020

I don't think I get your point. Just to be clear, what I am saying is: if you create something like:

  convertConfigMapToLiterals:
   - myConfigMap
  template:
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: myConfigMap

with a ConfigMap like:

data:
  key1: value1
  key2: value2

will produce a ReplicaSet like:

  template:
    spec:
      containers:
      - env:
        - name: key1
          value: value1
        - name: key2
          value: value2

@kfox1111 commented Oct 13, 2020

Yup. I'm just saying that, generally, I avoid using env variables at all in my containers, as it gets you into a config anti-pattern where you end up writing a bunch of code in your container to read all the env vars and copy them into a config file that the program reads, and then a bunch more code to test that that code works reliably. If you just mount a ConfigMap as a volume, you eliminate all the intermediate logic and just let the program be configured directly. No mapping code needed.

Note, I containerize a lot of existing code rather than writing new code, so this may not apply so much to new code.

@conrallendale commented Oct 13, 2020

Hey, I was talking only about the env part, not the volume part. Like I said before, this convertConfigMapToLiterals would convert both the envFrom ConfigMap and the ConfigMap volume type to the Literal volume type (yet to be created). So a Deployment like:

  convertConfigMapToLiterals:
   - myVolumeConfigMap
  template:
    spec:
      volumes:
      - name: config
        configMap:
          name: myVolumeConfigMap

with a ConfigMap like:

data:
  config.toml: |
    [server]
    host = "http://example.com"
  config.local.toml: |
    [server]
    host = "http://anotherexample.com"

would be converted in the ReplicaSet to:

      volumes:
      - name: config
        Literal:
          config.toml: |
            [server]
            host = "http://example.com"
          config.local.toml: |
            [server]
            host = "http://anotherexample.com"

@kfox1111 commented Oct 13, 2020

Ah, OK. I misunderstood, sorry.

That could work. The one main drawback I see to the ConfigMap-literal thing is that if you had a large number of pods, you'd be duplicating your config into etcd once per pod. But maybe that's the tradeoff we need to make to get someone on the k8s team to sign off on it?

@kfox1111 commented Oct 13, 2020

I guess there's one other issue... whenever I talk about ConfigMaps, I mean ConfigMaps or Secrets. A literal would work for a ConfigMap, but it's maybe not a good idea for Secrets, as that part of etcd isn't encrypted at rest.

@mrak commented Oct 14, 2020

The problem is that of new vs. old pods. Here's an example:
Say I upload version A of a config file.
Then I upload a Deployment with 3 replicas.
Then I update the ConfigMap and trigger a rolling upgrade. It starts to delete/create new pods with the new config. Then I notice something wrong and issue a rollback of the Deployment.
It will start deleting the new pods and launching pods of the old version, but they will still be pointing at the new ConfigMap. Some of the pods that stuck around will be in the config A state and some in the config B state, even though they are in the same ReplicaSet. There are other ways of reaching this state too, such as node evacuations.

This problem can't be solved at the pod level, as pods come and go. It has to be solved by keeping the config consistent and aligned with a ReplicaSet.

This can be worked around by the user by creating ConfigMaps with unique names, updating the Deployment to match the right ConfigMap name, and garbage collecting unused ConfigMaps, but that is a lot of work. That's what the proposal is about: let that toil be handled by k8s itself, the way it already handles Deployments -> ReplicaSets, rather than the user just using ReplicaSets.

This is exactly the issue we deal with. We have resorted to appending the application's release version to the names of all of the ConfigMaps that an application uses. This results in a lot of old ConfigMap clutter that we have to build additional machinery around to clean up, but it gives us consistent configuration expectations between each rollout and potential rollbacks.

@acobaugh commented Oct 14, 2020

Has anyone mentioned https://github.com/stakater/Reloader yet? We've been using that with great success for the last ~2 years. It Just Works, and you forget it's even running.
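
For reference, wiring Reloader up is roughly the following; a sketch, with the annotation name taken from its README, so verify it against the version you deploy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical
  annotations:
    # Reloader watches the ConfigMaps/Secrets referenced by this workload
    # and triggers a rolling restart when any of them change.
    reloader.stakater.com/auto: "true"
spec:
  ...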

@conrallendale commented Oct 14, 2020

I guess there's one other issue... whenever I talk about ConfigMaps, I mean ConfigMaps or Secrets. A literal would work for a ConfigMap, but it's maybe not a good idea for Secrets, as that part of etcd isn't encrypted at rest.

Correct me if I'm wrong, but I don't think there is a use case where someone wants to restore an old Secret. I'm assuming that Secrets are only used for credentials, certs, and things like that. If someone is using Secrets to manage configs, then they are being used wrongly IMO. Generally I use ConfigMaps to generate all the necessary config and use env vars inside those configs, and then generate the env vars from the Secrets.

@conrallendale commented Oct 14, 2020

Has anyone mentioned https://github.com/stakater/Reloader yet? We've been using that with great success for the last ~2 years. It Just Works, and you forget it's even running.

If I have understood correctly, this doesn't solve this issue. Reloader only recreates the pods on ConfigMap/Secret changes, right? If so, then no rollout/rollback is supported. By the way, I think Reloader, kustomize and other tools would benefit from this change IMO.

In particular, I think we have to choose one of the approaches proposed in this issue and take it forward. I proposed it, so I'm a little biased, but I think that the creation of a "Literal" volume type is the simplest approach proposed here and solves all the cases mentioned.

It would be interesting if all the participants here gave their opinions on this approach, and even more interesting if the Kubernetes team told us whether this is even possible.

@bgrant0607 @lavalamp @kargakis
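
For context, kustomize's configMapGenerator already automates the unique-name half of this workflow; a rough sketch (file names hypothetical):

# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: my-app-config
  files:
  - config.toml

kustomize appends a hash of the content to the generated ConfigMap's name and rewrites the references in deployment.yaml, so each config change rolls out as a new ReplicaSet; the old generated ConfigMaps still need to be garbage collected, which is the part this issue asks Kubernetes to own.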

@kfox1111 commented Oct 14, 2020

Correct me if I'm wrong, but I don't think there is a use case where someone wants to restore an old Secret. I'm assuming that Secrets are only used for credentials, certs, and things like that. If someone is using Secrets to manage configs, then they are being used wrongly IMO. Generally I use ConfigMaps to generate all the necessary config and use env vars inside those configs, and then generate the env vars from the Secrets.

I don't typically use env vars, as they don't always play very well with existing software. Often existing software also mixes config and secrets into the same file and doesn't support reading from env within the config file. So quite a few times I've needed to put the entire config in a Secret rather than a ConfigMap, because at least part of the config is a secret, mandating the use of a Secret over a ConfigMap. This is often the case when connection strings to, say, databases are used. They often mix the server and port info into a URL along with the password: mysql://foo:bar@servername. Sometimes it's convenient to assemble the config in an initContainer from both a ConfigMap and a Secret, but not always.

So it's not so simple IMO. If you're designing all-new software, then it's easy to keep the delineation between ConfigMaps and Secrets clean. When you are dealing with existing software, it's often not so clean and not easily changed.

So I don't really see much real difference in usage between a Secret and a ConfigMap, other than that if a whole config, or any bit of it, is sensitive in any way, it belongs in a Secret.

@conrallendale commented Dec 12, 2020

Correct me if I'm wrong, but I don't think there is a use case where someone wants to restore an old Secret. I'm assuming that Secrets are only used for credentials, certs, and things like that. If someone is using Secrets to manage configs, then they are being used wrongly IMO. Generally I use ConfigMaps to generate all the necessary config and use env vars inside those configs, and then generate the env vars from the Secrets.

I don't typically use env vars, as they don't always play very well with existing software. Often existing software also mixes config and secrets into the same file and doesn't support reading from env within the config file. So quite a few times I've needed to put the entire config in a Secret rather than a ConfigMap, because at least part of the config is a secret, mandating the use of a Secret over a ConfigMap. This is often the case when connection strings to, say, databases are used. They often mix the server and port info into a URL along with the password: mysql://foo:bar@servername. Sometimes it's convenient to assemble the config in an initContainer from both a ConfigMap and a Secret, but not always.

So it's not so simple IMO. If you're designing all-new software, then it's easy to keep the delineation between ConfigMaps and Secrets clean. When you are dealing with existing software, it's often not so clean and not easily changed.

So I don't really see much real difference in usage between a Secret and a ConfigMap, other than that if a whole config, or any bit of it, is sensitive in any way, it belongs in a Secret.

Sorry for the late response. I was in the middle of a job transition, so I didn't have much time to follow this issue.

I don't think we're going to find a perfect solution here. It's very clear at this point that the kube team is not open to such a big change in the way ConfigMaps work (generating new ConfigMaps automatically), which would be the best solution.

My proposal here is to select the simplest solution, even if applications have to adapt. So a combination of the "literal" volume type with the use of Secrets as env vars would be the option to solve the config-change issue (change of version and rollouts).

Basically, the only modification necessary would be to create the "literal" volume type, so that config could be specified inline. Is it a perfect solution? Of course not, but it would be simple to add (I think), and would allow software to be adapted to work with it.

@2rs2ts (Contributor) commented Dec 15, 2020

@conrallendale I don't think that would solve the problem, especially when configuration data gets larger than what can fit in a single k8s object, but it would be an improvement for sure and I would support it as a first iteration.
