Integrate third-party configuration management tools with our config distribution API #2068
Comments
Viper works completely without a config file. Viper can use defaults, config files, environment variables, remote config, and flags if available. All are optional.
@spf13 thanks for the clarification. Will update accordingly. If you have any other thoughts, feel free to add them.
I love that we are thinking about this type of stuff. From a Kubernetes point of view:
@jbeda I think the first generic feature would be to have an etcd/consul backend for k8s and make both usable from containers, so that a second cluster for both wouldn't be needed. Additionally, ACLs are needed to prevent config leakage and security issues when a container has access to the etcd/consul cluster. -> ACLs are most likely more an issue with etcd/consul themselves. I think the main need is better documentation and best practices on top of k8s for now, so that people arriving at k8s can use these guidelines to either build a usual docker cluster or jump right on k8s. For now we should try to find common ground on how to do things like config, secrets, and general best practices, so that we can improve the documentation, provide a guideline/format for docker containers, and later on integrate these more closely into k8s. As mentioned in #2030, mainly key rotation and seed secret injection from k8s.
An additional thought, which came to mind:
@stp-ip you dragged me over to an interesting idea: when you suggested having an etcd/consul backend for k8s, it occurred to me that it would be pretty simple to write a proxy for one of these two stores and service requests coming from pods while transparently encrypting/decrypting the values. Unencrypted data would never leave the pod's network, in theory never leaving the host at all. Just throw that in the pile of ways that we could solve these problems. It adds some fragility because the backing stores will most certainly evolve over time, causing code maintenance. I realize that there are a million ways to solve this problem, and @jbeda rightly suggests that the problem domain might not necessarily belong to k8s. I think I could argue both sides of that pretty effectively. But secrets and keys are a big deal, and there should certainly be good documentation of strong patterns if it ends up being a problem left to be solved outside of k8s proper. When left to our own devices, we programmers don't always make the most secure choices.
I'd like to provide some additional links that might not directly contribute to this discussion. I've filed a ticket for support for dynamic configuration at the systemd level via an ExecutionOneshot= parameter. There's also Stocker, which is a very fancy Docker-targeting repository for environment config. I don't know enough about it and where it would fit in with this ticket to know whether it should be an option up top, but perhaps others can better evaluate this candidate option.
I love the idea of a confdir volume type. The trick is in which convention we pick; all three have their downsides.
Going along with my proposal, I want to suggest a "solution": as we won't be able to enable every tool to work with a given application/environment, and as preferences for tools differ quite a bit, I'm proposing a more generic, abstracting solution.
My proposed solution is to go the route of using Data Volume Containers or the equivalent. Each docker base image relies on default configuration available inside the container in /custom/configuration/. If no volumes are mounted on /custom/, then the default files in /custom/* are used. Additionally, I imagine using the filesystem as a way to encourage config updates and guidelines. The structure for the filesystem:
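A minimal sketch of the fallback lookup this convention implies; the helper name and the exact default path are illustrative, not part of the proposal:

```python
import os

# Hypothetical helper: prefer a file from the mounted /custom volume,
# fall back to the image's baked-in defaults when nothing was mounted.
def resolve_config(name,
                   mounted_root="/custom/configuration",
                   default_root="/custom/configuration.default"):
    mounted = os.path.join(mounted_root, name)
    if os.path.exists(mounted):
        return mounted  # a Data Volume Container was mounted over /custom
    return os.path.join(default_root, name)  # default shipped in the image
```

An entrypoint script could call something like this per config file before starting the service.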
In the future k8s could provide an integrated way to generate such volumes, similar to providing git-based volumes. Any objections/suggestions for this best-practice idea, with a possible future agenda of integration into k8s?
As a sidenote: is specifying a tag for a Data Volume Container possible? So that I can say mount project1/nginx-conf:2014-10-10 in the pod definition?
Any ideas on how to listen for updates via an UPDATE file? A new config run touches UPDATE, the file update is noticed by the container (how?), and UPDATE is removed.
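One low-tech answer to the "noticed by the container (how?)" question is to poll the UPDATE file's mtime; a sketch under that assumption (the function name is made up):

```python
import os

# Hypothetical watcher step: report whether the UPDATE marker file has
# appeared or been touched since the last observed mtime.
def update_seen(path, last_mtime):
    try:
        mtime = os.stat(path).st_mtime
    except FileNotFoundError:
        return False, last_mtime  # no marker yet
    if last_mtime is None or mtime > last_mtime:
        return True, mtime        # marker appeared or was touched
    return False, mtime
```

A container loop would sleep, call this, and on True reload its config and delete UPDATE; on Linux, inotify would avoid the polling.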
@stp-ip confd does not require a config file, but does require that templates and template configs be stored under a configdir; the configdir can be specified via the command line.
When it comes to configuration for containers, I prefer to pull configs out of etcd or just use a configuration volume.
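For the pull-from-etcd route, the etcd v2 HTTP API (`GET /v2/keys/<key>`) returns a JSON document with the stored value under `node.value`; a hedged sketch of extracting it, where the endpoint and key below are examples only:

```python
import json

# Parse the JSON body of an etcd v2 key read and return the stored value.
def etcd_value(body):
    doc = json.loads(body)
    return doc["node"]["value"]

# Real usage (not executed here) would be roughly:
#   from urllib.request import urlopen
#   body = urlopen("http://127.0.0.1:4001/v2/keys/app/worker_processes").read()
#   workers = etcd_value(body)
```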
@kelseyhightower Thanks for the clarification on confd. With my recent suggestion at #2068 (comment) I think we have a generic way of accomplishing both: the usage of a preferred tool (confd, for example) and configuration volumes. It makes images much more modular and interchangeable and introduces separation between the configuration container and the service container.
This is a solid suggestion that covers most of the use cases I can imagine.
@kelseyhightower do you use a "data-only" container to hold the configuration per container, so for instance an nginx container will have its own nginx-config container? Why not
Are you grabbing a complete config file, e.g.
Re: convention vs. configuration for mounted volumes: if the infrastructure provides the abstraction for the volume changing, you don't necessarily need to define a convention (pod authors can specify the necessary binding). The real trick is in Docker decorating the volume definition of the image sufficiently to allow easier automation of that binding. How do things like config volumes play with pods and their templates? When I want to create the pod and parameterize the source of a config volume, how easy is that for the pod consumer?
@bketelsen allowing easy proxying/virtualization of an etcd key space for a particular application has a lot of advantages the larger your deployment grows - such as scaling the infrastructure cost of an etcd-per-app topology down to one etcd cluster serving N consumers. Kelsey had provided me some examples and it's something we've considered exposing. Being able to encrypt at rest would be ideal (esp. if the proxy was a component of your namespace, not part of shared infra).
This is the basic structure I imagined, taking into account #2030: If anything is unclear, I'm around.
On a sidenote:
An alternative proposal: distribute secrets to a trusted proxy instead of to the pods themselves. It won't cover all the cases, but might cover a lot of them.
I like the idea of decoupling configuration generation from the distribution and consumption mechanisms. We want to do that regardless of what the latter two mechanisms look like. Data volume containers were proposed in #831. That seems useful regardless of whether we use it for this. @smarterclayton had a good point about how configuration volumes would affect Pod and PodTemplate specifications. Specifying the configuration image version explicitly in pods improves predictability and transparency, but makes configuration deployment about as heavy-weight as deploying a whole new image. Command-line flags and environment variables passed via Docker have the same problems. If we wanted a way to update configuration volumes without starting a container, that would require a new in-place update mechanism. Do we want the Pod and PodTemplate specifications, and even bound pods (and Docker containers), to be updated in place? In-place updates improve application availability, but introduce a lot of complexity into the application management ecosystem. One could update the volume from inside the container.
@stp-ip Ack to your earlier comment; I am reviewing your linked docker proposal and a slew of other similar work, thanks.
The umbrella issue for security right now is #4029, with the sub proposals linked.
/subscribe and looking forward to dynamic configuration using etcd/consul |
Any update on this issue, or plans to have it included in the 1.1 k8s release?
Is there any progress on this topic? I'm wondering what the final strategy would be: managing this externally via salt, ansible, chef, etc., or leveraging the existing metadata stores like etcd or even consul.
Any progress or updates on this? The thread seems eerily quiet. Is it because there aren't enough resources to allocate, or because a reasonable solution wasn't found? As I see it, some of the bigger workflow improvements for k8s would be this issue and #23896.
ConfigMaps are what we recommend. Documented here: http://kubernetes.io/docs/user-guide/configmap/ Also, they can be dynamically updated. This fact isn't documented well. See:
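For readers arriving now, a minimal example of the recommended approach; the names and mount path here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    worker_processes 4;
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: config
    configMap:
      name: nginx-config
```

Files projected from a ConfigMap volume are updated in place when the ConfigMap changes; environment variables populated from a ConfigMap are not.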
This is obsolete |
This issue is based on the suggestions in #1553. I thought getting a clean overview for a renewed discussion was worth the additional issue.
Similar to secret distribution (#2030), there is a need to dynamically configure applications. Producing or changing configuration scripts via cat, sed, or other "hacky" tooling is a method I dislike, and I would love a cleaner way of supporting this.
Option 1: envconsul/etcdenv
Pros:
Cons:
Option 2: confd
Pros:
Cons:
Option 3: viper
Pros:
Cons:
Option 4: augeas
Pros:
Cons:
Option 5: tiller
Pros:
Cons:
Clarification:
Confd:
@kelseyhightower
Is a config file needed for confd to work, or could everything be done using command-line flags?
Can multiple config files be generated, and if yes, how?
Viper:
@spf13 @bketelsen
Is a config file needed for viper to work, or could everything be done using command-line flags?
-> The config file and various other configuration possibilities for viper are optional. It can be used directly via command-line flags.
Overall, I think the way to go would be to use confd with ENV as the default backend. Furthermore, containers should be able to understand a backend-switch ENV so that switching to etcd/consul is easily done on a per-container level at runtime. This enables easy testing and environment switching. With the addition of watchable configuration via confd, more dynamic setups are possible.
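The backend-switch ENV idea could look roughly like this in a container entrypoint; the variable names (CONFIG_BACKEND, CONFIG_BACKEND_NODES) are invented for illustration:

```python
import os

# Hypothetical selector: choose the configuration backend from a single
# environment variable, defaulting to plain ENV-based configuration.
def pick_backend(environ=None):
    environ = os.environ if environ is None else environ
    backend = environ.get("CONFIG_BACKEND", "env")
    if backend == "env":
        return ("env", None)
    if backend in ("etcd", "consul"):
        # e.g. CONFIG_BACKEND_NODES=http://10.0.0.1:4001
        return (backend, environ.get("CONFIG_BACKEND_NODES"))
    raise ValueError("unknown config backend: %s" % backend)
```

An entrypoint would then hand the chosen backend to confd (or whatever tool the image uses) before exec'ing the service.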
Looking forward to your thoughts and suggestions.