Develop a unified policy for command line flags vs config file options #1040
Comments
for reference:
- the original proposal that the kubelet went for: …
- PR about CLI flags overriding config (kubeadm init): …
- viper as a consideration: …
I'll have a more in-depth look at viper. The thing is that supporting everything both ways (config file or command line flags) is awkward. Having fewer command line flags is a good thing. My bet is to support only those that are likely to be overridden on a per-machine and/or per-execution basis (node names, certificate files, bootstrap tokens, etc.). My proposed implementation in kubernetes/kubernetes#66968 for kubeadm init follows to a large extent the approach undertaken by the kubelet. It's essentially a huge workaround over some of cobra's limitations. Also, no flags are deprecated; they are all simply imported as overrides of the config.
I think that before addressing the issue at the technical level (viper, kubelet-like or kubectl-like), it would be great to figure out a list of use cases where flags and config files should be used at the same time in kubeadm. This will help in understanding when and why we really need this.
.. from slack
I have spent some time today looking into viper. At first glance, I am not very impressed with it. Seems to me that if we try to stick with our current config model, we'll have to do some workarounds, for example when implementing … This may just be me, looking at a library which I haven't used at all, but it may just be better to stick to cobra and provide manual overloading of only selected flags (as opposed to all config flags).
as much as i looked at it, i can confirm that. there is also the option to YOLO the viper approach and deprecate the alias flags (i.e. those that we have right now).
cc @sysrich |
Indeed, and I fear I'm about to add another layer of complexity to the problem, but I hope in doing so we end up with a comprehensive solution.

I think there are actually 3 classes or tiers of "configuration", and we need to deal with them in a structured and consistent way. I'd describe the two being discussed so far as "user supplied config files" and "user supplied command line flags".

My strong personal feeling is that all of the options we want the user to be able to define must be able to be set in a configuration file. This configuration file should live in …

I think command line flags should be reserved for options which only make sense as "one-time" custom execution parameters. For example, the existing kubeadm init … Regardless, I think all (or almost all) command line flags should also be definable in a configuration file. This creates a need to set precedence. I think command line flags should always have precedence above that of a config file.

I feel my above proposal might serve as a starting point of a policy for this issue, but now I introduce my new layer of complexity. Distributions, or any other solution providers wanting to bundle k8s/kubeadm out of the box, need to have a way of providing their default configuration also. And yet, this configuration needs to be alterable by users. This is made even more complex when you consider that many container OS platforms (CoreOS, Kubic, etc.) have read-only filesystems, encompassing either …

Luckily, systemd provides a model which I think we should consider here (wow, I never thought I'd say that ;)). I think we should have "distro provided" configuration files, stored in … Whether or not this solution should include a systemd-like drop-in feature, where selective parameters from the … All applications should therefore consume configuration in the following order of precedence.
Does this sound like a viable way forward?
i would like to bring Windows into the mix, because kubeadm has plans to support the OS. and i would leave it to others to comment more on the default configs in … the only thing that maps well is:
I think that there are a few more things to consider here on those grounds:
Therefore, I am a little bit hesitant to introduce pre-defined config file locations in kubeadm. If we do this, we'll end up with more code complexity, since we'll also have to introduce workarounds for Windows and macOS some day. The whole idea behind flags overriding the config file options is to avoid changing the config file if one needs to set the node name or the API server advertise address.
That might be true for some aspects of the configuration files, but I'd describe things like deciding which CRI runtime is used by the kubelet as far more foundational. I might be able to agree with your points for some aspects of the configuration, but at the very least for those related to the choice of runtime, I strongly hold that we need to have tiering of the sort I describe in my earlier post. The model is one that is proven to work well in conjunction with other configuration management systems (e.g. salt, ansible, etc.), and my experience with such tools strongly suggests they too benefit from knowing where to expect to draw their configuration from. I don't think the argument that "these other tools exist, so we don't need to worry about how to do things consistently ourselves" is a valid train of thought.
I can agree on the CRI part, but even that can be viewed as part of the deployment recipe - one can wish to deploy a machine as part of one cluster with CRI-O and then decide to reset it and join to another cluster with Docker.
The thing that bothers me in that model is that it may now be …
But the … I'm not saying my proposed way is the only way forward, but we need a way forward, and I strongly disagree with your implication that the status quo is acceptable. If Kubernetes doesn't have sensible, standardised, consistent models that allow distributions to provide sane config defaults that can then be overridden by users, then the viability of shipping Kubernetes as an integrated part of any end-to-end stack will be questionable. I think we need to consider these needs as a natural part of Kubernetes growing up into a solution which seeks to be easily integrated across multiple platforms by multiple stakeholders.
And I see the opposite - because so far every single one of the issues I've had getting Kubernetes and kubeadm integrated into openSUSE has been frustrated by the lack of consistency. Which also then prevents me from contributing to the messy, incomplete starting points like the upstream rpm specfiles, because real distributions have had to hack around so many nasty things that there is no longer any relationship to the rpm defs you see in the k8s git. All of which conspires to make things painful when we want to keep up with k8s versions, which change all of the above, and suddenly we have weeks or months of work to do before we can deploy the new version to our users, just to have the next k8s version out before we've completed that work.. and the cycle continues ad infinitum....

This issue seeks to establish a unified policy precisely because Kubernetes needs to 'plant a flag' in the ground so other stakeholders, like distributions, have a standardised, documented, and well-thought-out framework to live with. Expecting everything to keep working together nicely with the current status quo - inconsistent flags, config files appearing and disappearing from different locations and being consumed and unconsumed by tools like kubeadm seemingly at random between versions - really is not something which can continue indefinitely if we expect to keep on being able to play well with others. The solution doesn't need to be perfect, and I'm more than happy to compromise on my proposal, but we need a solution.
@sysrich thanks for the clarification. I agree with the expressed opinion. As long as all (or at least most of) the distros agree on the selected locations for default config files, I am perfectly fine with this.
/assign @neolit123 |
as discussed in today's kubeadm office hours, i have created a KEP doc for this. this feature is outside of the 1.12 scope, but we can try to finish the KEP soon. one big preliminary xref: kubernetes/kubernetes#66649
@sysrich thanks for keeping this discussion moving |
@neolit123 I don't think that forking viper is the best approach here. We should probably try contributing patches upstream to viper first. @fabriziopandini @sysrich merging configs from different locations seems like a bad idea to me. There are so many things that could go wrong, and hard-to-find bugs just waiting to happen. If we have multiple configuration files, we should probably just search locations in a particular order and use the first one found.
viper is largely unmaintained, the idea of homebrewing an alternative received some +1 at sig-architecture. |
Storing settings in the framework's TestContext is not something that out-of-tree test authors can do, because for them the framework is a read-only upstream component. Conceptually the same is true for in-tree tests, so the recommended approach is to define configuration settings in the code that uses them.

How to do that is a bit uncertain. Viper has several drawbacks (maintenance status uncertain, cannot list supported options, cannot validate the configuration file). How to handle configuration files is currently getting discussed for kubeadm, with similar concerns about Viper (kubernetes/kubeadm#1040).

Instead of making a choice now for E2E, the recommendation is that test authors continue to define command line flags as before, except that they should do it in their own code and with better flag names. But the ability to read options also from a file is useful, so several enhancements get added:
- all settings defined via flags can also be read from a configuration file, without extra work for test authors
- framework/config makes it possible to populate a struct directly and define flags with a single function call
- a path and file suffix can be given to --viper-config (as in "--viper-config /tmp/e2e.json") instead of expecting the file in the current directory; as before, just plain "--viper-config e2e" still works
- if "--viper-config" is set, the file must exist; otherwise the "e2e" config is optional (as before)
- errors from Viper are no longer silently ignored, so syntax errors are detected early
- Viper support is optional: test suite authors who don't want it are not forced to use it by the e2e/framework
/assign @rdodev @timothysc |
/assign @luxas b/c this overlaps with grand-unified component config. |
the proposal in this doc has mostly stalled at this point. i can still see some benefit in allowing patches from the CLI vs having 100 flags + config, or only config without granular overrides.
the policy for kubeadm is tribal knowledge at this point:
this is the case for kubeadm at least. kubelet and other components are sparse:
other notes:
|
Assumptions:
We are looking for a happy medium between "everything is in CLI flags" and "everything is in the configuration." This issue should produce a formal policy that sorts new options into the "flag and configuration option" and "configuration option only" buckets. The latter is likely for more obscure options, but where exactly the line between "common" and "obscure" lies should be firmly delineated.
KEP WIP doc:
https://docs.google.com/document/d/1LSb2Ieb4XrxQ3cG6AEkwpfN1hb1L6pJdb0vMo10F41U/edit?usp=sharing