
Reduce verbosity of kubernetes extension config with inheritance or more opinionated defaulting #34025

Closed
holly-cummins opened this issue Jun 13, 2023 · 24 comments · Fixed by #34487
Assignees
Milestone

Comments

@holly-cummins
Contributor

Description

I've just been writing some config for the kubernetes extension, for an app in the superheroes sample.

I was surprised how much of it there was. See, for example, https://github.com/quarkusio/quarkus-super-heroes/blob/1049a71741404c98e5a122e9fc9e7181516d2ce3/rest-fights/src/main/resources/application.properties#L76. Part of the issue is that the sample has config for knative, openshift, minikube, and kubernetes. From reading the documentation, there doesn't seem to be any inheritance between the namespaces, so if I configure a property for kubernetes, I then have to replicate all the config for openshift, even though openshift is a specialisation of kubernetes.

I also wonder whether some of the properties being configured could be defaulted?

See also discussion in https://quarkusio.zulipchat.com/#narrow/stream/187030-users/topic/DRYer.20config.20for.20kubernetes.20extension/near/364657444.

Implementation ideas

No response

@holly-cummins holly-cummins added the kind/enhancement New feature or request label Jun 13, 2023
@quarkus-bot

quarkus-bot bot commented Jun 13, 2023

/cc @Sgitario (kubernetes), @geoand (knative,kubernetes,minikube,openshift), @iocanel (knative,kubernetes,openshift)

@quarkus-bot

quarkus-bot bot commented Jun 13, 2023

You added a link to a Zulip discussion, please make sure the description of the issue is comprehensive and doesn't require accessing Zulip

This message is automatically generated by a bot.

@geoand
Contributor

geoand commented Jun 14, 2023

@radcortez is there anything in Smallrye Config we could leverage here?

@radcortez
Member

We don't have any notion of namespace inheritance in config like described here. Mappings do support a Base mapping to share properties between multiple ConfigRoots (currently, each specialization has its own root). That would require rewriting the current Roots.

An easier approach could be to use the fallbacks: https://smallrye.io/smallrye-config/3.3.0/extensions/fallback/#use-smallrye-config-in-a-java-application

For instance, we can write a fallback function to replace openshift with kubernetes in the property name. Config will first look for the original property name and then fallback to another name if the original one cannot be found.
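As a minimal sketch (the class and constant names below are hypothetical, not the SmallRye Config implementation), such a fallback can be expressed as a name-rewriting function that maps an openshift property name onto its kubernetes counterpart:

```java
import java.util.function.Function;

public class OpenshiftFallback {
    // Rewrite quarkus.openshift.* names to quarkus.kubernetes.*;
    // leave all other property names untouched.
    static final Function<String, String> TO_KUBERNETES = name ->
            name.startsWith("quarkus.openshift.")
                    ? "quarkus.kubernetes." + name.substring("quarkus.openshift.".length())
                    : name;
}
```

Config would try the original name first and only perform a lookup with the rewritten name when no value is found for the original one.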

@radcortez radcortez self-assigned this Jun 14, 2023
@holly-cummins
Contributor Author

Subscribing @edeandrea too :)

The fallback option sounds ideal, @radcortez.

@edeandrea
Contributor

edeandrea commented Jun 15, 2023

Thank you @holly-cummins for looping me in.

I'm not sure fallbacks would help here, unless I'm not understanding how they work. These particular properties are read at build time (not runtime) by various extensions at different points in time during the build process (during the maven package phase). Different extensions are looking at various namespaces.

Maybe using an example would help clarify if this indeed could be a viable solution?

quarkus.kubernetes.part-of=fights-service
quarkus.kubernetes.env.configmaps=${quarkus.application.name}-config
quarkus.kubernetes.env.secrets=${quarkus.application.name}-config-creds
quarkus.kubernetes.labels.app=${quarkus.application.name}
quarkus.kubernetes.labels.application=${quarkus.kubernetes.part-of}
quarkus.kubernetes.labels.system=quarkus-super-heroes

quarkus.openshift.part-of=${quarkus.kubernetes.part-of}
quarkus.openshift.env.configmaps=${quarkus.kubernetes.env.configmaps}
quarkus.openshift.env.secrets=${quarkus.kubernetes.env.secrets}
quarkus.openshift.labels.app=${quarkus.kubernetes.labels.app}
quarkus.openshift.labels.application=${quarkus.kubernetes.labels.application}
quarkus.openshift.labels.system=${quarkus.kubernetes.labels.system}

How would the fallback option work when the mvn package phase is run with -Dquarkus.kubernetes.deployment-target=kubernetes,minikube,openshift,knative?

@radcortez
Member

A Fallback is just a function or mapping rule that allows Config to look up the value in another property if the original property didn't get a value. We use this in a number of scenarios. For instance, you can use both quarkus.profile or smallrye.config.profile to set the running profile, but we have a fallback rule from quarkus.profile to smallrye.config.profile, meaning that quarkus.profile will get the value from smallrye.config.profile if quarkus.profile is not found.

In this case, you are doing something similar with property expansion. We can set a simple fallback rule like quarkus.openshift.labels.application -> quarkus.kubernetes.labels.app. Now, consider the following scenarios:

quarkus.kubernetes.labels.app=my-app
  • quarkus.kubernetes.labels.app value is my-app
  • quarkus.openshift.labels.application value is my-app due to the fallback rule
quarkus.kubernetes.labels.app=my-app
quarkus.openshift.labels.application=another-app
  • quarkus.kubernetes.labels.app value is my-app
  • quarkus.openshift.labels.application value is another-app (no fallback rule applied)

In a way, the fallback mechanism allows you to implement some sort of inheritance. You can add multiple fallback rules like:

  • quarkus.kubernetes.labels.app -> quarkus.application.name
  • quarkus.openshift.labels.application -> quarkus.kubernetes.labels.app
  • quarkus.knative.labels.application -> quarkus.openshift.labels.application
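The chained rules above can be simulated with a plain lookup helper (the class, rule map, and method here are illustrative, not the SmallRye Config implementation):

```java
import java.util.Map;

public class FallbackChain {
    // Illustrative fallback rules: each property name maps to the name tried next.
    static final Map<String, String> RULES = Map.of(
            "quarkus.kubernetes.labels.app", "quarkus.application.name",
            "quarkus.openshift.labels.application", "quarkus.kubernetes.labels.app",
            "quarkus.knative.labels.application", "quarkus.openshift.labels.application");

    // Try the original name first; if no value is found, follow the fallback
    // rules until a value turns up or the chain runs out.
    static String resolve(Map<String, String> config, String name) {
        String current = name;
        while (current != null) {
            String value = config.get(current);
            if (value != null) {
                return value;
            }
            current = RULES.get(current);
        }
        return null;
    }
}
```

A value set directly on a name short-circuits the chain, which matches the override scenario described earlier.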

How would the fallback option work when the mvn package phase is run with -Dquarkus.kubernetes.deployment-target=kubernetes,minikube,openshift,knative?

Not sure if I got the question, but fallback is done at the property-name level in the current Config instance, by rewriting the property name with the fallback rule and performing another lookup (or multiple, if fallback rules are chained). What triggers a fallback is not finding a value on the original property name lookup.

I believe that the fallback mechanism should help here, but maybe there is some logic I'm not aware of in how you deal with the different targets.

@edeandrea
Contributor

Looking at the docs you sent it looks like I would need to create a FallbackConfigSourceInterceptor class within the application's source code, correct? And then register it in the application's META-INF/services/io.smallrye.config.ConfigSourceInterceptor file?

My question is: does this work for configuration properties which are read at build time? And if so, since the super-heroes have 5 apps, would we have to have 5 copies of this class and config scattered throughout the source tree?

@radcortez
Member

Looking at the docs you sent it looks like I would need to create a FallbackConfigSourceInterceptor class within the application's source code, correct? And then register it in the application's META-INF/services/io.smallrye.config.ConfigSourceInterceptor file?

Correct.
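For reference, the service-loader registration is a one-line file on the classpath; the package and class name below are placeholders for whatever the interceptor ends up being called:

```
# src/main/resources/META-INF/services/io.smallrye.config.ConfigSourceInterceptor
org.acme.config.OpenshiftFallbackInterceptor
```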

My question is: does this work for configuration properties which are read at build time? And if so, since the super-heroes have 5 apps, would we have to have 5 copies of this class and config scattered throughout the source tree?

What you want to do here is to provide the fallback rules at the extension level, so you only have one copy, and the interceptor is applied to all phases. If I understood correctly, the "inheritance" rules do not change per application, so they can be added at the extension level.

@edeandrea
Contributor

What you want to do here is to provide the fallback rules at the extension level

I don't quite understand what you mean here. Does that mean creating my own extension and then somehow sharing it amongst all the other applications?

@radcortez
Member

No, it's to add this feature directly into our kubernetes extension.

@radcortez
Member

Let me know what the desired behaviour is for each system specialization and I'll handle the rest :)

@Sgitario
Contributor

I like the idea of the fallback configuration option whose general rule is:

  • OpenShift/Kind/Minikube/Knative config will fallback the configuration from Kubernetes Vanilla config

However, my main concern is that it is a breaking change that is not very transparent to users. Users who have configured only the Kubernetes manifests will suddenly see that this configuration is also used for the OpenShift manifests. Though, we might add a big note in the migration guide.

An alternative to the fallback configuration option is to use the same concept as in data sources or rest-clients: multitenant configuration. We could add a base configuration at the root level (quarkus.kubernetes-common...) and keep the existing configurations as they are, for example quarkus.kubernetes... for vanilla Kubernetes, quarkus.openshift... for OpenShift only, etc. All these configurations already inherit a PlatformConfiguration class, so we could use the fallback option here to inherit the quarkus.kubernetes-common properties.

I still prefer the fallback approach, but the multitenant solution embraces the same concept as data sources and rest-clients, and it would not be a breaking change.

@holly-cummins
Contributor Author

holly-cummins commented Jun 16, 2023

@Sgitario that's a good point about breaking changes ... although I suspect it might be a rare case. It would have to be:

  • People are using both OpenShift (etc) and vanilla Kubernetes
  • People have a property defined for Kubernetes, but want to use the default for OpenShift

At first I thought that would be quite rare, but I guess (taking OpenShift in particular) the scenario might be something like "Vanilla Kubernetes is a bag of bits so I need to manually configure lots of stuff, but OpenShift is more secure out-of-the-box so I get sensible config 'for free'."

The kubernetes-common scheme has a few (minor) drawbacks:

  • Making a distinction between kubernetes and kubernetes-common seems odd ... they're all kubernetes, so semantically, the inheritance hierarchy should have kubernetes at its root
  • If the goal is to reduce verbosity, we've just added back 7 characters into each property name :)
  • It might increase the complexity of our documentation: we would have to explain that people should use 'kubernetes-common' for most properties unless they want to avoid inheritance, in which case they should use 'kubernetes'. Instead of being transparent and nice, the inheritance feature would need to be exposed really prominently, to the extent that it starts impacting other areas of the docs and lowering overall usability

So for me, I'd vote for kubernetes as the parent. But if I was a user who got broken, I'd perhaps vote differently. :)

@Sgitario
Contributor

The kubernetes-common scheme has a few (minor) drawbacks:

And the user would also need to learn about this new property, which is not ideal either.

So for me, I'd vote for kubernetes as the parent. But if I was a user who got broken, I'd perhaps vote differently. :)

Yes, I think that as long as we properly document this breaking change in the migration guide, it should be good.

@radcortez
Member

However, my main concern is that is a breaking change that is not very transparent to users. Users having configured only the Kubernetes manifests will suddenly see that this configuration is also used for the OpenShift manifests. Though, we might add a big note in the migration guide.

Currently, because you need to configure everything, if you only have kubernetes configured, it means you only used kubernetes as a target. If you are using both kubernetes and openshift, you are already required to configure the openshift properties (which in that case would not fall back to kubernetes). If you are adding a new target, then you get the fallbacks. We certainly need to document it, but at first glance I don't think it is a big breaking change; I may be missing something, though, since I'm not an experienced user of the kubernetes extension.

@edeandrea
Contributor

What about a simple flag that the user can set to say whether or not they want the fallback? By default it would be off, so it would be backwards compatible.

@holly-cummins
Contributor Author

What about a simple flag that the user can set to say whether or not they want the fallback? By default it would be off, so it would be backwards compatible.

Wouldn't it be simpler for them to just provide the properties if they didn't want the fallback? They'd have to do that anyway if they opted out of the fallback. The one exception is properties where a user would choose to set a value for kubernetes but not for openshift, and I'm not sure we have a good example of what those properties might be. It feels a bit hypothetical so far.

Every knob we provide adds complexity for users, so I think we need to be very sure a knob is needed before introducing it.

@radcortez
Member

Wouldn't it be simpler for them to just provide the properties if they didn't want the fallback? They'd have to that anyway if they opted to not have the fallback. The one exception is properties where a user would choose to set it for kubernetes and not for openshift, and I'm not sure we have a good example of what those properties might be. It feels a bit hypothetical so far.

Correct. If we really want to make things super clear, we can add a log stating that property x from openshift is falling back to property y from kubernetes.

@Sgitario
Contributor

Sgitario commented Jun 30, 2023

Some updates about this issue and possible solutions:

  • The fallback config source interceptor from here is not working for me. You can view what I did (for Openshift only) in this commit.

  • The other solution is using the defaultValue option. The limitation is that this does not work for maps or child config objects, so we would need to use one or the other programmatically. You can view what I did in this commit.

@radcortez did I do something wrong to implement the fallback option?
I don't really like the defaultValue option because it implies LOTS of changes :/

@radcortez
Member

  • The fallback config source interceptor from here is not working for me. You can view what I did (for Openshift only) in this commit.

What is not working exactly? Remember, Fallback only kicks in when the original property name cannot find a value. So what you have says: if we cannot find a value in quarkus.kubernetes.*, fall back to quarkus.openshift.*; but if I understood correctly, what we want is the other way around: if a value cannot be found for quarkus.openshift.*, fall back to the default of quarkus.kubernetes.*.

@Sgitario
Contributor

  • The fallback config source interceptor from here is not working for me. You can view what I did (for Openshift only) in this commit.

what we want is the other way around: if a value cannot be found for quarkus.openshift.*, fallback to the default of quarkus.kubernetes.*.

I think this is what I did (unless you spot the opposite from the changes of the commit).

When I said it's not working: I provide a quarkus.kubernetes.* property (for example quarkus.kubernetes.node-port), and although I can see while debugging that the fallback implementation did the mapping, the OpenshiftConfig nodePort did not have the kubernetes nodePort value during the build step.

@radcortez
Member

@Sgitario try this one: #34487

@Sgitario
Contributor

Sgitario commented Jul 5, 2023

@Sgitario try this one: #34487

taking a look into this now, thanks!
