Support CRDs #1194
To allow Flux to be used with tools such as seldon-core that use Custom Resource Definitions. This would also facilitate surrounding projects such as Kubeflow creating "GitOps" machine-learning e2e pipelines (kubeflow/kubeflow#971), as well as other examples such as #1038.
Comments
Hi @cliveseldon, can you please post a CR example?
The proto definition:
@cliveseldon Are you expecting WeaveFlux to automatically substitute the images into the manifests, and is that why you think WeaveFlux needs to know about CRDs?

The trend (e.g. the helm v3 discussions) seems to be that the tool chain should be composable. So my expectation is that people templatize their app using whatever tool they like (ksonnet, kustomize, helm, etc.) and then downstream tools in the app management tool chain (e.g. Weave) just consume the fully realized YAML manifests.

Within Kubeflow we have a number of places (e.g. hyperparameter tuning) where we need to substitute parameters into some user-defined manifest. So I think the direction we are headed is creating an API like getManifests(Git URI) that could return the manifests based on a URI pointing at an app written in any number of templating solutions (e.g. helm or ksonnet). A tool like WeaveFlux then wouldn't necessarily need to know about either (1) the particular structure of the resources or (2) the particular implementation of the templating solution; it would just need an endpoint that knew how to evaluate one or more templating solutions.
@jlewi That makes sense. Maybe I'm just not clear then why you would need WeaveFlux: if the image repos and manifests have been updated downstream, the remaining functionality is simply to push those new manifests to the k8s cluster over the k8s API. Maybe you can show how the API you mention would work with WeaveFlux? I'm not clear on that.
Hopefully the Weave folks will chime in, but my understanding is that WeaveFlux automates applying the manifests, which is the functionality I'm looking for.
So here's what I have in mind.
In this case, the Deployments in the Git repo are updated manually. I would automate that upstream of WeaveFlux using a combination of Prow, Cron, and Argo. As a specific example, we'd like to automatically push the latest Kubeflow changes to dev.kubeflow.org. Here's how that might work:
We could even have our Argo workflow emit the YAML manifests and check those in, so that we aren't blocked on the lack of ksonnet support in Argo. /cc @thejaysmith
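Purely as a sketch of that idea (the workflow name, container image, and repo layout here are hypothetical), an Argo Workflow step could render the ksonnet app to plain YAML and commit the result for Flux to pick up:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: render-manifests-
spec:
  entrypoint: render
  templates:
  - name: render
    container:
      image: example/ks-tools:latest   # hypothetical image with ks and git installed
      command: [sh, -c]
      args:
      - |
        git clone https://github.com/example/app-config.git /src
        cd /src/ks-app
        # evaluate the ksonnet environment into plain YAML
        ks show default > ../manifests/app.yaml
        cd /src
        git add manifests/app.yaml
        git commit -m "Update rendered manifests"
        git push
```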
The UI seems to be more in line with the premium version; there is a fluxctl CLI tool available in the standalone version. I have nearly completed converting my Flux YAMLs to a ksonnet app, so we'll see how it works. The Argo integration will be an interesting piece to try out. Interested in hearing the opinion of the WeaveWorks team.
This is a really interesting discussion, and echoes discussions we've had internally about the design and purpose of flux (and latterly, the helm operator). You can look at flux as being three things, which are munged together for those historical reasons we're always hearing about.
Now, 2) and 3) obviously go together quite closely, but 1) is really its own thing that could stand alone. So you can treat the question of supporting CRDs (or Helm, or ksonnet) as having two parts: firstly, can flux apply the given flavour of config to the cluster in a sensible way? Secondly, can image updates be supported for the given flavour of config? By "sensible" I mean that we can maintain the invariant.

Anyway, my answers thus far look like this:
When I use "needs interpretation" above, I mean that flux has to have code that deals specifically with that kind of config -- so for example, we had to bake in code for updating FluxHelmRelease resources, since they necessarily differ from the regular Kubernetes resources.

In the case of ksonnet, and to some extent Helm, the image update automation may not be that important, since they have their own units of deployment which aren't images -- we may have got that slightly squint in the Helm operator design, I'm not sure yet. (In fact it may turn out that automated image updates are a pretty specialised requirement that we happened to need ourselves for our own infrastructure.)
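To make that concrete, here is a minimal sketch of what an early FluxHelmRelease looked like; it is written from memory of the v1alpha2 schema, so treat the chart path, release name, and image value as illustrative:

```yaml
apiVersion: helm.integrations.flux.weave.works/v1alpha2
kind: FluxHelmRelease
metadata:
  name: mongodb
  namespace: kube-system
  labels:
    chart: mongodb
spec:
  chartGitPath: mongodb
  releaseName: mongo-database
  values:
    # the image is an opaque chart value, not a field of a PodTemplateSpec,
    # which is why flux needed resource-specific code to update it
    image: bitnami/mongodb:3.7.1
```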
IMO the tool chain should be composable. So a tool that does 1) -- apply config in git to a cluster -- shouldn't be closely coupled to the templating solution (e.g. ksonnet, helm, jinja). It seems to me that a good way to achieve this would be to make YAML the common interface. So in the case of ksonnet the flow would be:
A separate tool could be used to automate the above steps; then we just need to implement this tool for different templating solutions. The biggest barrier to automating the above (at least for Kubeflow) is automating the git workflow, i.e. creating the commits and getting them reviewed and merged.
That tooling is specific to one's source control and review system. In our case we use GitHub and Prow.
I am for this. Regardless of what templating solution is used, there needs to be a common YAML. We have a simple Python script to convert YAML into Jsonnet (I used that in part to create the ksonnet package for Flux), so improving that could be used to convert jsonnet or whatever back into standard, k8s-friendly YAML. For the larger issue, it seems like step 1 still requires us to use ksonnet, so it forces us to maintain the jsonnet language and translate YAML back into a ksonnet format.
To respond to the original post:
For Seldon's CRDs, the images will be contained in PodTemplateSpecs which are part of the CRD.
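For illustration, a minimal sketch of such a custom resource; the resource name and image are made up, and the field layout follows the Seldon v1alpha2 schema of that era, so treat it as approximate:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: mock-classifier
spec:
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:                 # an embedded PodTemplateSpec-style pod spec
        containers:
        - name: classifier
          # the image flux would need to locate and update lives here,
          # nested inside the custom resource rather than in a Deployment
          image: seldonio/mock_classifier:1.0
    graph:
      name: classifier
      type: MODEL
```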
Hi, as this represents a new feature, it is outside of the maintenance policy for Flux v1 to add it now. The good news is that it is already a supported feature in Flux v2: automation is controlled by custom resources, and image tags in custom resources can be safely updated by the image automation controllers.

As this was classified as an enhancement, it falls outside of the maintenance policy for Flux v1 (edit: the Flux v1 FAQ should cover this with an entry at the top, I will fix this soon), which is in maintenance mode now. Please migrate to Flux v2; you're welcome to request any missing features or not-well-covered use cases through the discussions page.

Respecting that you may have already moved on (hopefully to Flux v2), I am closing this issue for now. Thanks for using Flux!
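For reference, a minimal sketch of the Flux v2 approach; the repository, policy name, namespace, and tag range here are illustrative:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: mock-classifier
  namespace: flux-system
spec:
  image: seldonio/mock_classifier   # registry to scan for new tags
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: mock-classifier
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: mock-classifier
  policy:
    semver:
      range: 1.x            # select the latest 1.x tag
```

The field to rewrite is then marked in any manifest, including a custom resource, with a setter comment such as `image: seldonio/mock_classifier:1.0 # {"$imagepolicy": "flux-system:mock-classifier"}`, and the image automation controller commits the updated tag back to git.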