Add a limited generic YAML resource to the provider #195
First up, hear me out - I think this is a good approach and has serious upside for a large number of users. I know this has been looked at before; I've spent time reading those threads and hope that this approach is different enough to warrant consideration.
What doesn't this solve?
Creation of standard types (
What does it solve?
This allows users to create other exotic types in Kubernetes through Terraform; the main aim is to support CRDs (currently testing with an
It follows a similar model to the existing Azure TF provider, in which rich types are provided but a 'fallback' resource is provided to supply JSON directly to the Azure APIs. Like that resource, it has heavy limitations (updates are create/delete, the YAML must only contain a single resource, and there are no outputs).
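For illustration, usage might look something like the following. Note that the resource name `kubernetes_yaml`, the attribute name `yaml_body`, and the CRD shown are all hypothetical stand-ins here, not taken from the PR itself:

```hcl
# Hypothetical usage sketch of a generic YAML resource.
# Resource and attribute names are illustrative only.
resource "kubernetes_yaml" "crd_instance" {
  yaml_body = <<YAML
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-crontab
spec:
  cronSpec: "* * * * */5"
  image: my-cron-image
YAML
}
```

Because the YAML may only contain a single resource, multi-document manifests would need to be split into one resource block per object.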
Why is it different from past attempts?
The code doesn't require any knowledge of the resources it's creating. Using the
This means that, like the ARM Deployment resource, it can support new K8s resources going forward without code changes.
Secondly, the code avoids using odd parsing and reflection methods on the YAML and the cluster's version of the resource to spot diffs. It uses a
Currently this is manually tested. The code is commented to aid understanding, but I haven't updated the provider docs. If there is interest in taking this forward I will:
That would be a great addition!
What about using JSON instead of YAML to be able to leverage terraform's
Maybe both languages could be supported and the resource be named something like
Edit: the current implementation does not seem too tightly tied to YAML, so maybe just a
@lawrencegripper thanks for this PR, this is a common ask (see #150, #141, #109 and also terraform-provider-k8s), and definitely something we are interested in doing. We also do something similar for the
Due to the way the type and diffing system was implemented in Terraform 0.11 and earlier, we couldn't get the UX to work quite the way we wanted. In Terraform 0.12, coupled with some of the improvements to the type system which allow for better nested complex types, we think we can provide this capability in a way that fits in much more cleanly with how Terraform users expect it to work (and how Kubernetes natives expect it to work), but it requires changes to the Terraform Provider SDK to enable those new features.
You've also run into some of the complications with the
We have an ongoing project internally, where we have collected the requirements for this new resource, and are just waiting for 0.12 to be released (and the new maintainer for this provider, @alexsomesan to ramp up somewhat), before working on the proposal for implementation. I would think once 0.12 has landed, and "baked" somewhat, we will probably be posting the proposal on this repository for community feedback.
@paultyng awesome, great to hear there will be improvements that can be made to this when TF 0.12 arrives. In the meantime I think the UX of the code in this PR does fit the brief of matching expectations of TF and K8s users.
I've recorded a quick run-through of using the resource. First up, I use TF to create the resource in the cluster, then I use kubectl to delete the resource and show TF recreating it. Next I edit the resource with kubectl (changing port 80->81) and then re-run TF. As the edit has caused the
Regarding the 3-way merges and kubectl apply, I spent a bit of time using govendor to pull in the
One thing that put me off this approach was that it didn't seem to fit with my expectations of how the Terraform provider should work (these may well not be right). I have an expectation that TF follows a desired-configuration approach and will fix any drift between the declared configuration and the actual running state. As such, if I declare a YAML definition for a resource, I expect that to be what is running. I didn't feel like users would expect manual edits made to the resource through
Another limitation of attempting to use the 3-way merge approach is that the provider has no knowledge of which fields in the YAML support updating. Essentially you would need to dynamically decide which fields are likely to require
That being said, if 3-way merge support is a blocker to this work being merged, I'm happy to revisit the problem and attempt to vendor in the
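For anyone following along, the intuition behind the 3-way merge being discussed can be sketched roughly as follows. This is a naive, flat-map illustration of the general idea only - it is not kubectl's strategic merge patch implementation, and none of the names below come from the PR:

```go
package main

import "fmt"

// threeWayMerge is a naive sketch of the idea behind kubectl apply's
// 3-way merge (NOT the real strategic-merge-patch code):
//   original - the last-applied configuration
//   modified - the new desired configuration
//   current  - the live object state in the cluster
// Fields the config sets always win; fields only the cluster set are
// left alone; fields the config used to set but dropped are removed.
func threeWayMerge(original, modified, current map[string]string) map[string]string {
	result := map[string]string{}
	for k, v := range current {
		_, wasManaged := original[k]
		_, stillManaged := modified[k]
		// Keep a live field unless the config previously managed it
		// and has now dropped it.
		if !wasManaged || stillManaged {
			result[k] = v
		}
	}
	// The new desired configuration wins for every field it sets.
	for k, v := range modified {
		result[k] = v
	}
	return result
}

func main() {
	original := map[string]string{"port": "80", "label": "v1"}
	modified := map[string]string{"port": "81"} // user dropped "label"
	current := map[string]string{"port": "80", "label": "v1", "clusterIP": "10.0.0.1"}

	merged := threeWayMerge(original, modified, current)
	fmt.Printf("port=%s clusterIP=%s\n", merged["port"], merged["clusterIP"])
	_, labelKept := merged["label"]
	fmt.Println("label kept:", labelKept)
}
```

The point of the third (live) input is exactly the drift question raised above: cluster-written fields like `clusterIP` survive the merge rather than being treated as user drift.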
I think the issue with this is that users aren't the only thing editing the YAML. Admission controllers, scaling systems, etc. are all modifying resources, so there is potential for a lot of non-user drift in the configuration.
An ingress is an interesting example: in a cloud provider, if you provision an ingress, I would imagine it gets updated with information about the cloud's ingress system, which could potentially lead to an update loop?
Oh, it looks like in the recent versions of Kubernetes, their patcher is now exported:
It looks like ksonnet uses a copy and paste of the previous version:
Damn - good spot, hadn't considered that.
I had a play this morning and I believe I have a solution which will work with
I loop through the fields set in the user provided
I then build a
For admission controllers, like this example mutating controller I wrote, my expectation is that, as the mutation occurs at the time of resource creation, the
To test this worked as expected I added a service into the
The end result is that, in my limited testing, it should now play nicely with a mutating admission controller and with updates to the status fields of the resource (or other fields not directly set by the user). I still use the
@paultyng Is this something that you would consider merging assuming I add the appropriate tests to prove out this behavior and update the docs accordingly?