Updating plugin to use helm version 3 #378
Conversation
* Switch to sigs.k8s.io/yaml for YAML unmarshalling. This allows the correct behaviour for string-key-only nested maps from values, which are now typed as map[string]interface{} rather than map[interface{}]interface{}
* go mod tidy
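For context on why that switch matters, here is a minimal, self-contained sketch (not the provider's code) comparing the two decoders: gopkg.in/yaml.v2 decodes nested maps as map[interface{}]interface{}, while sigs.k8s.io/yaml round-trips through JSON and yields map[string]interface{}.

```go
package main

import (
	"fmt"

	yamlv2 "gopkg.in/yaml.v2"
	sigsyaml "sigs.k8s.io/yaml"
)

const doc = `
image:
  repository: nginx
  tag: latest
`

func main() {
	// gopkg.in/yaml.v2: nested maps decode as map[interface{}]interface{},
	// which breaks code that expects JSON-style map[string]interface{}.
	var v2 map[string]interface{}
	if err := yamlv2.Unmarshal([]byte(doc), &v2); err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", v2["image"]) // map[interface {}]interface {}

	// sigs.k8s.io/yaml converts YAML to JSON before unmarshalling, so
	// string-keyed nested maps come back as map[string]interface{}.
	var sigs map[string]interface{}
	if err := sigsyaml.Unmarshal([]byte(doc), &sigs); err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", sigs["image"]) // map[string]interface {}
}
```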
helm/provider_test.go
helmHome, err := ioutil.TempDir("", "terraform-acc-test-helm-")
if err != nil {
	t.Fatalf("Failed to create new temporary directory for use as helm home: %s", err)
}
os.Setenv("HELM_REPOSITORY_CONFIG", "/Users/amell/Library/Preferences/helm/repositories.yaml")
I could use some advice here. Not sure how these should be set.
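One possible answer, sketched below under the assumption that the acceptance tests should never touch the developer's real Helm files: point Helm 3's repository environment variables at paths inside the per-test temp dir instead of a hard-coded local path. The helper name is illustrative, not part of this PR.

```go
package helm_test

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

// setTestHelmEnv points Helm 3's repository config and cache at a throwaway
// directory so acceptance tests never read or write the developer's own
// Helm configuration. (Illustrative helper; not part of this PR.)
func setTestHelmEnv(t *testing.T) {
	helmHome, err := ioutil.TempDir("", "terraform-acc-test-helm-")
	if err != nil {
		t.Fatalf("Failed to create new temporary directory for use as helm home: %s", err)
	}
	os.Setenv("HELM_REPOSITORY_CONFIG", filepath.Join(helmHome, "repositories.yaml"))
	os.Setenv("HELM_REPOSITORY_CACHE", filepath.Join(helmHome, "repository"))
	os.Setenv("HELM_PLUGINS", filepath.Join(helmHome, "plugins"))
}
```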
return r, nil
entry := file.Get(name)

// Not sure I agree with the logic here. Should a data source really update an underlying resource every time it's called?
Not sure about this behavior, and whether it completely makes sense. Would like feedback.
Description: "Repository where to locate the requested chart. If is an URL the chart is installed without installing the repository.", | ||
Description: "Repository where to locate the requested chart. If is a URL the chart is installed without installing the repository.", | ||
}, | ||
"repository_key_file": { |
Should the repository settings be moved into their own block of settings like kubernetes settings are in the provider?
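For discussion, here is a hedged sketch of what a nested repository block could look like using the terraform-plugin-sdk schema helpers, mirroring how the kubernetes settings are nested. The field names, helper name, and import path are assumptions for illustration, not what this PR ships.

```go
package helm

import (
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

// repositorySchema sketches grouping the repository_* attributes into a
// single nested block, similar to the provider's kubernetes {} block.
// (Hypothetical layout for discussion only.)
func repositorySchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"url": {
					Type:        schema.TypeString,
					Required:    true,
					Description: "Repository URL where to locate the requested chart.",
				},
				"key_file": {
					Type:        schema.TypeString,
					Optional:    true,
					Description: "The repository's cert key file.",
				},
				"username": {Type: schema.TypeString, Optional: true},
				"password": {Type: schema.TypeString, Optional: true, Sensitive: true},
			},
		},
	}
}
```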
Seems to be an issue with pods not going into the correct namespace. I have a bunch of pods, and some of them ended up in the default namespace, even though I don't use the default namespace.
* Swap mergeValues with mergeMaps from the helm CLI package
* Fix kubernetes block configuration. Using temp files to store cert data since genericclioptions.ConfigFlags doesn't allow us to set cert data directly and only accepts file paths. This is an inconvenience drawn from Helm's k8s client initialization pattern.
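Since genericclioptions.ConfigFlags only exposes file-path fields, the workaround described in that commit boils down to something like the sketch below. The helper name and the PEM placeholder are illustrative, not the PR's exact code.

```go
package main

import (
	"io/ioutil"
	"log"

	"k8s.io/cli-runtime/pkg/genericclioptions"
)

// writeTempCert writes raw PEM data to a temporary file and returns its path,
// because ConfigFlags exposes *string file-path fields rather than raw bytes.
// (Hypothetical helper for illustration.)
func writeTempCert(pem []byte) (string, error) {
	f, err := ioutil.TempFile("", "helm-provider-cert-")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := f.Write(pem); err != nil {
		return "", err
	}
	return f.Name(), nil
}

func main() {
	certPath, err := writeTempCert([]byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"))
	if err != nil {
		log.Fatal(err)
	}

	flags := genericclioptions.NewConfigFlags(false)
	flags.CertFile = &certPath // only a file path can be set, hence the temp file
	_ = flags
}
```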
If it's the aaronmell/terraform-provider-helm/dev-v3 branch that is the most up to date, yeah, I guess I am. I have also tried with the config path removed, but I get the same issue. Have you tested Terraform Cloud?
TBH, I'll be surprised if the maintainers will ever (meaning: anytime soon) accept this PR. Not supporting Helm v2 will not be viable for at least another year. AFAIR, accepting this provider into the official list of providers was a painfully slow process, and it looks like the pace of development hasn't changed either. All in all: I think maintaining a community fork at this point is justified. Thanks to the new Terraform registry we should soon be able to publish providers (eg.
👋 Thanks so much for taking on this feature @aaronmell. I've just joined the providers team at HashiCorp and I'm going to be one of the maintainers for the Helm provider going forward. I wanted to drop a reply here to say I'm very keen to get v3 in, and that we should be seeing a bit more traction on this repository. We are currently a very small team so we're super appreciative of your contributions, and your patience in waiting for this feature to land. @aaronmell Are you planning on adding anything more to this PR, or is it ready for review?
@jrhouston I have nothing else to contribute. I know that @relu was working on a few things, but I believe they are optional.
Great hearing the news, @jrhouston! @aaronmell that's right, the import feature I've worked on can be introduced in a future release. I was not able to reproduce @Firaenix's issue in a "sterile" environment using the same auth method, though I admit I was not yet able to test it on TF Cloud, unfortunately. I think some more testing might be necessary and maybe some tidying up here and there, but aside from that it should be good to go.
Is this under review? I've had success using it and would like to see it merged into the official helm provider.
Can't wait for this to be merged in. Managing v2 with TLS via Terraform is a pain! Also, can we please have good, clear examples of how to configure the provider with various configurations, e.g. GKE, EKS. Thank you!
By the way, I think #359 might be a prerequisite here. Also, @sagikazarmark sounds great!
Heya @jakesyl, isn't Helm 3 "tiller-less"? Why should one need to issue a helm install in the first place? Or am I missing something?
Yeah, the absence of Tiller is why it's a prerequisite. Tiller is no longer in the cluster, which means every operation requires a port forward since they're running outside of the cluster.
@jakesyl Tiller is no longer in the cluster because it does not exist anymore, hence there's nothing to port-forward to. The current implementation of Helm v3 uses configmaps or secrets to store release metadata, and Helm interacts with Kubernetes directly through the API client. That being said, I believe the issue you mentioned does not apply in our case here.
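To make that concrete, here is a minimal sketch of driving Helm v3 from Go without Tiller, assuming a helm.sh/helm/v3 module around v3.0.3 and the "secret" storage driver this PR defaults to; the hard-coded namespace is a placeholder.

```go
package main

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"k8s.io/cli-runtime/pkg/genericclioptions"
)

func main() {
	namespace := "default" // placeholder

	// genericclioptions.ConfigFlags implements the RESTClientGetter that
	// action.Configuration.Init expects; the provider builds one from its
	// kubernetes {} block.
	flags := genericclioptions.NewConfigFlags(false)
	flags.Namespace = &namespace

	// "secret" stores release metadata as Kubernetes Secrets in the release
	// namespace; "configmap" and "memory" are the other drivers.
	cfg := new(action.Configuration)
	if err := cfg.Init(flags, namespace, "secret", log.Printf); err != nil {
		log.Fatal(err)
	}

	// Everything talks to the Kubernetes API directly; there is no Tiller
	// to port-forward to.
	releases, err := action.NewList(cfg).Run()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range releases {
		log.Printf("release %s in namespace %s", r.Name, r.Namespace)
	}
}
```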
You'll need make, git and golang version 1.12 or greater to build the provider
@aaronmell This is awesome, can't wait for this to get merged in. Do you have any recommendation for cleanly migrating an existing helm2 release within Terraform? I haven't been successful yet doing anything other than deploying with this from a clean state. Which isn't the end of the world, but I always prefer to minimize downtime whenever possible.
@rigratz I haven't tried upgrading. I believe there is a helm plugin for that, but I haven't looked into it.
Oh yeah, the plugin works great for a manual migration of a release. But the next time I run my TF pipeline after the migration it tries to 'create' again and hits a conflict because a release with the same name exists but the state doesn't get updated accordingly (since Terraform didn't handle the migration). Probably just a TF state quirk, so was hoping maybe you'd run into it before. No worries, thanks for the response!
@rigratz you can try out this import feature implementation; it's not very well tested, but it worked fine for my use cases: aaronmell#7
Excited to see progress on this, I hope this becomes available soon!
Bump helm dep module version to v3.0.3
🎉
Huge thank you to @aaronmell and @relu for this contribution and to all who reviewed and commented. ❤️
Best merge I've seen all year! Thanks to all the contributors!
Now the operative question is what the expectation is on a release timeline? ("at least" to the registry) I would love to get the loading of these binaries from a git submodule out of my TF Cloud runs (it would make speculative plans way more usable again).
@donaldguy The release is running now. 😄
Thanks for the awesome work! Upgrading was rather easy. To migrate, I did:
* Updated to v3.0 of helm
* Set the default driver to secret
* Switch to sigs.k8s.io/yaml for YAML unmarshalling. This allows the correct behaviour for string-key-only nested maps from values, which are now typed as map[string]interface{} rather than map[interface{}]interface{}
* Updated the documentation. Added missing username/password settings for repository
* Updated the vendor folder
* Removed tests from provider that were no longer needed
* Fix kubernetes block configuration. Using temp files to store cert data since genericclioptions.ConfigFlags doesn't allow us to set cert data directly and only accepts file paths. This is an inconvenience drawn from Helm's k8s client initialization pattern
* Cleanup and small refactoring of KubernetesConfig struct
* Ensure failed releases are tracked in the state. Currently when a release fails, the error is returned and no state is preserved even though helm creates the release with the failed status. This means that if one needed to re-create the release they would have to manually delete the failed release via the helm CLI
* Updated changelog
* Added missing parameter replace to the release resource
* Play with kubernetes initialization, using the same logic implemented in the kubernetes provider
* Remove unneeded RESTClientGetter type declaration
* Refactor kubernetes configuration init
* Fix parallel execution of concurrent release test. The race condition for creating/deleting the namespace was removed by allocating different namespace names per concurrent test execution
* Add explicit depends_on to helm_release resource
* Bump helm dep module version to v3.0.3

Co-authored-by: Aurel Canciu <aurelcanciu@gmail.com>
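As an aside on the "Fix parallel execution of concurrent release test" item above, the fix amounts to giving each parallel test its own namespace name. A rough, self-contained sketch follows; the helper and test names are illustrative, not the PR's.

```go
package helm_test

import (
	"fmt"
	"math/rand"
	"testing"
)

// randomNamespace returns a uniquely suffixed namespace name so concurrent
// acceptance tests no longer race on creating and deleting the same
// namespace. (Illustrative; the PR uses its own naming scheme.)
func randomNamespace(prefix string) string {
	return fmt.Sprintf("%s-%d", prefix, rand.Intn(1000000))
}

func TestAccResourceReleaseConcurrent(t *testing.T) {
	t.Parallel()
	ns := randomNamespace("terraform-acc-test")
	t.Logf("running against namespace %s", ns) // each parallel test gets its own namespace
}
```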
I'm opening the PR to solicit feedback from the community. It's not ready yet, but it's close. I'll annotate some lines that I know still need a working solution.
#299