Support managing Deployment resource #3

Open
hashibot opened this Issue Jun 13, 2017 · 50 comments

hashibot commented Jun 13, 2017

This issue was originally opened by @dasch as hashicorp/terraform#13420. It was migrated here as part of the provider split. The original body of the issue is below.


Currently, I have to use somewhat of a hack in order to have Terraform create my Kubernetes deployments and services:

# A module that can create Kubernetes resources from YAML file descriptions.

variable "username" {
  description = "The Kubernetes username to use"
}

variable "password" {
  description = "The Kubernetes password to use"
}

variable "server" {
  description = "The address and port of the Kubernetes API server"
}

variable "configuration" {
  description = "The configuration that should be applied"
}

variable "cluster_ca_certificate" {}

resource "null_resource" "kubernetes_resource" {
  triggers {
    configuration = "${var.configuration}"
  }

  provisioner "local-exec" {
    command = "touch ${path.module}/kubeconfig"
  }

  provisioner "local-exec" {
    command = "echo '${var.cluster_ca_certificate}' > ${path.module}/ca.pem"
  }

  provisioner "local-exec" {
    command = "kubectl apply --kubeconfig=${path.module}/kubeconfig --server=${var.server} --certificate-authority=${path.module}/ca.pem --username=${var.username} --password=${var.password} -f - <<EOF\n${var.configuration}\nEOF"
  }
}

I use the above module when I need to create resources, e.g.:

module "kubernetes_nginx_deployment" {
  source                 = "./kubernetes"
  server                 = "${module.kubernetes_cluster.host}"
  username               = "${module.kubernetes_cluster.username}"
  password               = "${module.kubernetes_cluster.password}"
  cluster_ca_certificate = "${module.kubernetes_cluster.cluster_ca_certificate}"
  configuration          = "${file("kubernetes/nginx-deployment.yaml")}"
}

This is of course far from perfect: it doesn't support modifying or destroying the resources and is generally brittle.

It would be great if there were either first-class support for Deployment and Service resources or generic support for arbitrary Kubernetes resources through YAML or JSON definitions.
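For illustration, first-class support along the lines requested here might look roughly like the sketch below. The `kubernetes_deployment` resource name and schema shown are hypothetical, modeled on the shape of the provider's existing Pod and Service resources, and an actual implementation could differ:

```hcl
# Hypothetical first-class Deployment resource (sketch only)
resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx"

    labels {
      app = "nginx"
    }
  }

  spec {
    replicas = 2

    # Pod template, mirroring the Kubernetes Deployment spec
    template {
      metadata {
        labels {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:1.13"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}
```

Unlike the `null_resource`/`kubectl` hack above, a resource like this would participate fully in plan, update, and destroy.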

holoGDM commented Jun 28, 2017

There is Service support (link), but there is no Deployment support. It would be nice to configure my whole environment from Terraform, not only part of it. Can you please add it?
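For context, the Service support mentioned above looks roughly like this with the official provider's `kubernetes_service` resource (a minimal sketch; consult the provider docs of the time for the exact fields):

```hcl
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx"
  }

  spec {
    # Route traffic to pods carrying this label
    selector {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "ClusterIP"
  }
}
```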

radeksimko commented Jun 28, 2017

Contributor

As mentioned in the original linked issue and elsewhere, there are no plans for supporting alpha or beta resources, which is the case for Deployment.

I'm happy to revisit this issue once the resource reaches v1 (stable).

Thanks for understanding.

@radeksimko radeksimko changed the title from provider/kubernetes: Support managing Deployment & Service resources to Support managing Deployment resource Jun 28, 2017

@grubernaut grubernaut removed the feature label Jun 28, 2017

roidelapluie commented Jul 4, 2017

I think it is time to change that policy. Could we have a beta version of this provider which contains Deployments?

roidelapluie commented Jul 4, 2017

(I mean, because the providers are now split out in 0.10.)

radeksimko commented Jul 4, 2017

Contributor

@roidelapluie The reasons for not supporting alpha/beta resources remain the same even after the provider split. The problem wasn't/isn't the codebase or its versioning. It's the API versioning and the promises (or lack thereof) attached to those versions.

TL;DR these reasons are IMO still valid: #1 (comment)

Unfortunately we do not yet have a good mechanism for dealing with versioned APIs in Terraform's core. We have discussed it briefly in the team and it is something we want to support eventually, but it's unlikely we'll get to it any time soon.

If you're willing to deal with the problems mentioned in my comment and keen on supporting (potentially) unstable APIs, feel free to fork this provider.

Concrete suggestions on how to deal with versioned APIs in the schema across providers and resources are welcome over at https://github.com/hashicorp/terraform/issues/new

Thanks.

roidelapluie commented Jul 22, 2017

Every single tutorial/training is using Deployments.

mingfang commented Jul 26, 2017

I added Deployment support in my fork: mingfang@50a3086

roidelapluie commented Jul 26, 2017

@mingfang You are awesome.

jingweno commented Aug 20, 2017

Any updates on this?

frosenberg commented Sep 6, 2017

@mingfang could you open a PR so there is a chance this gets into master?

mingfang commented Sep 6, 2017

@frosenberg The problem is that they won't accept any PRs that implement beta features.

roidelapluie commented Sep 6, 2017

Beta features that everyone needs/uses.

frosenberg commented Sep 6, 2017

luispabon commented Sep 12, 2018

Agree with the above. I understand the reasons not to support these, but Deployments, CronJobs, etc. are features of Kubernetes that absolutely everyone uses on a daily basis. There's little incentive to use a provider that we have to constantly work around.

BC breaks are what semver is for.

podollb commented Sep 20, 2017

I also agree. Since the majority of people using k8s are using Deployment (and many using CronJob), it would be extremely helpful if TF had support.

henning commented Oct 28, 2017

I also came here because I wanted to create a Deployment using Terraform...
Following the discussion, I can somewhat understand that the Terraform team doesn't want to go to great lengths to support something the k8s team declares as beta.

I propose, if it is so useful for all of us using and relying on them so heavily, to check why the k8s team still considers them beta and what we can do to help get them declared stable.

jonmoter commented Nov 8, 2017

I encourage you to revisit this policy. Beta objects like Deployments and DaemonSets are used in every production-grade Kubernetes cluster that I've come across. If they're not supported in Terraform, it means I can't use Terraform to manage my Kubernetes resources.

I encourage you to think of terms like Alpha or Beta in the context of the particular software project. Terraform itself hasn't reached a 1.0 release, but that's because of the bar HashiCorp sets for what 1.0 means. I think the Kubernetes project has a pretty rigorous level of quality for beta features.

I understand there is risk in supporting features that could have breaking changes. But for me, Deployment support is MVP functionality of this provider, given the current reality of how Kubernetes works.

zimbatm commented Nov 9, 2017

Just to insist a little bit more: I think that the policy of only maintaining stable APIs made sense while all the plugins were released along with the Terraform source code. In that case, hot-fixing a broken API meant cutting a whole new Terraform release and impacting people who might not even use that particular provider.

Now that the plugins have been extracted from the Terraform codebase, it might make sense to revisit that policy and make it more flexible per provider.

VJftw commented Dec 16, 2017

http://blog.kubernetes.io/2017/12/kubernetes-19-workloads-expanded-ecosystem.html

Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback. SIG Apps has applied the lessons from this process to all four resource kinds over the last several release cycles, enabling DaemonSet and StatefulSet to join this graduation. The v1 (GA) designation indicates production hardening and readiness, and comes with the guarantee of long-term backwards compatibility.

debovema commented Jan 5, 2018

Hi @radeksimko,

Does HashiCorp have a roadmap to integrate these new v1 objects?

Best regards

bassrock commented Feb 22, 2018

@tsloughter I found this one: https://github.com/sl1pm4t/terraform-provider-kubernetes, which seems to have Deployments and Ingress and is based on a more recent official release.

acobaugh commented Feb 22, 2018

+1 for @sl1pm4t's fork. It seems to be the most complete of the ones I've looked at. I've been using DaemonSets and Deployments in some testing of my own.

sl1pm4t commented Feb 28, 2018

Contributor

FYI @bassrock, @acobaugh
I've opened Issues and PRs on my fork too, so if you find a feature is missing, feel free to open an issue over there.
Hopefully one day we can converge back on this official provider, but I'm not hopeful it will be soon.
Aside from supporting the extra resources (Deployment, DaemonSet, Ingress, etc.), my fork removes some of the seemingly arbitrary limitations imposed in the official one; for example, my fork allows use of internal Kubernetes annotations, whereas this official provider does not.
My fork has been in daily use managing our production infrastructure for the past 6 months.

bassrock commented Feb 28, 2018

@sl1pm4t nice!! Yeah, I started using it in my production environment last week. I was working on a switch from AWS ECS to Google Cloud and Kubernetes and wanted to keep my stuff in Terraform. I needed the Deployments and Ingress features. Thanks for your hard work!!

EdoBarroso commented Mar 6, 2018

Nice work @sl1pm4t!
Hope this provider can get merged soon into the official one, as everybody is now working with Deployments instead of Pods/ReplicationControllers.

mikhail-yarosh commented Mar 15, 2018

Absolutely. This will be a very useful feature.

synhershko commented Mar 17, 2018

Thanks @sl1pm4t, I will try this out this week as well!

@radeksimko it would be nice to hear HashiCorp's plan for keeping this official provider alive and up to speed with Kubernetes' API.

rcrogers commented Apr 17, 2018

For anyone else who's looking, @sl1pm4t's fork also has an example of how to use the new resources:
https://github.com/sl1pm4t/terraform-provider-kubernetes/blob/e8fc10cd13c6bae1dfe1ecd87d785973b242985d/_examples/ingress/main.tf
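The linked example configures an ingress resource; in rough outline it is along these lines (a hedged sketch modeled on the shape the official `kubernetes_ingress` resource later took; the fork's actual schema may differ, so refer to the linked file for the real thing):

```hcl
resource "kubernetes_ingress" "example" {
  metadata {
    name = "example"
  }

  spec {
    rule {
      host = "example.com"

      http {
        path {
          path = "/"

          # Forward matching requests to an existing Service
          backend {
            service_name = "nginx"
            service_port = 80
          }
        }
      }
    }
  }
}
```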

trthomps commented Apr 23, 2018

At this point I can only assume the reason beta/alpha features are not being added is that HashiCorp doesn't like that Kubernetes competes with Consul/Nomad and is purposely gimping the product. The Google provider adds beta features within days of release and has no such rule, since having a rule like this with Google products is absurd: they are notorious for leaving things in "beta" long after people are using said product/feature in production (Gmail, anyone?).

borsboom commented Apr 23, 2018

Deployments aren't even beta anymore. They're now in the apps/v1 API version.

eversC commented May 1, 2018

@radeksimko will the official provider be updated soon, given the k8s Deployment resource is now out of beta?

iorlas commented May 3, 2018

We are moving our whole infrastructure to k8s now, using Terraform. It is a shame that we have to stop or use workarounds. Are Deployments and Services on the plan at least? We are going to manage k8s with different software, I think, but we don't really want to.

stigok commented May 3, 2018

This ship isn't moving. Thankfully I'm having success with sl1pm4t's fork: https://github.com/sl1pm4t/terraform-provider-kubernetes

stefanthorpe commented May 29, 2018

@radeksimko Just under a year ago you mentioned that you would revisit this once it is out of beta. Well, it is, and there are many people waiting for it.
Could we get some kind of official response on this topic?

ktham commented Jul 2, 2018

@radeksimko our team is looking to leverage Terraform for Kubernetes. Are you/the team planning to maintain the Kubernetes provider?

hafizullah commented Jul 17, 2018

I badly need this feature; otherwise I will have to look for alternative solutions. :(

tsadoklevi commented Jul 20, 2018

(EDIT by HashiCorp: we've edited some of the wording below which we felt was not in accordance with our community code of conduct. While the words have been edited, the meaning of the response we intend to keep unchanged.)

HashiCorp, you are probably well aware of this issue. It seems to me that you don't care. The k8s community is already using Deployments, Ingress, etc., and it seems that despite a lot of talk there is no progress on this issue.

Terraform is great, but you are making people mistrust you and hence mistrust the "back" of Terraform.

Please announce your policy regarding the k8s provider: are you going to fully support it or just let it die slowly?

zimbatm commented Jul 20, 2018

@tsadoklevi there is no need to be rude.

That being said, why not accept more maintainers to the repo? There are some active contributors like @sl1pm4t who could help. My impression was that splitting the providers out of the Terraform repo was exactly to allow delegating control more easily. Maybe it's time to take advantage of this.

@paultyng

This comment has been minimized.

Show comment
Hide comment
@paultyng

paultyng Jul 23, 2018

Contributor

@tsadoklevi Your frustrations are well warranted and understood. I promise we're working hard to improve the Kubernetes provider and will outline exactly what we're doing in this post.

Before responding to your concerns: while your point is fair, your tone is not. Whether it is directed at us as a company or any other member of the community, we expect kind discourse. We accept criticism and are happy to respond, but criticism can be delivered constructively without expletives and what may feel like attacks. Because you do raise a fair point, we've filtered your comment and noted that we filtered it (we would never do so secretly). Thank you for raising your concerns.

The Kubernetes provider has been probably the single major point of focus/discussion (non-technically) over the past month. There is a lot of pressure both internally at HashiCorp and externally to improve this quickly. We've already created an improvement plan and roadmap to do so, and are currently looking for developers to work with us to enable it: #178

I want to be absolutely clear that we are disappointed and sorry to the community for the state of this provider. It is important to us, and if we could break down the hours spent over the past couple months you'd see its been something we've spent a disproportionately high amount of time working on. We aren't lying or being deceptive: we care about K8S, we care about this provider, and we want to improve it as quickly as possible.

We're always open to bringing on open source core commmitters (and those committers outnumber full time provider engineers at HashiCorp by more than 10x). There is a challenge here that OSS committers are usually working in their free time and it'd be unfair of us to expect any more. So for a healthy committer environment, they must be supported by full time staff. We're more than happy to merge pull requests, but please understand that hitting the merge button is the easiest thing we can do; the multi-year maintenance with bugs, customer support (paying), feature requests etc. that come with it are the real cost of hitting "merge," and the original PR submitter usually doesn't stick around. Still, we're happy to do that, as long as we have the confidence we can support it. And currently, we need to hire a full time engineer to help us here.

There are a number of forks of this provider and we'd love to work with those owners to bring them in. A lot of the fork owners want this, too. We've reached out to a few of the maintainers (as well as contributors) and asked if they'd be interested in working with HashiCorp on this full time. We got good responses, but due to a number of legal difficulties (see: https://news.ycombinator.com/item?id=17022563) we're blocked. We're at the point though where we're looking to contract these individuals in the interim.

I think we were more optimistic going into this (started a few months ago) that we'd find an FTE quite quickly. That hasn't turned out to be the case and we probably should've engaged community efforts first and pushed some of our own team to substitute for a bit. The latter is easier said than done, since they're all working full time on equally important providers with deep roadmaps.

Note that the Terraform community has been through this pain before. Take the Azure provider as an example. It languished and barely worked 18 months ago. We had similarly upset community and customers and our reasoning was much of the same as the above. We simultaneously engaged Microsoft who have officially partnered with us, brought on core committers, and hired a FTE and very quickly it has become one of our best providers that is being updated frequently, has broad feature coverage, etc. We're filling the same holes now with this provider, but its not easy.

That is the full picture of what's going on. I hope you can understand the situation that we're in.

That was a lot of talk, so what's the action?

  • We're hiring a FTE to help us with this provider: #178
  • We're talking to downstream fork maintainers and contributors about helping us (paid).
  • We've interviewed a number of type of users and formed a clear draft of what we'd like to achieve with this provider in the short term. Note: where "short term" really starts when we have the help to enable it.
  • We are actively looking for community help. This is more recent, we'll support these members in the short term by straining a bit of our team internally that isn't focused on K8S.
  • We will review PRs that come in naturally, but understand that there isn't a dedicated person looking at these currently. Still, in the interim we are looking for ways to allocate time for our other engineers with K8S experience to help out.

We'll try to do better to keep this community up to date via issues and so on going forward.

Contributor

paultyng commented Jul 23, 2018

@tsadoklevi Your frustrations are well warranted and understood. I promise we're working hard to improve the Kubernetes provider and will outline exactly what we're doing in this post.

Before responding to your concerns: while your point is fair, your tone is not. Whether it is directed at us as a company or any other member of the community, we expect kind discourse. We accept criticism and are happy to respond, but criticism can be delivered constructively without expletives and what may feel like attacks. Because you do raise a fair point, we've filtered your comment and noted that we filtered it (we would never do so secretly). Thank you for raising your concerns.

The Kubernetes provider has been probably the single major point of focus/discussion (non-technically) over the past month. There is a lot of pressure both internally at HashiCorp and externally to improve this quickly. We've already created an improvement plan and roadmap to do so, and are currently looking for developers to work with us to enable it: #178

I want to be absolutely clear that we are disappointed and sorry to the community for the state of this provider. It is important to us, and if we could break down the hours spent over the past couple months you'd see its been something we've spent a disproportionately high amount of time working on. We aren't lying or being deceptive: we care about K8S, we care about this provider, and we want to improve it as quickly as possible.

We're always open to bringing on open source core committers (and those committers outnumber full time provider engineers at HashiCorp by more than 10x). There is a challenge here: OSS committers are usually working in their free time, and it'd be unfair of us to expect any more. So for a healthy committer environment, they must be supported by full time staff. We're more than happy to merge pull requests, but please understand that hitting the merge button is the easiest thing we can do; the multi-year maintenance with bugs, (paying) customer support, feature requests, etc. that come with it is the real cost of hitting "merge," and the original PR submitter usually doesn't stick around. Still, we're happy to do that, as long as we have the confidence we can support it. And currently, we need to hire a full time engineer to help us here.

There are a number of forks of this provider and we'd love to work with those owners to bring them in. A lot of the fork owners want this, too. We've reached out to a few of the maintainers (as well as contributors) and asked if they'd be interested in working with HashiCorp on this full time. We got good responses, but due to a number of legal difficulties (see: https://news.ycombinator.com/item?id=17022563) we're blocked. We're at the point though where we're looking to contract these individuals in the interim.

I think we were more optimistic going into this (started a few months ago) that we'd find an FTE quite quickly. That hasn't turned out to be the case and we probably should've engaged community efforts first and pushed some of our own team to substitute for a bit. The latter is easier said than done, since they're all working full time on equally important providers with deep roadmaps.

Note that the Terraform community has been through this pain before. Take the Azure provider as an example: it languished and barely worked 18 months ago. We had a similarly upset community and customers, and our reasoning was much the same as the above. We simultaneously engaged Microsoft, who have officially partnered with us, brought on core committers, and hired an FTE, and very quickly it has become one of our best providers: updated frequently, broad feature coverage, etc. We're filling the same holes now with this provider, but it's not easy.

That is the full picture of what's going on. I hope you can understand the situation that we're in.

That was a lot of talk, so what's the action?

  • We're hiring a FTE to help us with this provider: #178
  • We're talking to downstream fork maintainers and contributors about helping us (paid).
  • We've interviewed a number of types of users and formed a clear draft of what we'd like to achieve with this provider in the short term. Note that the "short term" really starts once we have the help to enable it.
  • We are actively looking for community help. This is more recent, we'll support these members in the short term by straining a bit of our team internally that isn't focused on K8S.
  • We will review PRs that come in naturally, but understand that there isn't a dedicated person looking at these currently. Still, in the interim we are looking for ways to allocate time for our other engineers with K8S experience to help out.

We'll try to do better to keep this community up to date via issues and so on going forward.

Miyurz commented Jul 25, 2018

@paultyng Thank you for keeping the community aware of the progress. Yes, we love Terraform and hence want to see Terraform providers for Deployment and other K8S resources. I understand the delay, as it's hard to catch up with the aggressive K8S release cadence.

Is there any workaround that you or anyone else could suggest (local-exec etc.) so that I can continue to use TF and swap in the provider once it's available?

Phylu commented Jul 25, 2018

@Miyurz
My current workaround for running deployments looks like this:

provisioner "local-exec" {
    command = "echo '${data.template_file.deployment.rendered}' > /tmp/deployment.yaml && kubectl apply --kubeconfig=$HOME/.kube/config -f /tmp/deployment.yaml"
  }

I use a template YAML file that contains the deployment description and is filled in with variables from the Terraform code:

data "template_file" "deployment" {
  template = "${file("${path.module}/deployment.yaml")}"

  vars {
    NAMESPACE                     = "${var.namespace}"
    DB_HOST                       = "${var.db_host}"
    DB_PORT                       = "${var.db_port}"
  }
}
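Putting the two snippets above together, a minimal end-to-end sketch (Terraform 0.11 syntax; the resource name and the hash-based trigger are illustrative, not from this thread) might look like this. Terraform still does not track the resulting Kubernetes objects; kubectl does all the real work.

```hcl
# Sketch only: re-applies the rendered manifest whenever the template
# output changes. The cluster state itself is invisible to Terraform.
resource "null_resource" "deployment" {
  triggers {
    manifest_sha1 = "${sha1(data.template_file.deployment.rendered)}"
  }

  provisioner "local-exec" {
    # Pipe the manifest via stdin instead of writing /tmp/deployment.yaml.
    command = "echo '${data.template_file.deployment.rendered}' | kubectl apply --kubeconfig=$HOME/.kube/config -f -"
  }
}
```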
Contributor

paultyng commented Jul 26, 2018

I have done something similar, essentially a kubectl provisioner that ran my templated YAML files; the only difference being that it was remote-exec in the cluster, to deal with authentication.

borsboom commented Jul 26, 2018

@paultyng If you're still looking for resources, is this something you'd consider hiring an outside contractor to work on and maintain? The company I work for uses both Terraform and Kubernetes heavily, and we've considered jumping into this implementation but have been reluctant due to the amount of future maintenance likely required (we have to choose our battles, and we don't like to just throw new code over the fence and then expect others to maintain it).

We'd certainly much rather be using TF than Helm, but Helm is "good enough" that the itch hasn't been quite strong enough to decide to take on scratching it "for free." But we'd sure be open to some kind of partnership to help this get done and maintained in the future.

Contributor

paultyng commented Jul 27, 2018

@borsboom Our long-term goal is to have one or more full-time employees supporting this, but in the near term we would consider contracting to help out the community and keep it moving. If you are still interested, feel free to email me (ptyng@hashicorp.com).

NickLarsenNZ commented Sep 22, 2018

Any updates on this? It seems to be dragging along too slowly, and as a result the provider is way behind.
The local-exec fallback works, of course, but then the state is not maintained.
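One partial mitigation for that lifecycle gap, as a sketch (assuming Terraform 0.11, where destroy-time provisioners can still reference data sources): add a `when = "destroy"` provisioner so the objects are at least deleted on `terraform destroy`. Drift detection and plan diffs are still missing, so this is not a substitute for a real provider resource.

```hcl
resource "null_resource" "deployment" {
  triggers {
    manifest = "${data.template_file.deployment.rendered}"
  }

  provisioner "local-exec" {
    command = "echo '${data.template_file.deployment.rendered}' | kubectl apply -f -"
  }

  # Best-effort cleanup when the Terraform resource is destroyed;
  # the actual cluster state still lives entirely outside Terraform.
  provisioner "local-exec" {
    when    = "destroy"
    command = "echo '${data.template_file.deployment.rendered}' | kubectl delete -f -"
  }
}
```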
