
Why has destination/credential datasources been removed? #16

Closed
log1cb0mb opened this issue Dec 15, 2022 · 10 comments

Comments

@log1cb0mb

Hey,

Any explanation as to why this has been removed:
d524c51

It was done silently, as I cannot seem to find any reference to it in the release notes.

What am I missing? What is the alternative, i.e. how do I use providers such as helm/kubernetes with Infra?

@log1cb0mb log1cb0mb changed the title Why has remove destination/credential datasources been removed? Why has destination/credential datasources been removed? Dec 15, 2022
@mxyng
Collaborator

mxyng commented Dec 15, 2022

Hi @log1cb0mb, that's a great question. For the first versions of the Infra provider, we're trying to find what works in the context of Terraform and what doesn't.

The destination/credential data sources were removed because we felt they didn't fit into what our provider should be, at least in its current incarnation. The main issue with using Infra to provide access to Kubernetes through the Terraform Kubernetes provider is one of sequencing: in order to access Kubernetes through Infra, an Infra connector must first be installed.

If this is done through Terraform, the user already has some means of creating Kubernetes credentials, such as aws_eks_cluster and aws_eks_cluster_auth. Therefore, Infra's data sources are only useful as a secondary source of access.
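For instance, a typical first plan already wires cluster credentials straight into the Kubernetes provider, without Infra in the loop (a minimal sketch using the AWS provider's EKS data sources; the cluster name is a placeholder):

```hcl
# Look up the EKS cluster created in this (or an earlier) plan.
data "aws_eks_cluster" "this" {
  name = "example-cluster"
}

# Short-lived authentication token for the same cluster.
data "aws_eks_cluster_auth" "this" {
  name = "example-cluster"
}

# The Kubernetes provider can reuse these credentials directly.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```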

This is not to say either data source is gone forever. If there's a strong use case and sufficient interest, we'd be happy to reassess their inclusion.

PS: Apologies for not mentioning this in the release notes. It was a mistake and not intentional. I've amended the v0.2.0 notes.

@log1cb0mb
Author

log1cb0mb commented Dec 15, 2022

@mxyng Thank you for providing details.

I do recognise the issue with sequencing and the infra-connector dependency, for which I put a workaround in place just last week: essentially a separate TF codebase that depends on a kube cluster being up and the infra-connector being installed first, so that the Infra server has the target cluster's information.

Now, I am not sure I completely follow your point about Infra being the secondary source of access and cluster credentials. As I understand it, the main (probably the only) use case for the Kubernetes connector is self-managed Kubernetes, not managed/public solutions, since those provide their own auth and access management out of the box.

With self-managed Kubernetes, Infra becomes a powerful and critical add-on, being for the most part the only authentication and access management solution, unless someone wants to manage all the messy Kubernetes RBAC on their own.

That ^ is exactly my use case: Kubernetes cluster access only through Infra, with no additional/manual credential creation for end users, no distributing kubeconfigs, etc.

With that said, if Infra is the only and best possible method for access management, especially for end application/user teams (they don't have the default kube admin config, for obvious reasons), it is expected that those teams will need e.g. the Helm or Kubernetes TF providers to deploy and manage their applications, assuming they must use TF.

@mxyng
Collaborator

mxyng commented Dec 15, 2022

Thanks for sharing! This is great feedback.

To clarify "secondary source of access": I meant in terms of Terraform. As you mentioned, you need two separate Terraform plans: an initial plan to create the cluster and deploy the connector, and a second plan to access the cluster using Infra. It's in this second plan that the Infra destination and credential data sources have an impact.
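That second plan would have leaned on the removed data sources to configure downstream providers. The following is a purely hypothetical sketch of what that looked like: the `infra_destination`/`infra_credential` names and their attributes are assumptions about the pre-0.2.0 provider, not a documented API:

```hcl
# Hypothetical: look up the cluster registered with the Infra server
# by the connector that the first plan installed.
data "infra_destination" "cluster" {
  name = "example-cluster"
}

# Hypothetical: mint a short-lived credential for that destination.
data "infra_credential" "me" {
  destination = data.infra_destination.cluster.name
}

# The Kubernetes provider would then authenticate via Infra instead
# of via cloud-provider credentials.
provider "kubernetes" {
  host                   = data.infra_destination.cluster.url
  cluster_ca_certificate = data.infra_destination.cluster.ca
  token                  = data.infra_credential.me.token
}
```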

The first plan requires some form of access to install the connector. That access can usually be reused for subsequent Kubernetes/Helm providers, which reduces the usefulness of Infra as a Terraform access provider.

@log1cb0mb
Author

log1cb0mb commented Dec 15, 2022

Yes, I understand that process, essentially what happens when a Kubernetes cluster is initially being brought up. That chicken-and-egg problem, I get that.

Now, let's consider the scenario I mentioned in my comment, where the operations team has brought up a cluster and taken care of deploying the infra-connector, however that is handled.

Once that is done, they need to hand over cluster access to application/customer teams, and as I mentioned, if Infra is the only or primary solution for access (because why not, as it simplifies RBAC management and is also the very reason for the operations team to use Infra in the first place), there is a high chance that those teams need to use TF for their application deployments. At that point, the operations team has already taken care of the infra-connector etc.

Is that not a valid use case for this feature, with of course an exception, or a hard requirement, that a user cannot use the Infra TF provider against a target cluster until an infra-connector is deployed there?

I should mention that in this scenario the application teams' TF codebase is obviously completely separate from the operations team's codebase that brought up the cluster; they are entirely independent of each other. In other words, the operations team is providing Kubernetes platform as a service.

@mxyng
Collaborator

mxyng commented Dec 15, 2022

It's definitely a valid use case. We just weren't sure how common this is in the wild.

Let me speak to the team and get back to you.

@mxyng
Collaborator

mxyng commented Dec 15, 2022

After some discussion, we've decided to hold off on adding this back. The main reason is that the API used for creating the credential needs some work to function well in a use case like Terraform.

Specifically, the created credential has a lifetime of 5 minutes. While this is sufficient for a use case like kubectl, where each request may ask for a new set of credentials, it doesn't translate well to Terraform: any sufficiently large Terraform plan may error partway through due to an expired credential.

As we improve Infra as a service, updates will be made to supporting projects like this Terraform provider once we feel the quality of the results is up to our standards.

For now, if you wish to keep using Infra to access your cluster through Terraform, you can keep using version 0.1.2 of the provider, though I would suggest only using that version for this use case.
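Pinning the provider to that release looks like this (a sketch assuming the registry source address `infrahq/infra`; adjust the source to whatever your configuration already uses):

```hcl
terraform {
  required_providers {
    infra = {
      source = "infrahq/infra"
      # Pin to the last release that still ships the
      # destination/credential data sources.
      version = "0.1.2"
    }
  }
}
```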

@log1cb0mb
Author

That sounds good.

With 0.1.2, it should not affect the Infra server versioning, I assume? Will it restrict the Infra server version, or will the feature in question and related resources stop working if the Infra server is running a newer version?

@mxyng
Collaborator

mxyng commented Dec 15, 2022

It does not affect the server version. The Terraform provider specifies an API version, which the server will automatically migrate when necessary if there are differences.

@log1cb0mb
Author

Perfect! 👍🏻

@mxyng
Collaborator

mxyng commented Dec 16, 2022

Closing in favour of #17, which will be used for tracking.

@mxyng mxyng closed this as completed Dec 16, 2022