[kots] Allow for users to provide references to external secrets or service accounts #13452

@mrzarquon

Description

Is your feature request related to a problem? Please describe

Currently, the KOTS install requires that secrets for accessing external services be entered directly into the configuration. This is a problem because KOTS then stores those secrets as static values in its config files.

For any service or solution where secrets are rotated outside of KOTS, a redeploy will overwrite the live secrets with stale credentials from whatever was last saved in the Gitpod configuration in KOTS.

This manifests most immediately in setting up ECR as a registry for Gitpod. Today, someone can configure a batch job to refresh the secret on a periodic basis outside of Gitpod itself; if Gitpod is redeployed, access to ECR will be broken until the periodic job runs again and refreshes the secret.
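The out-of-band refresh described above might look like the following CronJob sketch. All names, the schedule, the region, and the registry address are illustrative, and the image is assumed to have both the AWS CLI and kubectl available:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-secret-refresh        # hypothetical name
  namespace: gitpod
spec:
  # ECR authorization tokens expire after 12 hours, so refresh well before that
  schedule: "0 */6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-secret-refresh   # needs RBAC to manage the secret
          restartPolicy: OnFailure
          containers:
            - name: refresh
              image: example.com/aws-cli-kubectl:latest   # illustrative image with aws + kubectl
              command:
                - /bin/sh
                - -c
                - |
                  kubectl create secret docker-registry ecr-pull-secret \
                    --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
                    --docker-username=AWS \
                    --docker-password="$(aws ecr get-login-password --region us-east-1)" \
                    --dry-run=client -o yaml | kubectl apply -f -
```

Any redeploy that overwrites `ecr-pull-secret` with the stale value stored in KOTS breaks image pulls until the next scheduled run.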

Describe the behaviour you'd like

In fields where a username/password or certificate is requested from the user, one should be able to select "use existing Kubernetes secret" and provide the name of a secret to be used.
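A sketch of what this could look like in a KOTS Config spec, using KOTS's existing `when` / `ConfigOptionEquals` templating to toggle between the two modes (the group and field names are hypothetical):

```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: gitpod-config
spec:
  groups:
    - name: registry
      title: Container Registry
      items:
        - name: reg_use_existing_secret          # hypothetical option name
          title: Use existing Kubernetes secret
          type: bool
          default: "0"
        - name: reg_existing_secret_name         # name of a pre-created secret
          title: Existing secret name
          type: text
          when: '{{repl ConfigOptionEquals "reg_use_existing_secret" "1"}}'
        - name: reg_password                     # only shown when not referencing a secret
          title: Registry password
          type: password
          when: '{{repl ConfigOptionEquals "reg_use_existing_secret" "0"}}'
```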

The same goes for service accounts, pending support for IAM-style passwordless / real-time credential lookup. That is achieved with a ClusterRoleBinding to a Kubernetes service account, which then exposes credentials for those services via additional secrets mounted into that service's pod.
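As a sketch, the service-account path could mean Gitpod accepting the name of a pre-existing service account like the one below, with the IAM association and role binding created out of band by the platform team. The names and ARN are illustrative, and EKS's IAM-roles-for-service-accounts annotation is used as one example of passwordless access:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitpod-registry-access     # hypothetical, created outside of KOTS
  namespace: gitpod
  annotations:
    # EKS IRSA: pods using this service account get short-lived IAM credentials
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/gitpod-ecr-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitpod-registry-access     # hypothetical
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gitpod-registry-access     # ClusterRole defined elsewhere
subjects:
  - kind: ServiceAccount
    name: gitpod-registry-access
    namespace: gitpod
```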

For certificates, this also means a user can perform their certificate generation / Let's Encrypt verification (or use another third-party CA that works with cert-manager, like Venafi) out of band of the installation and then provide the finished certificate to Gitpod. The certificate will still be renewed via cert-manager for them, and they don't have to worry about an expired SSL certificate embedded in their config file.
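For example, a user could manage a cert-manager Certificate like the following out of band, and Gitpod would only need the name of the resulting secret. The domain, issuer, and resource names here are illustrative:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gitpod-https               # hypothetical
  namespace: gitpod
spec:
  secretName: https-certificates   # Gitpod would reference this secret by name
  dnsNames:
    - gitpod.example.com
    - "*.gitpod.example.com"
  issuerRef:
    name: letsencrypt-prod         # could also be a Venafi or other cert-manager issuer
    kind: ClusterIssuer
```

cert-manager keeps renewing the secret on its own schedule, so the installation never holds an expired certificate.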

Describe alternatives you've considered

Most of them amount to "deploy Gitpod in a technically functional way", then patch components either before the installation fails or, if it succeeds, patch things afterwards and never hit redeploy in KOTS until ready to repeat that patching and the extra steps.

Additional context

Right now we're fighting our own installer because we don't expose open-ended configuration options or ways to pass extra data to the underlying components.

Cloud best practices call for not storing secrets in plaintext and for making rotation possible; role bindings to service accounts and other utilities that rotate secrets for us exist to solve exactly these problems. Hard-coding values in our configuration file just so our installer doesn't have to support Kubernetes API calls directly (while we already shell out to kubectl to perform these actions) will keep creating more work for us, and will make Gitpod harder to deploy in situations where it has to conform to internal security requirements or be removed.

Metadata

    Labels

    meta: stale (This issue/PR is stale and will be closed soon), self-hosted
