
Get all k/v pairs from endpoint #529

Open
dirtycajunrice opened this issue May 6, 2021 · 34 comments
Assignees
Labels
feature/sync kind/feature Categorizes issue or PR as related to a new feature.

Comments

@dirtycajunrice

Motivation
Some applications require a significant amount of sensitive configuration. Declaring each key individually becomes extremely tedious and adds redundancy and toil that could be avoided with the same functionality that envFrom provides in Kubernetes core, and that established solutions like external-secrets already offer.
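For context, envFrom in Kubernetes core lets a pod ingest every key of a Secret as environment variables without naming each one. A minimal sketch (the image and Secret names are illustrative):

```yaml
# Pod spec fragment: every key in the Secret "my-app" becomes an env var,
# with no per-key mapping required.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest   # illustrative image name
      envFrom:
        - secretRef:
            name: my-app     # all k/v pairs are injected at once
```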

Describe the solution you'd like
Two separate requests:

  1. Make secretKey optional. If the user has no intention of renaming the secret key, and does not need a GET request scoped to that single key, the declaration is redundant and quickly adds up.
  2. Allow all keys to be ingested from an endpoint.

A practical example with only 5 k/v pairs currently looks like:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secret
  namespace: default
spec:
  provider: vault
  parameters:
    roleName: "csi-secrets-store"
    vaultAddress: https://vault.company.tld
    vaultKubernetesMountPath: kubernetes/eks-use1-sre-prod
    objects: |
      - objectName: PG_DB_PASSWORD
        secretKey: PG_DB_PASSWORD
        secretPath: kv-v2/data/my-app
      - objectName: APP_TOKEN
        secretKey: APP_TOKEN
        secretPath: kv-v2/data/my-app
      - objectName: OAUTH_CLIENT_ID
        secretKey: OAUTH_CLIENT_ID
        secretPath: kv-v2/data/my-app
      - objectName: OAUTH_SECRET
        secretKey: OAUTH_SECRET
        secretPath: kv-v2/data/my-app
      - objectName: SMTP_PASSWORD
        secretKey: SMTP_PASSWORD
        secretPath: kv-v2/data/my-app
  secretObjects:
    - type: Opaque
      secretName: my-app
      data:
        - key: PG_DB_PASSWORD
          objectName: PG_DB_PASSWORD
        - key: APP_TOKEN
          objectName: APP_TOKEN
        - key: OAUTH_CLIENT_ID
          objectName: OAUTH_CLIENT_ID
        - key: OAUTH_SECRET
          objectName: OAUTH_SECRET
        - key: SMTP_PASSWORD
          objectName: SMTP_PASSWORD

When all you should really need is:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secret
  namespace: default
spec:
  provider: vault
  parameters:
    roleName: "csi-secrets-store"
    vaultAddress: https://vault.company.tld
    vaultKubernetesMountPath: kubernetes/eks-use1-sre-prod
    objectListFrom: 
      - name: my-app-keys
        secretPath: kv-v2/data/my-app
  secretObjects:
    - type: Opaque
      secretName: my-app
      dataFrom:
        - objectList: my-app-keys
@dirtycajunrice dirtycajunrice added the kind/feature Categorizes issue or PR as related to a new feature. label May 6, 2021
@tam7t
Contributor

tam7t commented May 11, 2021

Everything within parameters on a SecretProviderClass is opaque to the driver and interpreted by the individual providers. It may be good for providers to evaluate the feasibility of this against their respective APIs; I believe similar issues are open for AWS and Azure at this point.

If there are commonalities between the solutions then we can investigate feasibility of a shared feature of the driver itself.

@dirtycajunrice
Author

dirtycajunrice commented May 12, 2021

@tam7t I completely understand that perspective. The secretObjects are part of the driver itself though, correct? If an option like "accept an array/list as an object" is not available from the driver, then the providers will not implement it on their side, since anything they passed along would "go nowhere", right? Thoughts?

@tam7t
Contributor

tam7t commented May 12, 2021

Ah yes, secretObjects is defined by the driver and used by the K8s sync feature to map relative file paths to Secret key/values, so changes that allow mapping multiple file paths to keys would require driver changes.
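To make that mapping concrete, here is a minimal pod fragment using the driver (mount path and names are illustrative): each object the provider fetches is written as a file under the CSI mount, and secretObjects in the SecretProviderClass maps those file names to keys of a synced Kubernetes Secret.

```yaml
# Pod spec fragment: the CSI driver writes each fetched object as a file;
# secretObjects maps file name -> key in the synced Secret.
volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-secret   # the SPC from the example above
containers:
  - name: app
    image: my-app:latest                 # illustrative
    volumeMounts:
      - name: secrets-store
        mountPath: /mnt/secrets-store    # files like PG_DB_PASSWORD land here
        readOnly: true
```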

@manedurphy
Contributor

/assign

@ritazh
Member

ritazh commented Jun 10, 2021

+1

Looking forward to the detailed design doc so we can discuss some of the nuances this feature may bring.

Specifically, today secretObjects is a static list in the SPC CR: the controller loops through secretObjects and creates Secrets from the content of the mounted volume that the provider retrieved from the external secrets store. There is an assumption that the object (file) exists in the mount; otherwise the controller returns an error and the reconciler retries. If that assumption no longer holds, how does the controller know whether a missing file is intentional or whether it should retry the reconcile loop?

With this feature request/proposal, the list of objects is maintained outside of the SPC CR via objectListFrom, which makes it harder for the provider to validate whether the right object actually exists in the external source, or whether it has the right permission to access that object. As a result, error handling becomes harder: the mount would succeed and the application may fail silently. I think there are things we can add to make sure this is addressed. Let's discuss this more in the design doc.

@jacobbeasley

Perhaps not directly related, but I wonder if inspiration could be drawn from the Vault Injector sidecar's options, such as its use of template files to extract and format a Vault secret into a form that most applications can consume natively (such as a .env file or a bash script with exported env vars).

https://www.vaultproject.io/docs/platform/k8s/injector#secret-templates
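For reference, the injector approach linked above can render all keys of a secret with a template loop rather than naming each one. A sketch using the documented agent-inject annotations (the role name and paths are illustrative):

```yaml
# Pod annotations for the Vault Agent injector: render every k/v pair in
# kv-v2/data/my-app as exported env vars in a single file (sketch).
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "csi-secrets-store"          # illustrative role
    vault.hashicorp.com/agent-inject-secret-env: "kv-v2/data/my-app"
    vault.hashicorp.com/agent-inject-template-env: |
      {{- with secret "kv-v2/data/my-app" -}}
      {{- range $k, $v := .Data.data }}
      export {{ $k }}="{{ $v }}"
      {{- end }}
      {{- end }}
```

The application then sources the rendered file; the key point is that the template iterates over all pairs, which is the same ergonomics this issue asks for.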

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2021
@aramase
Member

aramase commented Jan 3, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 3, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2022
@aramase
Member

aramase commented Apr 4, 2022

/remove-lifecycle stale

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2022
@nilekhc
Contributor

nilekhc commented Jul 5, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 5, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 3, 2022
@aramase
Member

aramase commented Oct 3, 2022

/remove-lifecycle stale

@agates4

agates4 commented Jan 3, 2023

This feature would reduce security concerns massively and will be widely celebrated!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2023
@agates4

agates4 commented Apr 3, 2023

This really shouldn’t be a stale issue.

@simonmarty
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2023
@simonmarty
Contributor

We're seeing strong traction for this request in the AWS provider (issue)

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 9, 2023
@agates4

agates4 commented Jul 9, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 9, 2023
@msitworld

Hi, is there any update regarding this suggested feature?

@odarriba

odarriba commented Oct 5, 2023

Is there any update on this issue? It would really help for deployments made with reusable code that creates common K8s entities (namespaces, secret stores, etc.).

The workaround is currently a bit hacky (at least with the AWS driver).

@zarcen

zarcen commented Nov 10, 2023

Would like to bump this issue, as it would be a really great feature to have.

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2024
@pierluigilenoci
Contributor

/remove-lifecycle stale

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 12, 2024
@pierluigilenoci
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 13, 2024
@sbaugh-rh

Bump. This would be epic.

@allanian

allanian commented Aug 2, 2024

+1

@agates4

agates4 commented Aug 3, 2024

+10
