
core.v1.Secret.metadata.name should not be a secret output #1464

Closed · Gerrit-K opened this issue Feb 5, 2021 · 2 comments

Assignees: lblackstone
Labels: kind/bug (Some behavior is incorrect or out of spec) · last-applied-configuration (Issues related to the last-applied-configuration annotation) · resolution/fixed (This issue was fixed)

Gerrit-K commented Feb 5, 2021

Problem description

When creating a core.v1.Secret, you can access its (auto-generated) name via secret.metadata.name. Unfortunately, if one of the inputs to the resource is marked as secret, the whole v1.Secret is marked as secret, including its metadata output and thus also its name.

While I don't know why the "is-secret" attribute leaks from the data or stringData input to the metadata output, I can give a very simple example where this is really unpleasant: mounting secrets into a pod's environment. You basically have two choices:

  1. You connect secretKeyRef.name to secret.metadata.name, which leads to the whole container being marked as secret (see the sketch after this list). This prevents you from previewing important diffs, e.g. to the container environment, volume mounts, etc.
  2. You set secret.metadata.name to a hardcoded value and use that same value for secretKeyRef.name. This eliminates the "is-secret" status from the output, but then (1) you lose Pulumi's auto-naming and (2) the dependency between the secret and the pod is lost, which can potentially cause issues on updates.
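
To make choice 1 concrete, here is a minimal sketch; the resource names, image, and config key are made up for illustration and are not from the original report:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// A Secret whose stringData comes from a secret config value. Because one
// input is secret, the whole Secret output (including metadata) is marked secret.
const dbSecret = new k8s.core.v1.Secret("db-credentials", {
    stringData: {
        password: config.requireSecret("dbPassword"),
    },
});

// Choice 1: wire secretKeyRef.name to the auto-generated name. Since
// dbSecret.metadata is itself secret, the env entry and, transitively, the
// container spec are rendered as [secret] in previews.
const pod = new k8s.core.v1.Pod("app", {
    spec: {
        containers: [{
            name: "app",
            image: "nginx",
            env: [{
                name: "DB_PASSWORD",
                valueFrom: {
                    secretKeyRef: {
                        name: dbSecret.metadata.name, // carries the secret flag
                        key: "password",
                    },
                },
            }],
        }],
    },
});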

IMO secret.metadata.name should not be treated as a secret output in that case, but as I said, I don't know if there might be a good reason for that choice. My current assumption is that this has something to do with the last-applied-configuration annotation (which itself seems to cause some controversy in several currently open issues here), but I'm not sure.

I'd be thankful if there were some kind of workaround for this that would let us diff the container updates correctly again.
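
A possible workaround, assuming a @pulumi/pulumi SDK version that includes pulumi.unsecret, would be to unwrap only the name: unsecret strips the secret flag from a single output while keeping the dependency on the underlying resource. A minimal sketch, using made-up names:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

const secret = new k8s.core.v1.Secret("secret-test", {
    stringData: { foo: config.requireSecret("foo") },
});

// Unwrap only the name: the resulting output still depends on the Secret
// resource (so ordering on updates is preserved), but it is no longer
// rendered as [secret] where it is consumed, e.g. in a secretKeyRef.name.
const plainSecretName = pulumi.unsecret(secret.metadata.name);
export const secretName = plainSecretName;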

Reproducing the issue

This code should illustrate the issue. If you switch the commented lines around secretValue, you should see how the exported secretName changes between being printed as a regular output and as a secret output.

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const name = "secret-test";
const namespace = new k8s.core.v1.Namespace(name, {
    metadata: { name },
});

const config = new pulumi.Config();

// if this is used, secret.metadata.name will be a regular output
const secretValue = "foo";
// if this is used, secret.metadata.name will be secret
// const secretValue = config.requireSecret("foo");

const key = "foo";
const secret = new k8s.core.v1.Secret(name, {
    metadata: { namespace: namespace.metadata.name },
    stringData: {
        [key]: secretValue,
    },
});

export const secretName = secret.metadata.name;

Funnily enough, when switching from non-secret to secret, the preview before the prompt in pulumi up even shows the actual secret name, but if you select "yes" and let it update the secret, the final output will be [secret]:

(Screenshot: the pulumi up preview shows the plaintext secret name, while the final stack output shows [secret].)

@lblackstone lblackstone self-assigned this Feb 5, 2021
lblackstone (Member) commented:

Yeah, this looks like a side effect of the last-applied-configuration annotation. I can't recall all of the details, but I believe that if any value in a map is secret, the entire map is marked secret. What I suspect is happening is that the secret value appears in the annotation after running an update, and then taints the entire metadata map.
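
To make the suspected mechanism concrete: the kubectl.kubernetes.io/last-applied-configuration annotation holds a JSON serialization of the applied inputs (for a Secret, that includes the data/stringData values), and it lives under metadata.annotations. Roughly, the tracked object ends up shaped like the following sketch; this is an illustration only, and the name suffix is hypothetical:

// Not the provider's actual code, just the approximate shape of the live object.
// The secret value is serialized into an annotation under metadata, so marking
// it secret taints the entire metadata map, including metadata.name.
const liveSecretShape = {
    apiVersion: "v1",
    kind: "Secret",
    metadata: {
        name: "secret-test-abc123", // hypothetical auto-generated name
        namespace: "secret-test",
        annotations: {
            "kubectl.kubernetes.io/last-applied-configuration": JSON.stringify({
                apiVersion: "v1",
                kind: "Secret",
                metadata: { namespace: "secret-test" },
                stringData: { foo: "<secret value>" },
            }),
        },
    },
    stringData: { foo: "<secret value>" },
};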

Unfortunately, I'm not sure what to suggest for a good workaround at this time. It seems like we may need to rethink our use of the last-applied-configuration annotation, which is currently used for client-side diffing and for improving integration with kubectl.

lblackstone (Member) commented:

I wasn't able to repro with the v4 provider, so it looks like this was fixed by #2445.

@lblackstone lblackstone added the last-applied-configuration Issues related to the last-applied-configuration annotation label Jul 18, 2023