Pulumi Up behavior differs between machines #11246

Closed

zeljkobekcic opened this issue Nov 3, 2022 · 2 comments
Labels: kind/bug, needs-triage, resolution/no-repro

Comments

zeljkobekcic commented Nov 3, 2022

What happened?

In our project we encountered strange behaviour from Pulumi.

A new coworker cloned the code from our repository and has all the same versions we do, the same state file (stored in an S3 bucket), etc., yet Pulumi prompts on `pulumi up` to delete some resources.

Here is the output of `pulumi preview`:

```
     Type                              Name                   Plan       Info
     pulumi:pulumi:Stack               our-product-stuff
 ~   ├─ kubernetes:helm.sh/v3:Release  linkerd-crds           update     [diff: -resourceNames]
 ~   ├─ kubernetes:helm.sh/v3:Release  linkerd-control-plane  update     [diff: -resourceNames]
 ~   └─ kubernetes:helm.sh/v3:Release  linkerd-multicluster   update     [diff: -resourceNames]
```

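The `resourceNames` shown in the diff is the `Release` resource's provider-computed output, mapping resource kinds to the Kubernetes object names rendered from the chart; its removal here means the charts rendered differently (or not at all) on the new machine. A minimal sketch for dumping it so it can be compared across machines, where `rel` stands in for any of the `Release` instances defined below:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Log the provider-computed `resourceNames` output of a Release so the set
// of rendered objects can be diffed across machines. Illustrative only.
function logResourceNames(rel: k8s.helm.v3.Release): void {
  rel.resourceNames.apply((names) =>
    console.log(JSON.stringify(names, null, 2))
  );
}
```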
The expanded diff:

```
Previewing update (prod):
  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:prod::infra::pulumi:pulumi:Stack::infra-prod]
    > pulumi:pulumi:StackReference: (read)
        [id=infra]
        [urn=urn:pulumi:prod::infra::pulumi:pulumi:StackReference::infra]
        name: "infra"
    ~ kubernetes:helm.sh/v3:Release: (update)
        [id=linkerd/linkerd-crds]
        [urn=urn:pulumi:prod::infra::kubernetes:helm.sh/v3:Release::linkerd-crds]
        [provider=urn:pulumi:prod::infra::pulumi:providers:kubernetes::default_3_21_4::086e4490-9af1-4939-8bc6-613a867d731d]
      - resourceNames: {
          - CustomResourceDefinition.apiextensions.k8s.io/apiextensions.k8s.io/v1: [
          -     [0]: "authorizationpolicies.policy.linkerd.io"
          -     [1]: "httproutes.policy.linkerd.io"
          -     [2]: "meshtlsauthentications.policy.linkerd.io"
          -     [3]: "networkauthentications.policy.linkerd.io"
          -     [4]: "serverauthorizations.policy.linkerd.io"
          -     [5]: "servers.policy.linkerd.io"
          -     [6]: "serviceprofiles.linkerd.io"
            ]
        }
    ~ kubernetes:helm.sh/v3:Release: (update)
        [id=linkerd/linkerd-control-plane]
        [urn=urn:pulumi:prod::infra::kubernetes:helm.sh/v3:Release::linkerd-control-plane]
        [provider=urn:pulumi:prod::infra::pulumi:providers:kubernetes::default_3_21_4::086e4490-9af1-4939-8bc6-613a867d731d]
      - resourceNames: {
          - ClusterRole.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1                         : [
          -     [0]: "linkerd-heartbeat"
          -     [1]: "linkerd-linkerd-destination"
          -     [2]: "linkerd-linkerd-identity"
          -     [3]: "linkerd-linkerd-proxy-injector"
          -     [4]: "linkerd-policy"
            ]
          - ClusterRoleBinding.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1                  : [
          -     [0]: "linkerd-destination-policy"
          -     [1]: "linkerd-heartbeat"
          -     [2]: "linkerd-linkerd-destination"
          -     [3]: "linkerd-linkerd-identity"
          -     [4]: "linkerd-linkerd-proxy-injector"
            ]
          - ConfigMap/v1                                                                               : [
          -     [0]: "linkerd/linkerd-config"
          -     [1]: "linkerd/linkerd-identity-trust-roots"
            ]
          - CronJob.batch/batch/v1                                                                     : [
          -     [0]: "linkerd/linkerd-heartbeat"
            ]
          - Deployment.apps/apps/v1                                                                    : [
          -     [0]: "linkerd/linkerd-destination"
          -     [1]: "linkerd/linkerd-identity"
          -     [2]: "linkerd/linkerd-proxy-injector"
            ]
          - MutatingWebhookConfiguration.admissionregistration.k8s.io/admissionregistration.k8s.io/v1  : [
          -     [0]: "linkerd-proxy-injector-webhook-config"
            ]
          - Role.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1                                : [
          -     [0]: "linkerd/linkerd-heartbeat"
            ]
          - RoleBinding.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1                         : [
          -     [0]: "linkerd/linkerd-heartbeat"
            ]
          - Secret/v1                                                                                  : [
          -     [0]: "linkerd/linkerd-identity-issuer"
          -     [1]: "linkerd/linkerd-policy-validator-k8s-tls"
          -     [2]: "linkerd/linkerd-proxy-injector-k8s-tls"
          -     [3]: "linkerd/linkerd-sp-validator-k8s-tls"
            ]
          - Service/v1                                                                                 : [
          -     [0]: "linkerd/linkerd-dst"
          -     [1]: "linkerd/linkerd-dst-headless"
          -     [2]: "linkerd/linkerd-identity"
          -     [3]: "linkerd/linkerd-identity-headless"
          -     [4]: "linkerd/linkerd-policy"
          -     [5]: "linkerd/linkerd-policy-validator"
          -     [6]: "linkerd/linkerd-proxy-injector"
          -     [7]: "linkerd/linkerd-sp-validator"
            ]
          - ServiceAccount/v1                                                                          : [
          -     [0]: "linkerd/linkerd-destination"
          -     [1]: "linkerd/linkerd-heartbeat"
          -     [2]: "linkerd/linkerd-identity"
          -     [3]: "linkerd/linkerd-proxy-injector"
            ]
          - ValidatingWebhookConfiguration.admissionregistration.k8s.io/admissionregistration.k8s.io/v1: [
          -     [0]: "linkerd-policy-validator-webhook-config"
          -     [1]: "linkerd-sp-validator-webhook-config"
            ]
        }
    ~ kubernetes:helm.sh/v3:Release: (update)
        [id=linkerd-multicluster/linkerd-multicluster]
        [urn=urn:pulumi:prod::infra::kubernetes:helm.sh/v3:Release::linkerd-multicluster]
        [provider=urn:pulumi:prod::infra::pulumi:providers:kubernetes::default_3_21_4::086e4490-9af1-4939-8bc6-613a867d731d]
      - resourceNames: {
          - ClusterRole.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1       : [
          -     [0]: "linkerd-multicluster/linkerd-service-mirror-remote-access-default"
            ]
          - ClusterRoleBinding.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1: [
          -     [0]: "linkerd-multicluster/linkerd-service-mirror-remote-access-default"
            ]
          - CustomResourceDefinition.apiextensions.k8s.io/apiextensions.k8s.io/v1    : [
          -     [0]: "links.multicluster.linkerd.io"
            ]
          - Job.batch/batch/v1                                                       : [
          -     [0]: "namespace-metadata"
            ]
          - Role.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1              : [
          -     [0]: "namespace-metadata"
            ]
          - RoleBinding.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1       : [
          -     [0]: "namespace-metadata"
            ]
          - Secret/v1                                                                : [
          -     [0]: "linkerd-multicluster/linkerd-service-mirror-remote-access-default-token"
            ]
          - Server.policy.linkerd.io/policy.linkerd.io/v1beta1                       : [
          -     [0]: "linkerd-multicluster/gateway-proxy-admin"
          -     [1]: "linkerd-multicluster/service-mirror"
          -     [2]: "linkerd-multicluster/service-mirror-proxy-admin"
            ]
          - ServerAuthorization.policy.linkerd.io/policy.linkerd.io/v1beta1          : [
          -     [0]: "linkerd-multicluster/proxy-admin"
          -     [1]: "linkerd-multicluster/service-mirror"
          -     [2]: "linkerd-multicluster/service-mirror-proxy-admin"
            ]
          - ServiceAccount/v1                                                        : [
          -     [0]: "linkerd-multicluster/linkerd-service-mirror-remote-access-default"
          -     [1]: "namespace-metadata"
            ]
        }
Resources:
    ~ 3 to update
    10 unchanged
```
And here is the related code:

```typescript
import * as k8s from "@pulumi/kubernetes";
import { Release } from "@pulumi/kubernetes/helm/v3";
import { Input } from "@pulumi/pulumi";
import * as tls from "@pulumi/tls";

export const privateKeyCa = new tls.PrivateKey("linkerd-mtls-private-key-ca", {
  algorithm: "ECDSA",
  ecdsaCurve: "P256",
});

export const certCa = new tls.SelfSignedCert("linkerd-mtls-ca-cert", {
  privateKeyPem: privateKeyCa.privateKeyPem,
  allowedUses: [
    "digital_signature",
    "cert_signing",
    "client_auth",
    "server_auth",
    "any_extended",
  ],
  validityPeriodHours: 24 * 7 * 52 * 2, // 2 years
  isCaCertificate: true,
  subject: {
    commonName: "linkerd-root",
  },
});

export const privateKeyIntermediate = new tls.PrivateKey(
  "linkerd-mtls-private-key-intermediate",
  {
    algorithm: "ECDSA",
    ecdsaCurve: "P256",
  }
);

export const certRequestIntermediate = new tls.CertRequest(
  "linkerd-mtls-intermediate-certrequest",
  {
    privateKeyPem: privateKeyIntermediate.privateKeyPem,
    subject: {
      commonName: "linkerd-intermediate",
    },
  }
);

export const certIntermediate = new tls.LocallySignedCert(
  "linkerd-mtls-intermediate-cert",
  {
    allowedUses: [
      "digital_signature",
      "cert_signing",
      "client_auth",
      "server_auth",
      "any_extended",
    ],
    caCertPem: certCa.certPem,
    caPrivateKeyPem: privateKeyCa.privateKeyPem,
    certRequestPem: certRequestIntermediate.certRequestPem,
    validityPeriodHours: 24 * 7 * 52 * 2, // 2 years
    isCaCertificate: true,
  }
);

const linkerdNamespace = new k8s.core.v1.Namespace("linkerd-namespace", {
  metadata: {
    name: "linkerd",
  },
});

const linkerdCrds = () =>
  new k8s.helm.v3.Release(
    "linkerd-crds",
    {
      chart: "linkerd-crds",
      version: "1.4.0",
      repositoryOpts: {
        repo: "https://helm.linkerd.io/stable",
      },
      namespace: "linkerd",
      name: "linkerd-crds",
      atomic: true,
      timeout: 60,
      skipAwait: false,
    },
    { dependsOn: linkerdNamespace }
  );

const linkerdCP = (
  crds: k8s.helm.v3.Release,
  caCertPem: Input<string>,
  intCertPem: Input<string>,
  intPrivkeyPem: Input<string>
) =>
  new k8s.helm.v3.Release(
    "linkerd-control-plane",
    {
      chart: "linkerd-control-plane",
      version: "1.9.3",
      repositoryOpts: {
        repo: "https://helm.linkerd.io/stable",
      },
      namespace: "linkerd",
      name: "linkerd-control-plane",
      atomic: true,
      timeout: 150,
      values: {
        imagePullSecrets: [{ name: "regcred" }],
        cniEnabled: false,
        identityTrustAnchorsPEM: caCertPem,
        identity: {
          issuer: {
            tls: {
              crtPEM: intCertPem,
              keyPEM: intPrivkeyPem,
            },
          },
        },
        proxyInit: {
          runAsRoot: true,
          // iptablesMode: "nft", // legacy (default) or nft
        },
        //   controlPlaneTracingNamespace: "linkerd",
      },
    },
    { dependsOn: crds }
  );

const linkerdMulticlusterNs = new k8s.core.v1.Namespace(
  "linkerd-multicluster-namespace",
  {
    metadata: {
      name: "linkerd-multicluster",
    },
  }
);

const linkerdMulticluster = (controlPlane: Release, elbId?: string) =>
  new k8s.helm.v3.Release(
    "linkerd-multicluster",
    {
      chart: "linkerd-multicluster",
      version: "30.2.3",
      repositoryOpts: {
        repo: "https://helm.linkerd.io/stable",
      },
      namespace: "linkerd-multicluster",
      name: "linkerd-multicluster",
      atomic: true,
      timeout: 150,
      values: {
        gateway: {
          enabled: elbId ? true : false,
          serviceAnnotations: {
            "kubernetes.io/elb.id": elbId,
          },
        },
      },
    },
    { dependsOn: [linkerdMulticlusterNs, controlPlane] }
  );

export const linkerd = (
  caCertPem: Input<string>,
  intCertPem: Input<string>,
  intPrivkeyPem: Input<string>,
  elbId?: string
) => {
  const crds = linkerdCrds();
  const cp = linkerdCP(crds, caCertPem, intCertPem, intPrivkeyPem);
  const mc = linkerdMulticluster(cp, elbId);
};
```
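For completeness, the exported `linkerd` entry point above would be invoked from the stack program roughly like this. The call site below is a sketch (the issue does not include it): the `./linkerd` module path and the ELB id are placeholders, not code from the project.

```typescript
// Hypothetical index.ts -- not part of the issue; shown only to make the
// data flow from the TLS resources into the Helm releases explicit.
import {
  certCa,
  certIntermediate,
  privateKeyIntermediate,
  linkerd,
} from "./linkerd"; // assumed module path

linkerd(
  certCa.certPem,                       // mesh trust anchor
  certIntermediate.certPem,             // issuer certificate
  privateKeyIntermediate.privateKeyPem, // issuer private key
  "elb-12345"                           // placeholder; omit to disable the gateway
);
```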

In the Slack community I was asked for the output of `pulumi plugin ls`; here it is:

```
❯ pulumi plugin ls
NAME        KIND      VERSION  SIZE   INSTALLED  LAST USED
kubernetes  resource  3.21.4   82 MB  1 day ago  1 day ago
tls         resource  4.6.1    33 MB  1 day ago  1 day ago
```

Steps to reproduce

`pulumi preview`

Expected Behavior

The expected behaviour is that Pulumi would not try to apply changes on this one particular colleague's computer.

Actual Behavior

Described above: with all the same code and state file, we get different behaviour.

Output of `pulumi about`

```
CLI          
Version      3.45.0
Go Version   go1.19.3
Go Compiler  gc

Plugins
NAME        VERSION
kubernetes  3.21.4
nodejs      unknown
tls         4.6.1

Host     
OS       darwin
Version  13.0
Arch     arm64

This project is written in nodejs: executable='/opt/homebrew/bin/node' version='v18.11.0'

Backend        
Name           MyUser
URL            s3://pulumi-state?endpoint=someendpoint.private.com&region=eu-de
User           MyUser
Organizations  

Dependencies:
NAME                 VERSION
@pulumi/kubernetes   3.21.4
@pulumi/kubernetesx  0.1.6
@pulumi/pulumi       3.42.0
@pulumi/tls          4.6.1
@types/node          14.18.32

Pulumi locates its logs in /var/folders/wm/v9tjlc991x91t61krycxdq_h0000gn/T/ by default
```

Additional context

Link to the slack discussion: https://pulumi-community.slack.com/archives/CRFURDVQB/p1667407661882239

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

zeljkobekcic added the kind/bug and needs-triage labels Nov 3, 2022
iwahbe changed the title from "Different behaviour of Pulumi" to "Pulumi Up behavior differs between machines" Nov 3, 2022
iwahbe (Member) commented Nov 3, 2022

@zeljkobekcic Thanks for raising the issue.

Could you please give us the logs from `pulumi up`, `pulumi about`, and `pulumi plugin ls` on both machines? We are unable to reproduce the bug as is.

zeljkobekcic (Author) commented Nov 4, 2022

@iwahbe One colleague had a "corrupted" Helm repository. Removing the Helm config (`rm -rf ~/.config/helm`) solved the problem (in our case). This explains why Pulumi behaved differently across machines.
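For anyone hitting the same symptom: since the divergence came from per-machine Helm repository state, one possible mitigation (a sketch only, not verified against this setup) is to bypass local repo resolution and reference the packaged chart directly. This assumes the `Release` resource accepts a direct `.tgz` URL for `chart`, as plain `helm install` does, and the exact artifact URL is an assumption to be checked against the repo's index:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical variant of linkerdCrds that skips the machine-local Helm repo
// config entirely. ASSUMPTIONS: `chart` accepts a direct chart-archive URL
// (as `helm install` itself does), and the URL matches the published index --
// verify against https://helm.linkerd.io/stable/index.yaml before relying on it.
const linkerdCrdsPinned = new k8s.helm.v3.Release("linkerd-crds-pinned", {
  chart: "https://helm.linkerd.io/stable/linkerd-crds-1.4.0.tgz",
  namespace: "linkerd", // namespace assumed to already exist
  name: "linkerd-crds",
  atomic: true,
  timeout: 60,
});
```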

iwahbe added the resolution/no-repro label Nov 4, 2022