
Unable to correct drift with pulumi up --refresh after external changes #2404

Closed · Tracked by #2362
rquitales opened this issue May 13, 2023 · 3 comments
Labels: kind/bug (Some behavior is incorrect or out of spec), resolution/fixed (This issue was fixed)

rquitales (Contributor) commented May 13, 2023

What happened?

I'm unable to correct drift in Kubernetes resources when running pulumi up --refresh (or pulumi refresh; pulumi up) after the resources on the cluster have been modified by another tool or process. I have tested this with both CSA and SSA, and both seem to mishandle the drift, though in different ways.

In CSA, the resources on the cluster are never updated: pulumi refresh reports a diff, but the subsequent pulumi up detects no diff and therefore never re-applies the desired state.

~~In SSA, with a ConfigMap, the resource is recreated, which may not be ideal if we're using auto-generated names, since the name of the CM will also change. With changes to a deployment spec, however, `pulumi up` updates in place. The handling of both CM and deployments should be consistent in that they are both updated in place.~~
Edit: Never mind. ConfigMaps are treated as immutable by default, with an opt-in setting for mutability, so the replacement behaviour above is expected.
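
For context, a minimal sketch of that opt-in, assuming the provider's `enableConfigMapMutable` option (the resource names here are illustrative):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Opt in to in-place ConfigMap updates instead of the default
// replace-on-change behaviour (which also generates a new auto-name).
const provider = new k8s.Provider("k8s", {
    enableConfigMapMutable: true,
});

const cm = new k8s.core.v1.ConfigMap("app-config", {
    data: { foo: "bar" },
}, { provider });
```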

Expected Behavior

pulumi up --refresh should be able to correct any spec changes to Kubernetes resources made externally when using CSA.

Steps to reproduce

I've added 2 test cases that showcase this behaviour, one each for CSA and SSA, in #2403.
Test run: https://github.com/pulumi/pulumi-kubernetes/actions/runs/4966037016/jobs/8887293231?pr=2403

FAIL: TestClientSideDriftCorrectCSA
FAIL: TestClientSideDriftCorrectSSA

Manual Reproduction Steps:

  1. pulumi new kubernetes-typescript
  2. Add a new ConfigMap resource with some data, e.g. foo: bar (see the sketch after this list)
  3. pulumi up --yes
  4. Edit the ConfigMap and/or Deployment out-of-band (e.g. kubectl patch cm <cm-name> -p '{"data": {"foo": "newValue"}}')
  5. pulumi up --refresh --yes
  6. Get the ConfigMap or Deployment object and note that it was not updated under CSA (kubectl get cm <cm-name>)
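
A minimal sketch of the program from steps 1-2 (the resource name drift-test is illustrative):

```typescript
import * as k8s from "@pulumi/kubernetes";

// A ConfigMap with a single data entry. Pulumi auto-names the object,
// so export the generated name for use with kubectl patch in step 4.
const cm = new k8s.core.v1.ConfigMap("drift-test", {
    data: { foo: "bar" },
});

export const cmName = cm.metadata.name;
```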

Output of pulumi about

CLI
Version 3.67.0
Go Version go1.20.4
Go Compiler gc

Plugins
NAME VERSION
kubernetes 3.27.1
nodejs unknown

Host
OS darwin
Version 13.3.1
Arch arm64

This project is written in nodejs: executable='/opt/homebrew/bin/node' version='v19.6.0'

TYPE URN
pulumi:pulumi:Stack urn:pulumi:dev::k8s-resourceVersions::pulumi:pulumi:Stack::k8s-resourceVersions-dev
pulumi:providers:kubernetes urn:pulumi:dev::k8s-resourceVersions::pulumi:providers:kubernetes::k8s
kubernetes:apps/v1:Deployment urn:pulumi:dev::k8s-resourceVersions::kubernetes:apps/v1:Deployment::nginx

Found no pending operations associated with dev

Dependencies:
NAME VERSION
@pulumi/kubernetes 3.27.1
@pulumi/kubernetesx 0.1.6
@pulumi/pulumi 3.67.0
@types/node 16.18.29

Additional context

This behaviour has been observed in all v3.X.X releases.

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

rquitales added the kind/bug, needs-triage, p1, and impact/regression labels on May 13, 2023
rquitales removed the needs-triage label on May 15, 2023
rquitales self-assigned this on May 15, 2023
lblackstone (Member) commented

Could this be the same issue as #694? I can't recall if kubectl patch updates the lastAppliedConfiguration annotation, but that affects CSA diffing.

I'm reviewing the linked PR now, and it looks like the code changes are to relatively old parts of the code. The most recent edit I saw was from August 2022, so I'd be surprised if this was a regression. It may be incorrect, but if so, I think it's been that way for a long time.

rquitales (Contributor, Author) commented

Yes, this behaviour has been in the provider for a long time, and the linked issue describes the same behaviour. The lastAppliedConfiguration annotation is only updated by kubectl apply, so this issue persists for kubectl patch and for any other controllers/tooling that modify the resources.
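
(As an aside, one way to confirm this is to inspect the annotation after each tool runs, e.g. kubectl get cm <cm-name> -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}': kubectl apply rewrites it, while kubectl patch leaves it untouched, so a CSA diff computed against it sees no drift.)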

For now, I'll downgrade this issue from a P1, since it is a long-standing issue and doesn't appear to be a pressing concern reported by other users.

The next steps would be to expand our test suites so this behaviour change can be merged with more confidence.

rquitales removed the p1 label on May 15, 2023
rquitales mentioned this issue on May 23, 2023
lblackstone added the resolution/fixed label and removed the impact/regression label on Jul 12, 2023
lblackstone (Member) commented

Fixed in #2445
