
Deployment resource state does not update #3026

Closed
soujiro32167 opened this issue May 24, 2024 · 2 comments · Fixed by #3030
Labels
area/yaml · kind/bug · resolution/fixed

Comments

@soujiro32167

What happened?

Using a Kubernetes provider with renderYamlToDirectory, I want to create a Deployment and a ConfigMap.

The resources get created successfully, and a second pulumi up reports no changes.

However, after updating the Deployment, the Pulumi state does not update, so the same diff keeps reappearing. Updating the ConfigMap works fine.

To reproduce:

  1. Run the reproducer file with a clean stack: pulumi up -s repro
  2. Update the deployment: change containerPort from 8080 to 9090
  3. pulumi up -s repro --diff shows the diff correctly
  4. Confirm with yes to apply the changes
  5. pulumi up -s repro --diff shows the same diff again, even though the change was already applied

Note: the rendered YAML file yamls/apps_v1-deployment-myns-my-deployment.yaml does get updated correctly

Example

import * as k8s from "@pulumi/kubernetes"
import * as pulumi from "@pulumi/pulumi"

// Provider that renders manifests to the yamls/ directory instead of
// applying them to a live cluster.
const provider = new k8s.Provider("k8s", {
    renderYamlToDirectory: 'yamls',
    namespace: 'myns'
})

const configMap = new k8s.core.v1.ConfigMap("my-configmap", {
    metadata: {
        name: "my-configmap",
    },
    data: {
        "key1": "value1",
        "key2": "value2",
    },
}, {provider})

const deployment = new k8s.apps.v1.Deployment("my-deployment", {
    metadata: {
        name: "my-deployment",
    },
    spec: {
        replicas: 1,
        selector: {
            matchLabels: {
                app: "my-deployment",
            },
        },
        template: {
            metadata: {
                labels: {
                    app: "my-deployment",
                },
            },
            spec: {
                containers: [
                    {
                        name: "my-deployment",
                        image: "nginx",
                        ports: [
                            {
                                containerPort: 8080,
                            },
                        ],
                    },
                ],
            },
        },
    },
}, {provider})

Output of pulumi about

➜  typescript git:(main) ✗ pulumi about
CLI          
Version      3.116.1
Go Version   go1.22.2
Go Compiler  gc

Plugins
KIND      NAME        VERSION
resource  aws         6.33.1
resource  kafka       3.7.1
resource  kubernetes  4.11.0
language  nodejs      unknown
resource  postgresql  3.11.0

Host     
OS       darwin
Version  14.4.1
Arch     arm64

This project is written in nodejs: executable='***' version='v20.11.1'

Backend        
Name           ***
URL            file://~
User           ***
Organizations  
Token type     personal

Dependencies:
NAME                VERSION
@pulumi/kafka       3.7.1
@pulumi/kubernetes  4.11.0
@pulumi/pulumi      3.115.1
ts-pattern          5.0.6
yaml                2.4.2
typescript          5.4.5
@pulumi/aws         6.33.1
@pulumi/postgresql  3.11.0
@types/node         20.10.5

Pulumi locates its logs in /var/folders/s_/rr4bg4qx7hv__l9g4fqw987w0000gq/T/ by default
warning: Failed to get information about the current stack: No current stack

Additional context

No response


@soujiro32167 added the kind/bug and needs-triage labels on May 24, 2024
@rquitales
Contributor

Thanks for reporting this issue @soujiro32167. I am able to reproduce this. Note that the same issue also exists for ConfigMaps if we set CMs to be mutable within our k8s provider setup. The issue doesn't exist for CMs in the repro, since our provider is doing a replacement, which goes through a different flow for saving state compared to an update flow.

The bug is most likely triggered here, in the Update flow:

obj := checkpointObject(newInputs, oldLive, newResInputs, initialAPIVersion, fieldManager)

We should store newInputs instead of oldLive, similar to what we do in the Create flow:

obj := checkpointObject(newInputs, newInputs, newResInputs, initialAPIVersion, fieldManager)
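
To make the suggested change concrete, here is a minimal, self-contained Go sketch of the difference between the two flows. The checkpoint type and checkpointObject helper below are simplified stand-ins with hypothetical signatures, not the provider's actual code; they only model which object ends up in the saved state.

package main

import "fmt"

// checkpoint is a simplified stand-in for what the provider saves to Pulumi
// state: the declared inputs plus a "live" object that later diffs are
// computed against.
type checkpoint struct {
	inputs map[string]any
	live   map[string]any
}

// checkpointObject mirrors the real helper in shape only (hypothetical signature).
func checkpointObject(inputs, live map[string]any) checkpoint {
	return checkpoint{inputs: inputs, live: live}
}

func main() {
	oldLive := map[string]any{"containerPort": 8080}   // rendered by the first pulumi up
	newInputs := map[string]any{"containerPort": 9090} // what the program declares now

	// Buggy Update flow: the old live object is persisted, so the next
	// pulumi up diffs 9090 (program) against 8080 (state) and reports the
	// same change again every time.
	buggy := checkpointObject(newInputs, oldLive)

	// Fixed Update flow (matching the Create flow): persist the new inputs
	// as the live object, so the state converges after one update.
	fixed := checkpointObject(newInputs, newInputs)

	fmt.Println(buggy.live["containerPort"]) // 8080 -> spurious diff on the next up
	fmt.Println(fixed.live["containerPort"]) // 9090 -> no diff on the next up
}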

@rquitales added the area/yaml label and removed the needs-triage label on May 28, 2024
rquitales added a commit that referenced this issue on May 29, 2024:
### Proposed changes

Always store `newInputs` to avoid spurious diffs in renderYaml updates.
Storing `oldLive` in state will result in the Pulumi state never
updating after the initial create.

Testing: added a new step to `TestRenderYAML` that exercises the update flow. This test fails without the changes in this PR.

### Related issues (optional)

Fixes: #3026
@pulumi-bot added the resolution/fixed label on May 29, 2024
@soujiro32167
Author

Thank you!
