Fix provider diff #1869

Merged · 2 commits · Jan 14, 2022
CHANGELOG.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ## HEAD (Unreleased)
 
 - Disable last-applied-configuration annotation for replaced CRDs (https://github.com/pulumi/pulumi-kubernetes/pull/1868)
+- Fix Provider config diffs (https://github.com/pulumi/pulumi-kubernetes/pull/1869)
 - Fix replace for named resource using server-side diff (https://github.com/pulumi/pulumi-kubernetes/pull/1870)
 
 ## 3.14.0 (January 12, 2022)
provider/pkg/provider/provider.go (10 additions, 20 deletions)

@@ -19,7 +19,6 @@ import (
 	"context"
 	"encoding/json"
 	"fmt"
-	pulumischema "github.com/pulumi/pulumi/pkg/v3/codegen/schema"
 	"io/ioutil"
 	"net/http"
 	"net/url"
@@ -46,6 +45,7 @@ import (
 	"github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/logging"
 	"github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/metadata"
 	"github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/openapi"
+	pulumischema "github.com/pulumi/pulumi/pkg/v3/codegen/schema"
 	"github.com/pulumi/pulumi/pkg/v3/resource/provider"
 	"github.com/pulumi/pulumi/sdk/v3/go/common/diag"
 	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
@@ -326,26 +326,16 @@ func (k *kubeProvider) DiffConfig(ctx context.Context, req *pulumirpc.DiffReques
 	}
 
 	// Check for differences in provider overrides.
-	if !reflect.DeepEqual(oldConfig, newConfig) {
-		diffs = append(diffs, "kubeconfig")
-	}
-	if olds["context"] != news["context"] {
-		diffs = append(diffs, "context")
-	}
-	if olds["cluster"] != news["cluster"] {
-		diffs = append(diffs, "cluster")
-	}
-	if olds["namespace"] != news["namespace"] {
-		diffs = append(diffs, "namespace")
-	}
-	if olds["enableDryRun"] != news["enableDryRun"] {
-		diffs = append(diffs, "enableDryRun")
-	}
-	if olds["renderYamlToDirectory"] != news["renderYamlToDirectory"] {
-		diffs = append(diffs, "renderYamlToDirectory")
-
-		// If the render directory changes, all of the manifests will be replaced.
-		replaces = append(replaces, "renderYamlToDirectory")
-	}
+	diff := olds.Diff(news)
+	for _, k := range diff.ChangedKeys() {
+		diffs = append(diffs, string(k))
+
+		// Handle any special cases.
+		switch k {
+		case "renderYamlToDirectory":
+			// If the render directory changes, all the manifests will be replaced.
+			replaces = append(replaces, "renderYamlToDirectory")
+		}
+	}
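The pattern in the hunk above, diffing old against new config and special-casing keys that force replacement, can be sketched with plain Go string maps. This is only an illustrative stand-in for Pulumi's resource.PropertyMap Diff/ChangedKeys API; the changedKeys helper and the sample config values are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// changedKeys returns the sorted set of keys whose values differ between the
// old and new config, including keys added or removed. This approximates the
// olds.Diff(news) + ChangedKeys() pattern from the PR using plain maps.
func changedKeys(olds, news map[string]string) []string {
	var keys []string
	for k, v := range olds {
		if nv, ok := news[k]; !ok || nv != v {
			keys = append(keys, k) // changed or removed
		}
	}
	for k := range news {
		if _, ok := olds[k]; !ok {
			keys = append(keys, k) // added in the new config
		}
	}
	sort.Strings(keys)
	return keys
}

func main() {
	olds := map[string]string{"context": "dev", "namespace": "default"}
	news := map[string]string{
		"context":               "prod",
		"namespace":             "default",
		"renderYamlToDirectory": "out",
	}

	var diffs, replaces []string
	for _, k := range changedKeys(olds, news) {
		diffs = append(diffs, k)
		// Special-case keys that trigger replacement, as in the PR.
		switch k {
		case "renderYamlToDirectory":
			replaces = append(replaces, k)
		}
	}
	fmt.Println("diffs:", diffs)       // diffs: [context renderYamlToDirectory]
	fmt.Println("replaces:", replaces) // replaces: [renderYamlToDirectory]
}
```

The advantage over the per-key if statements it replaced is that every config key gets a default diff automatically; only keys needing extra behavior (like replacement) appear in the switch.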
Contributor:

Can we introspect on the schema and look through the config settings instead of hardcoding/checking these? Seems like that would avoid the whole possibility of falling out of sync here.

Member Author:

Yeah, I think another option would be iterating the entire struct and special-casing any keys we need to handle differently. That would at least give a default diff.

Contributor:

Let me know how you want to proceed: do the above now, or merge this and follow up. I do worry about the drift, so the more we can standardize around schema-guided operations, the better. This would also be a great endorsement for common logic living in a shared library across providers.

Member Author:

I switched to iterating the keys. Let's figure out how to centralize this logic as a follow-up.

// In general, it's not possible to tell from a kubeconfig if the k8s cluster it points to has