Panic creating ConfigMap. #559

Closed
oliverholliday opened this issue May 10, 2019 · 5 comments · Fixed by #572
Labels: area/providers · customer/feedback (Feedback from customers) · kind/bug (Some behavior is incorrect or out of spec) · p1 (A bug severe enough to be the next item assigned to an engineer)

Comments

@oliverholliday

oliverholliday commented May 10, 2019

Please forgive the convoluted example, but I'm getting a panic from the following code. I've stripped the functionality down to the bare minimum needed to trigger it. If I create the KeyVault in one update and then create the ConfigMap in a second update, it all works.

The method that gets the kubeconfig from a Key Vault works everywhere else, but fails in this case.

Panic output is at the bottom.

import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";
import * as kubernetesInput from "@pulumi/kubernetes/types/input";
import * as kubernetes from "@pulumi/kubernetes";

export const main = async () => {

    const testKeyVault = new azure.keyvault.KeyVault("kv-test", {
        name: "kv-test",
        resourceGroupName: "Infrastructure",
        location: "westeurope",
        sku: { name: "standard" },
        tenantId: azure.config.tenantId || ""
    });

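    // The ConfigMap spec below is built inside apply(), so when it is fed
    // computed contents (the config derived from the KeyVault's vaultUri),
    // the whole spec -- including its metadata bag -- is a computed value
    // during preview.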
    const getConfigMapSpec = (
        namespace: pulumi.Input<string>,
        key: pulumi.Input<string>,
        contents: pulumi.Input<Object>,
        labels: any = {},
    ): kubernetesInput.core.v1.ConfigMap => pulumi.all([
        key,
        pulumi.output(contents).apply(x => JSON.stringify(x))
    ]).apply(([
        fileName,
        contents
    ]) => {
        let data: { [key: string]: string; } = {};
        data[fileName] = contents;

        return ({
            metadata: {
                namespace: namespace,
                labels: labels,
            },
            data: {
                ...data
            }
        });
    });

    const config = pulumi.all([
        testKeyVault.vaultUri
    ]).apply(([
        keyVaultUri
    ]) =>
        new Object({
            KeyVault: { BaseUrl: keyVaultUri }
        })
    );

    const getProvider = async (
        keyVault: azure.keyvault.GetKeyVaultResult,
        kubeConfigSecretName: string
    ): Promise<kubernetes.Provider> => new Promise(async (resolve, reject) => {
        const kubeConfig = pulumi.output(keyVault.id).apply(x => azure.keyvault.getSecret({
            keyVaultId: x,
            name: kubeConfigSecretName
        }));
        pulumi.output(kubeConfig.value).apply(x => {
            console.log(x);
            resolve(new kubernetes.Provider("Cluster", { kubeconfig: x }))
        });
    });

    const coreKeyVault = await azure.keyvault.getKeyVault({ name: "test", resourceGroupName: "test" });
    const provider = await getProvider(coreKeyVault, "Cluster-KubeConfig");

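    // Creating the ConfigMap from the computed spec is what panics during preview.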
    new kubernetes.core.v1.ConfigMap(
        "test-configmap",
        getConfigMapSpec("default", "appsettings.json", config),
        { provider: provider }
    );

};
main();

Panic output:

panic: interface conversion: interface {} is resource.Computed, not map[string]interface {}
    goroutine 159 [running]:
    github.com/pulumi/pulumi-kubernetes/pkg/metadata.SetAnnotation(...)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/metadata/annotations.go:52
    github.com/pulumi/pulumi-kubernetes/pkg/metadata.SetAnnotationTrue(...)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/metadata/annotations.go:67
    github.com/pulumi/pulumi-kubernetes/pkg/metadata.AssignNameIfAutonamable(0xc00087f978, 0xc0010e80a7, 0x14)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/metadata/naming.go:35 +0x385
    github.com/pulumi/pulumi-kubernetes/pkg/provider.(*kubeProvider).Check(0xc0000e9e00, 0x1950600, 0xc0010ea090, 0xc00091af40, 0xc0000e9e00, 0x1523301, 0xc00091af80)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:269 +0x15aa
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Check_Handler.func1(0x1950600, 0xc0010ea090, 0x1652c80, 0xc00091af40, 0x1671ea0, 0x278d518, 0x1950600, 0xc0010ea090)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1303 +0x8d
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x1950600, 0xc0010b5ec0, 0x1652c80, 0xc00091af40, 0xc00093eb40, 0xc00093eb60, 0x0, 0x0, 0x19136e0, 0xc0003f04d0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:61 +0x367
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Check_Handler(0x1673660, 0xc0000e9e00, 0x1950600, 0xc0010b5ec0, 0xc0009405f0, 0xc0003f2ea0, 0x1950600, 0xc0010b5ec0, 0xc000a5c780, 0x125)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1305 +0x15f
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc000461200, 0x196b800, 0xc000437500, 0xc00050b100, 0xc0003fd3b0, 0x275fa40, 0x0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:972 +0x477
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).handleStream(0xc000461200, 0x196b800, 0xc000437500, 0xc00050b100, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:1252 +0xdad
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0003a4010, 0xc000461200, 0x196b800, 0xc000437500, 0xc00050b100)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:691 +0xa6
    created by github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:689 +0xa8
@lukehoban lukehoban added this to the 0.23 milestone May 10, 2019
@lblackstone lblackstone added area/providers customer/feedback Feedback from customers kind/bug Some behavior is incorrect or out of spec labels May 10, 2019
@lblackstone
Member

I suspect this is caused by the ConfigMap metadata being a computed value (it is built inside an apply) rather than being supplied directly. The provider currently tries to set an autoname annotation during the preview phase, and I don't think it handles that case correctly, since the metadata isn't available yet.

@lblackstone
Member

Looks like the OP was able to work around this by creating the resources in separate updates, so I'm going to punt on this issue for M23.

@lblackstone lblackstone removed this from the 0.23 milestone May 17, 2019
@joeduffy joeduffy added this to the 0.23 milestone May 18, 2019
@joeduffy joeduffy assigned ellismg and unassigned lblackstone May 18, 2019
@joeduffy
Member

We can't let panics in the CLI go unaddressed for another sprint. Assigning @ellismg for load balancing.

@hausdorff
Contributor

hausdorff commented May 18, 2019

This happens because kubeProvider#Check calls metadata.AssignNameIfAutonamable before it checks !hasComputedValue. Because the annotations are computed values in this example, the type assertion fails: the value is a computed placeholder, not a map. @pgavlin @ellismg we should chat about this on Monday since I've forgotten the details of the protocol -- but I believe we can fix this by simply skipping that assignment when the value is computed?
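
To make the failure mode concrete, here is a minimal, self-contained Go sketch of the failing pattern. It is not the actual provider code: the computed type is a hypothetical stand-in for resource.Computed, and the annotation key is only illustrative. The point is that an unguarded type assertion on the metadata bag panics whenever that bag is still a preview-time placeholder.

package main

import "fmt"

// computed is a hypothetical stand-in for resource.Computed: a preview-time
// placeholder for a value that isn't known until the resource is created.
type computed struct{}

// setAnnotation mirrors the shape of the failing code path: it assumes
// obj["metadata"] is always a concrete map and type-asserts accordingly.
func setAnnotation(obj map[string]interface{}, key, value string) {
	// During preview the metadata bag may be a computed placeholder rather
	// than a map, so this assertion panics with:
	//   interface conversion: interface {} is main.computed, not map[string]interface {}
	metadata := obj["metadata"].(map[string]interface{})

	annotations, ok := metadata["annotations"].(map[string]interface{})
	if !ok {
		annotations = map[string]interface{}{}
		metadata["annotations"] = annotations
	}
	annotations[key] = value
}

func main() {
	// Rough shape of the ConfigMap in this issue during preview: the whole
	// metadata bag comes out of an apply() over not-yet-known values.
	obj := map[string]interface{}{
		"kind":     "ConfigMap",
		"metadata": computed{},
	}
	setAnnotation(obj, "pulumi.com/autonamed", "true") // panics here
	fmt.Println(obj)
}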

@oliverholliday
Author

oliverholliday commented May 20, 2019

I hit the same panic a few times in different places over the last week; it seems to be triggered whenever an auto-named Kubernetes ConfigMap contains a value from a newly created resource.

Since it panics during the preview, it sent me on a few wild goose chases trying to diagnose which resource was causing the issue - often the output log is something like "Event hub transport closing. <<panic stacktrace>>".

I was able to work around it in this case with Key Vaults, but it involved publishing a pre-release npm package with the ConfigMap creation commented out and then a second publish to create it, so it was non-trivial. One case where I couldn't work around it was creating an Azure EventHubNamespaceAuthorizationRule and saving its keys to a Kubernetes ConfigMap - it's not possible to create one of those rules and then retrieve its system-generated keys in a separate step.

Thanks a lot.

ellismg added a commit that referenced this issue May 23, 2019
During preview, an object's metadata bag may be computed (or be known
but contain values which are computed). This could happen, for
example, by using `apply` to take an output property from a yet to be
created resource and use it to build part of an object's metadata,
like we saw in #559.

In these cases, we incorrectly panic while attempting to extract out
the metadata.labels or metadata.annotations members of the metadata
object, when trying to set an annotation or label.

To fix this, we now treat requests to set annotations or labels as
no-ops if the metadata object is computed (or the label or annotation
values inside the metadata object are computed). This allows preview
to continue, as expected. During a real update, we will not have
computed values and so we will be able to correctly set the labels as
we expected.

Fixes #559
lblackstone pushed a commit that referenced this issue May 29, 2019, with the same commit message as above.
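
Here is the fix described in the commit message above as a minimal sketch, under the same simplified assumptions as the earlier Go example (a hypothetical computed placeholder standing in for resource.Computed; not the actual provider code): setting an annotation becomes a no-op whenever the metadata bag, or the annotations map inside it, is still computed, so preview proceeds and the annotation is applied during the real update.

package main

import "fmt"

// computed is a hypothetical stand-in for resource.Computed (see the
// earlier sketch); it marks a value that isn't known during preview.
type computed struct{}

func isComputed(v interface{}) bool {
	_, ok := v.(computed)
	return ok
}

// setAnnotation sketches the fixed behavior: skip the assignment entirely
// if the metadata bag or the annotations map is still computed, instead of
// type-asserting and panicking.
func setAnnotation(obj map[string]interface{}, key, value string) {
	if isComputed(obj["metadata"]) {
		return // metadata not known yet; no-op during preview
	}
	metadata, ok := obj["metadata"].(map[string]interface{})
	if !ok {
		return // unexpected shape; nothing sensible to do
	}
	if isComputed(metadata["annotations"]) {
		return // annotations not known yet; no-op during preview
	}
	annotations, ok := metadata["annotations"].(map[string]interface{})
	if !ok {
		annotations = map[string]interface{}{}
		metadata["annotations"] = annotations
	}
	annotations[key] = value
}

func main() {
	// Preview: metadata is computed, so the call is a harmless no-op.
	preview := map[string]interface{}{"kind": "ConfigMap", "metadata": computed{}}
	setAnnotation(preview, "pulumi.com/autonamed", "true")

	// Real update: metadata is a concrete map, so the annotation is set.
	update := map[string]interface{}{"kind": "ConfigMap", "metadata": map[string]interface{}{}}
	setAnnotation(update, "pulumi.com/autonamed", "true")

	fmt.Println(preview, update)
}
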
@infin8x infin8x added the p1 A bug severe enough to be the next item assigned to an engineer label Jul 10, 2021