
Provider namespace field doesn't put new resources in the namespace #415

Closed
swgillespie opened this issue Feb 8, 2019 · 5 comments · Fixed by #538

@swgillespie
Contributor

I'm not sure if this is intentional, but here's the repro:

const k8s = require("@pulumi/kubernetes");

const prov = new k8s.Provider("prov", {
    namespace: "mynamespace",
});

const nginx = new k8s.core.v1.Pod("nginx", {
    spec: {
        containers: [{
            name: "nginx",
            image: "nginx",
        }]
    }
}, { provider: prov });

exports.namespace = nginx.metadata.apply(m => m.namespace);

This program creates the Pod in the default namespace, where I'd expect it to be put in mynamespace. If this is intentional, let's document it better, since the name of the provider's namespace argument suggests it should apply mynamespace as the default namespace for any resource that doesn't specify one.
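
For comparison, setting metadata.namespace explicitly on the resource does land it in mynamespace today (a sketch, using the same prov and namespace as above):

// Workaround sketch: set the namespace on the resource itself
// instead of relying on the provider's namespace argument.
const nginxExplicit = new k8s.core.v1.Pod("nginx-explicit", {
    metadata: { namespace: "mynamespace" },
    spec: {
        containers: [{
            name: "nginx",
            image: "nginx",
        }]
    }
}, { provider: prov });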

@swgillespie swgillespie added this to the 0.21 milestone Feb 8, 2019
@swgillespie
Contributor Author

Very possible this got fixed by #403.

@lblackstone
Member

Confirmed that this behavior is still present after #403 merged.

@lblackstone lblackstone modified the milestones: 0.21, 0.22 Feb 13, 2019
@hausdorff
Contributor

To be clear, I believe this is not a regression -- I very seriously doubt this ever worked.

That said, not one, not two, but three whole community members gave the OP a thumbs-up, so I think we should mark this as P1 for M22. Independent of that, though, solving this would very likely incidentally solve #217, which is another reason we should do it.

I'll go ahead and mark this as P1 proactively -- @lblackstone if you can't budget that into M22, or think it shouldn't be P1, let's chat.


Now, to the matter at hand. This issue is also more subtle than it might seem -- this is basically equivalent to kubectl's --namespace flag, and those semantics can't be implemented from the SDK alone. You can read more details in an exploratory PR I wrote, #467, but the short of it is that:

  • Some resources aren't namespaced.
  • Those resources can't be known ahead of time -- we must use the dynamic client, in the provider, to get this right (see the kubectl example after this list).
  • We can add .metadata.namespace to every resource submitted through that provider (it is ignored for core resources that are non-namespaced), but that means non-namespaced resources would be replaced whenever the namespace changes, even though the namespace is not semantically meaningful for them.
  • This is also a pretty annoying issue for Helm, and the more I think about it, the more I want to just solve it right. :)
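
For instance, the set of cluster-scoped kinds is a property of the cluster itself (CRDs can extend it), so the only reliable source of truth is the API server; kubectl will show you the current set:

$ kubectl api-resources --namespaced=false

Namespace, Node, ClusterRole, and friends all show up there.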

A reasonable proposal, I think, is to pass the namespace field as part of the args to create/update/delete, and let the provider make this decision based on what's in the OpenAPI spec. This is good and clean, but requires some doing.
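
To make that concrete, here's a rough sketch of what the provider-side decision could look like. This is illustrative only -- isNamespaced is a hypothetical helper that would be backed by the dynamic client / OpenAPI spec, not something that exists in the repo today:

// Hypothetical sketch of provider-side namespace defaulting.
// `isNamespaced` is an assumed helper backed by the cluster's
// discovery/OpenAPI data; it is not a real function in this codebase.
function applyProviderNamespace(resource, providerNamespace) {
    if (!isNamespaced(resource.apiVersion, resource.kind)) {
        // Cluster-scoped: leave metadata.namespace unset so a change to
        // the provider's namespace doesn't force a pointless replace.
        return resource;
    }
    resource.metadata = resource.metadata || {};
    if (!resource.metadata.namespace) {
        resource.metadata.namespace = providerNamespace;
    }
    return resource;
}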

Thoughts?

@hausdorff hausdorff added kind/bug, priority/P1, and area/resource-management labels Mar 4, 2019
@hausdorff
Contributor

When we solve this problem, we should consider erroring out when people try to set the namespace of a resource that is not namespaced. Silently accepting it leads to all sorts of annoying problems.
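
Something along these lines, say (a sketch, reusing the hypothetical isNamespaced helper from my earlier comment):

// Hypothetical validation sketch -- not real code from this repo.
if (!isNamespaced(resource.apiVersion, resource.kind) &&
        resource.metadata && resource.metadata.namespace) {
    throw new Error(`${resource.kind} "${resource.metadata.name}" is ` +
        `cluster-scoped, so metadata.namespace must not be set`);
}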

@lblackstone
Member

Reverted fix due to some problems with CRDs. Will address and push a new fix.
