# Python Helm charts no longer deploy into EKS cluster #1565
@benesch I just tried this, and I was not able to reproduce the issue, with this program:

```python
from pulumi import ResourceOptions
from pulumi_kubernetes import Provider
from pulumi_kubernetes.helm.v3 import Chart, LocalChartOpts

prov = Provider("p", context="docker-desktop")

Chart("foo", LocalChartOpts(path="../foo", namespace="lager"))
Chart("foo2", LocalChartOpts(path="../foo", namespace="lager2"), ResourceOptions(provider=prov))
```
And with my current context set to a cluster that is not reachable, but with a local `docker-desktop` cluster available, an update gives me the expected result.
Do you have any more details on your repro for this?

---
I was able to repro with the following program:

```python
import pulumi
import pulumi_aws as aws
import pulumi_eks as eks
import os
from pulumi import ResourceOptions
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

base_name = "demo"
profile = os.environ["AWS_PROFILE"]

# Create an AWS provider instance using the named profile creds
# and current region.
uswest2 = aws.Provider("uswest2", region="us-west-2", profile=profile)

kubeconfig_opts = eks.KubeconfigOptionsArgs(profile_name=profile)

myekscluster = eks.Cluster(
    base_name,
    provider_credential_opts=kubeconfig_opts,
    opts=pulumi.ResourceOptions(provider=uswest2),
)

Chart("nginx", ChartOpts(
    chart="nginx",
    values={"service": {"type": "ClusterIP"}},
    fetch_opts=FetchOpts(
        repo="https://charts.bitnami.com/bitnami",
    ),
), ResourceOptions(provider=myekscluster.provider))
```
Here are the relevant parts of the state file:

- Chart
- Deployment (Chart sub-resource)

---
Also confirmed that it works as expected with v3.0.0. Here's the updated Deployment state with the 3.0.0 provider:

---
Thanks for piecing together a standalone repro, @lblackstone!

---
Debugging this - I'm seeing that the `Chart` object's `providers` map is keyed incorrectly when the child resources are registered. It seems that something - most likely related to multi-language components? - is populating the providers map incorrectly, which is leading to the parent providers inheritance not working. This results in the child believing there is no `kubernetes` provider to inherit from the parent.
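For context, here is a minimal sketch (not from this thread; the kubeconfig path and resource names are assumptions) of the inheritance mechanism under discussion: a child resource with no explicit provider walks up its parent chain and looks up its package name, here `kubernetes`, in each parent's `providers` map, so an incorrectly keyed map makes the lookup miss and the default (ambient kubeconfig) provider win.

```python
# Illustrative sketch only; "kubeconfig.yaml" is an assumed local file.
import pulumi
from pulumi_kubernetes import Provider
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

k8s = Provider("k8s", kubeconfig=open("kubeconfig.yaml").read())

# Children of this Chart inherit the provider by finding the "kubernetes"
# key in the providers map; if the map were keyed incorrectly (the bug
# described above), the lookup would miss and fall back to the default.
Chart("example", ChartOpts(
    chart="nginx",
    fetch_opts=FetchOpts(repo="https://charts.bitnami.com/bitnami"),
), pulumi.ResourceOptions(providers={"kubernetes": k8s}))
```

---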
A bit of a shot in the dark, but I filed pulumi/pulumi#6693 a few weeks back about other RemoteComponent-related weirdness.

---
The issue appears to be related to this code: https://github.com/pulumi/pulumi/blob/master/sdk/python/lib/pulumi/runtime/resource.py#L618

Edit: This wasn't the problem.

---
After some more digging, I realized that my repro program contains a type error. In the Python, .NET, and Go SDKs, the cluster's `provider` is exposed as an `Output`, but the `provider` resource option expects a plain `ProviderResource`, so passing `myekscluster.provider` directly goes undetected. Everything works as expected when I unwrap the `Output` with `apply`:

```python
import pulumi
import pulumi_aws as aws
import pulumi_eks as eks
import os
from pulumi import ResourceOptions
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

base_name = "demo"
profile = os.environ["AWS_PROFILE"]

# Create an AWS provider instance using the named profile creds
# and current region.
uswest2 = aws.Provider("uswest2", region="us-west-2", profile=profile)

kubeconfig_opts = eks.KubeconfigOptionsArgs(profile_name=profile)

myekscluster = eks.Cluster(
    base_name,
    provider_credential_opts=kubeconfig_opts,
    opts=pulumi.ResourceOptions(provider=uswest2),
)

def chart(provider):
    Chart("nginx", ChartOpts(
        chart="nginx",
        values={"service": {"type": "ClusterIP"}},
        fetch_opts=FetchOpts(
            repo="https://charts.bitnami.com/bitnami",
        ),
    ), ResourceOptions(provider=provider))

# Unwrap the Output[ProviderResource] before passing it as a resource option.
myekscluster.provider.apply(lambda p: chart(p))
```
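(Worth noting: resources created inside an `apply` callback, as in the unwrapped version above, won't appear in `pulumi preview` output until the wrapped value is known, so this pattern trades preview fidelity for correct provider wiring.)

---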
I narrowed down the change in behavior between v3.0.0 and v3.1.0 to this: https://github.com/pulumi/pulumi-kubernetes/pull/1539/files?file-filters%5B%5D=.py#diff-f2b60d028397820398a9fa1ebca34ef73fe594525259f511f39bfce01ef24e9fL177-L178 With this change reverted, the issue no longer reproduces.

---
@benesch We're currently investigating a few options to fix this. As a workaround in the meantime, you can create a new `Provider` instance using the EKS cluster's kubeconfig, and it will work as you'd expect:

```python
import pulumi
import pulumi_aws as aws
import pulumi_eks as eks
import os
from pulumi import ResourceOptions
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts
from pulumi_kubernetes import Provider, ProviderArgs

base_name = "demo"
profile = os.environ["AWS_PROFILE"]

# Create an AWS provider instance using the named profile creds
# and current region.
uswest2 = aws.Provider("uswest2", region="us-west-2", profile=profile)

kubeconfig_opts = eks.KubeconfigOptionsArgs(profile_name=profile)

myekscluster = eks.Cluster(
    base_name,
    provider_credential_opts=kubeconfig_opts,
    opts=pulumi.ResourceOptions(provider=uswest2),
)

# Explicit Kubernetes provider built from the cluster's kubeconfig output.
provider = Provider("k8s", ProviderArgs(
    kubeconfig=myekscluster.kubeconfig,
))

Chart("nginx", ChartOpts(
    chart="nginx",
    values={"service": {"type": "ClusterIP"}},
    fetch_opts=FetchOpts(
        repo="https://charts.bitnami.com/bitnami",
    ),
), ResourceOptions(provider=provider))
```
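This works because the `Provider` constructor's `kubeconfig` argument accepts an `Output`, whereas the `provider` resource option expects an unwrapped `ProviderResource`, as discussed above.

---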
Thanks, @lblackstone. For now we're happy to stick on v3.0.0, but I'll try out your workaround if we need to upgrade.

---
pulumi/pulumi#7012 tracks making the `provider` resource option work with `Output` values.

---
Closing as this is now tracked in pulumi/pulumi#7012.

---
I don't mean to sound ungrateful, but I'd like to advocate for reopening this issue until pulumi/pulumi#7012 is fixed, or until there are some guardrails put in place here! While I understand that the aforementioned issue is tracking the root cause, it's not discoverable for folks who are just looking to understand why their charts stopped deploying to the cluster they specified.

In particular, pulumi/pulumi#3383 means the bug presents as silently deploying into whatever cluster is currently active in the user's kubeconfig. This is the kind of bug that can result in taking down prod. (We were lucky, and it only took down our staging environment, but that's only because I happened to have staging as my active cluster, not prod.) Since pulumi/pulumi#7012 looks like it's not a quick fix, it'd be great to get some assertions in the SDK to at least prevent disaster, or at the very least some warnings in the docs.

@lblackstone, thanks for the workaround, but I don't think it'll work for us, since swapping the provider out like that results in Pulumi attempting to replace all the resources in the cluster. I tried to work around this with provider aliases, but it looks like those aren't wired up yet (pulumi/pulumi#3979).
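For what it's worth, a user-side guard along the lines of the assertions requested above is easy to sketch; `require_provider` is a hypothetical helper, not an SDK API:

```python
# Hypothetical user-side guard: fail fast if an Output sneaks in where a
# ProviderResource is required, instead of silently hitting the default
# provider (and whatever cluster the ambient kubeconfig points at).
import pulumi

def require_provider(value) -> pulumi.ProviderResource:
    if isinstance(value, pulumi.Output):
        raise TypeError("got an Output; unwrap it with .apply() before "
                        "passing it as the provider resource option")
    if not isinstance(value, pulumi.ProviderResource):
        raise TypeError(f"expected ProviderResource, got {type(value).__name__}")
    return value
```

---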
Here's my workaround for now:

```python
import pulumi
import pulumi_eks as eks

class LazyResource:
    def __init__(self, resource):
        self.resource = pulumi.Output.from_input(resource)

    @property
    def urn(self):
        return self.resource.apply(lambda r: r.urn)

    @property
    def id(self):
        return self.resource.apply(lambda r: r.id)

    @property
    def __class__(self):
        # Bypass https://github.com/pulumi/pulumi/blob/b7d403204/sdk/python/lib/pulumi/resource.py#L460-L462.
        return pulumi.Resource

class LazyProvider(LazyResource):
    def __init__(self, package, resource):
        super().__init__(resource)
        self.package = package

    @property
    def __class__(self):
        return pulumi.ProviderResource

cluster = eks.Cluster(...)

# Replace the cluster's Output-typed provider with a lazy stand-in that
# satisfies the SDK's isinstance checks.
_provider = cluster.provider
cluster.__dict__["provider"] = LazyProvider("kubernetes", _provider)
```
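With the patched cluster in place, charts can keep referencing `cluster.provider` directly. A hypothetical usage sketch, not part of the original comment:

```python
# Hypothetical usage: the original Chart call now works unchanged, because
# cluster.provider looks like a ProviderResource to the SDK.
from pulumi import ResourceOptions
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

Chart("nginx", ChartOpts(
    chart="nginx",
    fetch_opts=FetchOpts(repo="https://charts.bitnami.com/bitnami"),
), ResourceOptions(provider=cluster.provider))
```

---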
Hi @benesch, could you please tell me if you still use 3.0.0 to mitigate this issue, or have you found another workaround? How do you pass the Kubernetes provider to the rest of the Chart resources?

---
@eliskovets we've been using the workaround provided by @lblackstone in #1565 (comment) to good effect for a while now.

---
Since #1539, when deploying a Helm v3 chart in Python, the resources in that chart are no longer deployed to the provider specified on the chart. This means that, e.g., attempting to deploy a chart to an EKS cluster in Python silently deploys it to whatever is in your `kubeconfig` instead.

**Steps to reproduce**

Assuming `eks.cluster` is an EKS cluster resource, run a program like the sketch below. The chart's resources will get deployed to whatever `kubectl` is configured to use, rather than the EKS cluster.
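A minimal sketch of such a program, adapted from the standalone repro earlier in this thread; `cluster` stands in for the `eks.cluster` resource:

```python
import pulumi_eks as eks
from pulumi import ResourceOptions
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

cluster = eks.Cluster("demo")

# The chart names the cluster's provider explicitly, but with the bug the
# chart's resources still land on the ambient kubeconfig's cluster.
Chart("nginx", ChartOpts(
    chart="nginx",
    fetch_opts=FetchOpts(repo="https://charts.bitnami.com/bitnami"),
), ResourceOptions(provider=cluster.provider))
```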
**Likely cause**

This is almost certainly fallout from #1539. v3.0.0 does not exhibit this bug, but v3.1.0 (which includes #1539) does.
cc @lukehoban @lblackstone