Cannot create a Helm chart - timeout during pulumi up update preview phase #2985

Closed
JinLisek opened this issue May 1, 2024 · 0 comments · Fixed by #2992
Labels
area/helm · kind/bug (Some behavior is incorrect or out of spec) · resolution/fixed (This issue was fixed)


JinLisek commented May 1, 2024

What happened?

I'm trying to create a Kubernetes cluster on a Proxmox VM (created via Pulumi).

When I run pulumi up, I have to wait a while before the Kubernetes resources show up in the preview:

[screenshot: the Kubernetes resources listed in the pulumi preview]

About 10 seconds after they finally show up, the preview fails with this exception:

Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: Get "https://192.168.1.101:6443/version?timeout=32s": dial tcp 192.168.1.101:6443: i/o timeout

192.168.1.101 is the IP address of my master node, which has not been created yet (it should be created during pulumi up).

I made it so the Kubernetes provider depends on the command that downloads the kubeconfig from the master node. The namespace depends on the provider, and the Helm chart depends on the namespace. So I would expect the Helm chart to request the version from the master node during pulumi up, when it is time to create the chart, not during the preview.

Example

import ipaddress

import pulumi
import pulumi_command as command
import pulumi_kubernetes as kubernetes
import pulumi_proxmoxve as proxmox

...

master_node_ip = ipaddress.IPv4Interface(address="192.168.1.101/24")

provider = proxmox.Provider(
    resource_name="proxmoxve-provider",
    endpoint="https://192.168.1.69:8006",
    username="root@pam",
    password=password,
    insecure=True,
    ssh=proxmox.ProviderSshArgs(agent=True, private_key=pve_authorized_private_key),
)

# creates proxmox.vm.VirtualMachine, details not important
master_node = create_virtual_machine(
    provider=provider,
    name="lol-k8s-master-0001",
    password=password,
    authorized_keys=[vm_authorized_public_key],
    ip_interface=master_node_ip,
)

# install and setup master node using kubeadm init
setup_kubernetes_master = command.local.Command(
    resource_name="ansible-setup-for-lol-k8s-master-0001",
    create=master_node.ipv4_addresses.apply(
        lambda ips: f"ansible-playbook ansible/setup_kubernetes_master.yaml --inventory {get_first_remote_ip_address(ips)}, --vault-password-file ./password.txt"
    ),
)

# download kube config locally
copy_kube_config_command = command.local.Command(
    resource_name="copy-kube-config",
    create=f"scp -r lol@{master_node_ip.ip}:/home/lol/.kube/ ~/",
    delete="rm ~/.kube/ -r",
    opts=pulumi.ResourceOptions(
        depends_on=setup_kubernetes_master,
        custom_timeouts=pulumi.CustomTimeouts(create="10s"),
    ),
)

# note that kubernetes provider depends on copy_kube_config_command 
kubernetes_provider = kubernetes.Provider(
    resource_name="k8s",
    kubeconfig="~/.kube/config",
    opts=pulumi.ResourceOptions(depends_on=copy_kube_config_command),
)

tigera_operator_namespace = kubernetes.core.v1.Namespace(
    resource_name="tigera-operator-namespace",
    metadata=kubernetes.meta.v1.ObjectMetaArgs(name="tigera-operator"),
    opts=pulumi.ResourceOptions(provider=kubernetes_provider),
)


kubernetes.helm.v3.Chart(
    release_name="calico",
    config=kubernetes.helm.v3.LocalChartOpts(
        path="./helm/calico/tigera-operator", namespace="tigera-operator"
    ),
    opts=pulumi.ResourceOptions(depends_on=tigera_operator_namespace),
)
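
A possible variant (untested sketch; the read-kube-config command name is illustrative) would be to pass the kubeconfig contents through the command's stdout, so the provider's kubeconfig is an Output rather than a static path:

# Untested sketch: read the kubeconfig with a command and hand its contents to
# the provider, so the kubeconfig is an Output instead of a static file path.
read_kube_config = command.local.Command(
    resource_name="read-kube-config",
    create="cat ~/.kube/config",
    opts=pulumi.ResourceOptions(depends_on=copy_kube_config_command),
)

kubernetes_provider = kubernetes.Provider(
    resource_name="k8s",
    kubeconfig=read_kube_config.stdout,
)

Whether that would keep the helm:template invoke from contacting the cluster during preview is unclear, though.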

Output of pulumi about

CLI
Version 3.113.3
Go Version go1.22.2
Go Compiler gc

Plugins
KIND NAME VERSION
resource command 0.10.0
resource kubernetes 4.11.0
resource proxmoxve 6.4.1
language python unknown

Host
OS ubuntu
Version 22.04
Arch x86_64

This project is written in python: executable='/home/lol/repos/infra/.venv/bin/python3' version='3.12.1'

Current Stack: lolol/infra/dev

Found no resources associated with dev

Found no pending operations associated with dev

Backend
Name pulumi.com
URL https://app.pulumi.com/lolol
User lolol
Organizations lolol
Token type personal

Dependencies:
NAME VERSION
ansible 9.5.1
mypy 1.10.0
pip 23.3.1
pulumi_command 0.10.0
pulumi_kubernetes 4.11.0
pulumi_proxmoxve 6.4.1

Pulumi locates its logs in /tmp by default

Additional context

When I comment out the Kubernetes resources and run pulumi up, Pulumi successfully sets up my master node: all pods (except coredns) are running, and the node is NotReady because no CNI is installed yet.
After that I can uncomment the Kubernetes resources and run pulumi up again, and it successfully deploys Calico (after which my master node becomes Ready).
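
The same two-phase flow could also be expressed with a stack config flag instead of commenting code out (untested sketch; "deploy_k8s" is an illustrative name):

# Untested sketch: gate the Kubernetes resources behind a config flag, so the
# second phase is `pulumi config set deploy_k8s true` plus another `pulumi up`
# rather than uncommenting code.
config = pulumi.Config()
if config.get_bool("deploy_k8s"):
    kubernetes_provider = kubernetes.Provider(
        resource_name="k8s",
        kubeconfig="~/.kube/config",
        opts=pulumi.ResourceOptions(depends_on=copy_kube_config_command),
    )
    # ... namespace and Chart declarations from the example above go here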

I cannot switch to using a Helm Release either, because it also does not work well: when I try to pulumi destroy the resources, the command gets stuck deleting the Release.
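
For reference, a Release-based variant would look roughly like this (sketch only; same chart path and namespace as in the example above):

# Sketch of the Helm Release alternative; as described above, `pulumi destroy`
# gets stuck while deleting the Release with this approach.
calico_release = kubernetes.helm.v3.Release(
    resource_name="calico",
    chart="./helm/calico/tigera-operator",
    namespace="tigera-operator",
    opts=pulumi.ResourceOptions(
        provider=kubernetes_provider,
        depends_on=tigera_operator_namespace,
    ),
)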

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

JinLisek added the kind/bug (Some behavior is incorrect or out of spec) and needs-triage (Needs attention from the triage team) labels on May 1, 2024
blampe added the area/helm label and removed the needs-triage label on May 3, 2024
blampe added a commit that referenced this issue May 10, 2024
We currently fail to render a preview for Chart V3 if the cluster is
unreachable.

Instead of failing, we can emit a warning since Helm is still able to
generate the template without the version set.

Alternatively, we could check `k.clusterUnreachable` as part of `Invoke`
but we wouldn't be able to return a rich preview.

Added a failing E2E test.

Fixes #2985.
pulumi-bot added the resolution/fixed (This issue was fixed) label on May 10, 2024