
Missing docs: moving from classic to helm mode #1793

Closed
SemaiCZE opened this issue Jul 2, 2023 · 6 comments
Labels: kind/feature, stale

@SemaiCZE commented Jul 2, 2023

Proposal / RFE

Hi, I have Cilium in my k3s cluster; it was installed using cilium-cli (somewhere around v0.13.0, before helm mode was available). Now I have upgraded to cilium-cli v0.15.0, where classic mode is deprecated, but helm mode does not work out of the box:

[root@kube-master1 ~]# KUBECONFIG=/etc/rancher/k3s/k3s.yaml /usr/local/bin/cilium version
cilium-cli: v0.15.0 compiled with go1.20.4 on linux/amd64
cilium image (default): v1.13.4
cilium image (stable): v1.13.4
cilium image (running): unknown. Unable to obtain cilium version, no cilium pods found in namespace "kube-system"
[root@kube-master1 ~]# KUBECONFIG=/etc/rancher/k3s/k3s.yaml /usr/local/bin/cilium upgrade
🔮 Auto-detected Kubernetes kind: K3s
ℹ️  Using Cilium version 1.13.4
🔮 Auto-detected cluster name: default
🔮 Auto-detected datapath mode: tunnel

Error: Unable to upgrade Cilium: "cilium" has no deployed releases
[root@kube-master1 ~]# KUBECONFIG=/etc/rancher/k3s/k3s.yaml /usr/local/bin/cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 2
                       hubble-ui          Running: 1
                       hubble-relay       Running: 1
                       cilium-operator    Running: 1
Cluster Pods:          19/19 managed by Cilium
Helm chart version:    
Image versions         cilium             quay.io/cilium/cilium:v1.13.0: 2
                       hubble-ui          quay.io/cilium/hubble-ui:v0.10.0@sha256:118ad2fcfd07fabcae4dde35ec88d33564c9ca7abe520aa45b1eb13ba36c6e0a: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.10.0@sha256:cc5e2730b3be6f117b22176e25875f2308834ced7c3aa34fb598aa87a2c0a6a4: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.13.0: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.13.0: 1

I can't find anywhere what to do to migrate my installation to helm, and I really don't want to destroy and recreate all of my networking.

Is your feature request related to a problem?

Not yet, but with a future release of cilium-cli I won't be able to upgrade my cluster anymore.

Describe the solution you'd like

I'd like a description of how to migrate from the old installation method to the new helm method.

Thanks,
-Petr
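
Note for readers hitting the same message: the "cilium" has no deployed releases error above comes from Helm itself. In helm mode, cilium-cli manages the installation as a Helm release named "cilium" in the target namespace, and a classic-mode installation never created such a release. Assuming the helm binary is available locally, a check along these lines should confirm that no release exists:

KUBECONFIG=/etc/rancher/k3s/k3s.yaml helm list --namespace kube-system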

SemaiCZE added the kind/feature label on Jul 2, 2023
@SemaiCZE (Author) commented Jul 5, 2023

@michi-covalent (Contributor) commented:
hi @SemaiCZE 👋

unfortunately cilium-cli currently does not support upgrading classic mode installations to helm mode.

you could continue to use the classic mode upgrade command, but it was never properly implemented either:

func (k *K8sInstaller) Upgrade(ctx context.Context) error {
    k.autodetect(ctx)

    // no need to determine KPR setting on upgrade, keep the setting configured with the old
    // version.
    if err := k.detectDatapathMode(false); err != nil {
        return err
    }

    daemonSet, err := k.client.GetDaemonSet(ctx, k.params.Namespace, defaults.AgentDaemonSetName, metav1.GetOptions{})
    if err != nil {
        return fmt.Errorf("unable to retrieve DaemonSet of cilium-agent: %s", err)
    }

    deployment, err := k.client.GetDeployment(ctx, k.params.Namespace, defaults.OperatorDeploymentName, metav1.GetOptions{})
    if err != nil {
        return fmt.Errorf("unable to retrieve Deployment of cilium-operator: %s", err)
    }

    var patched int

    if err = upgradeDeployment(ctx, k, upgradeDeploymentParams{
        deployment:         deployment,
        imageIncludeDigest: k.fqOperatorImage(utils.ImagePathIncludeDigest),
        imageExcludeDigest: k.fqOperatorImage(utils.ImagePathExcludeDigest),
        containerName:      defaults.OperatorContainerName,
    }, &patched); err != nil {
        return err
    }

    agentImage := k.fqAgentImage(utils.ImagePathIncludeDigest)
    var containerPatches []string
    for _, c := range daemonSet.Spec.Template.Spec.Containers {
        if c.Image != agentImage {
            containerPatches = append(containerPatches, `{"name":"`+c.Name+`", "image":"`+agentImage+`"}`)
        }
    }
    var initContainerPatches []string
    for _, c := range daemonSet.Spec.Template.Spec.InitContainers {
        if c.Image != agentImage {
            initContainerPatches = append(initContainerPatches, `{"name":"`+c.Name+`", "image":"`+agentImage+`"}`)
        }
    }

    if len(containerPatches) == 0 && len(initContainerPatches) == 0 {
        k.Log("✅ Cilium is already up to date")
    } else {
        k.Log("🚀 Upgrading cilium to version %s...", k.fqAgentImage(utils.ImagePathExcludeDigest))

        patch := []byte(`{"spec":{"template":{"spec":{"containers":[` + strings.Join(containerPatches, ",") + `], "initContainers":[` + strings.Join(initContainerPatches, ",") + `]}}}}`)

        _, err = k.client.PatchDaemonSet(ctx, k.params.Namespace, defaults.AgentDaemonSetName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            return fmt.Errorf("unable to patch DaemonSet %s with patch %q: %w", defaults.AgentDaemonSetName, patch, err)
        }

        patched++
    }

    hubbleRelayDeployment, err := k.client.GetDeployment(ctx, k.params.Namespace, defaults.RelayDeploymentName, metav1.GetOptions{})
    if err != nil && !k8serrors.IsNotFound(err) {
        return fmt.Errorf("unable to retrieve Deployment of %s: %w", defaults.RelayDeploymentName, err)
    }

    if err == nil { // only update if hubble relay deployment was found on the cluster
        if err = upgradeDeployment(ctx, k, upgradeDeploymentParams{
            deployment:         hubbleRelayDeployment,
            imageIncludeDigest: k.fqRelayImage(utils.ImagePathIncludeDigest),
            imageExcludeDigest: k.fqRelayImage(utils.ImagePathExcludeDigest),
            containerName:      defaults.RelayContainerName,
        }, &patched); err != nil {
            return err
        }
    }

    clustermeshAPIServerDeployment, err := k.client.GetDeployment(ctx, k.params.Namespace, defaults.ClusterMeshDeploymentName, metav1.GetOptions{})
    if err != nil && !k8serrors.IsNotFound(err) {
        return fmt.Errorf("unable to retrieve Deployment of %s: %w", defaults.ClusterMeshDeploymentName, err)
    }

    if err == nil { // only update clustermesh-apiserver if deployment was found on the cluster
        if err = upgradeDeployment(ctx, k, upgradeDeploymentParams{
            deployment:         clustermeshAPIServerDeployment,
            imageIncludeDigest: k.fqClusterMeshAPIImage(utils.ImagePathIncludeDigest),
            imageExcludeDigest: k.fqClusterMeshAPIImage(utils.ImagePathExcludeDigest),
            containerName:      defaults.ClusterMeshContainerName,
        }, &patched); err != nil {
            return err
        }
    }

    if patched > 0 && k.params.Wait {
        k.Log("⌛ Waiting for Cilium to be upgraded...")

        collector, err := status.NewK8sStatusCollector(k.client, status.K8sStatusParameters{
            Namespace:       k.params.Namespace,
            Wait:            true,
            WaitDuration:    k.params.WaitDuration,
            WarningFreePods: []string{defaults.AgentDaemonSetName, defaults.OperatorDeploymentName, defaults.RelayDeploymentName, defaults.ClusterMeshDeploymentName},
        })
        if err != nil {
            return err
        }

        s, err := collector.Status(ctx)
        if err != nil {
            fmt.Print(s.Format())
            return err
        }
    }

    return nil
}
it's only updating image tags without touching any other resources. trying to think if there is any way to convert classic mode installations to use helm mode 🤔
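
Note: the strategic merge patch built above boils down to something like the following kubectl invocation; the image tag and container name here are purely illustrative, the real values come from the live DaemonSet and the CLI defaults. In other words, only the container images are swapped and the ConfigMap, RBAC and the rest of the rendered manifests are left as they were.

kubectl --namespace kube-system patch daemonset cilium --type strategic \
  --patch '{"spec":{"template":{"spec":{"containers":[{"name":"cilium-agent","image":"quay.io/cilium/cilium:v1.13.4"}]}}}}'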

@SemaiCZE (Author) commented Jul 5, 2023

Thanks @michi-covalent,

it's sad that there is no upgrade path. I was feeling lucky today, so I prepared a yaml file with helm values matching what I had in my notes from the previous installation (mainly the IPv4 and IPv6 CIDRs and a few more settings), then uninstalled cilium using the cli in classic mode and immediately afterwards installed it again using helm. It worked better than expected 👍.

But a note about this in the docs or release notes would be great, even if there is no great solution for people.

P.S.: helm also solved another issue (#1911) I had before 🙂
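
Note for anyone following the same route: a minimal sketch of the uninstall-and-reinstall flow described above could look like the following. Every value shown is a placeholder; carry over whatever settings your classic installation actually used, and double-check the CLI flags, environment variables, and chart values against the versions you are running.

# 1. capture the old settings in a Helm values file (keys and CIDRs below are examples only)
cat > values.yaml <<'EOF'
ipv6:
  enabled: true
ipam:
  operator:
    clusterPoolIPv4PodCIDRList: ["10.42.0.0/16"]     # placeholder CIDR
    clusterPoolIPv6PodCIDRList: ["fd00:42::/104"]    # placeholder CIDR
hubble:
  relay:
    enabled: true
  ui:
    enabled: true
EOF

# 2. remove the classic-mode installation (expect pod networking to be down until step 3 completes)
KUBECONFIG=/etc/rancher/k3s/k3s.yaml CILIUM_CLI_MODE=classic cilium uninstall

# 3. reinstall the same Cilium version, this time tracked as a Helm release
helm repo add cilium https://helm.cilium.io/
KUBECONFIG=/etc/rancher/k3s/k3s.yaml helm install cilium cilium/cilium \
  --version 1.13.4 --namespace kube-system --values values.yaml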

@michi-covalent (Contributor) commented:

> It worked better than expected 👍.

that's great to hear! yeah i'm preparing release notes in cilium/cilium#26606 regarding the incompatibility between helm and classic modes 📝


This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

github-actions bot added the stale label on Sep 28, 2024

This issue has not seen any activity since it was marked stale.
Closing.

github-actions bot closed this as not planned (stale) on Oct 13, 2024