
Upgrade Version via Helm where install.sh Script Was Used in Previous Installation #3273

Closed
ikandars opened this issue Oct 6, 2023 · 2 comments

Comments


ikandars commented Oct 6, 2023

Feature request

I used the install.sh script to install Kube-OVN version 1.11.0. I want to upgrade my cluster to version 1.11.1. It fails if I use this helm command:

helm install --debug kubeovn ./kubeovn-helm --set MASTER_NODES=${Node0},

Is there a simple way to upgrade from the install.sh script to Helm?

Use case

My quick hack to do that is the following approach:

First, convert the installation to Helm at the same version. In my case, I have to convert from the v1.11.0 install.sh-based installation to a Helm-based v1.11.0 installation. Before I run the helm install command, I need to add labels and annotations to the existing Kube-OVN objects. I use this script to do that:

#!/bin/bash

declare -a arr=("ServiceAccount ovn" 
"CustomResourceDefinition vpc-dnses.kubeovn.io" 
"CustomResourceDefinition switch-lb-rules.kubeovn.io"
"CustomResourceDefinition vpc-nat-gateways.kubeovn.io"
"CustomResourceDefinition iptables-eips.kubeovn.io"
"CustomResourceDefinition iptables-fip-rules.kubeovn.io"
"CustomResourceDefinition iptables-dnat-rules.kubeovn.io"
"CustomResourceDefinition iptables-snat-rules.kubeovn.io"
"CustomResourceDefinition ovn-eips.kubeovn.io"
"CustomResourceDefinition ovn-fips.kubeovn.io"
"CustomResourceDefinition ovn-snat-rules.kubeovn.io"
"CustomResourceDefinition vpcs.kubeovn.io"
"CustomResourceDefinition ips.kubeovn.io"
"CustomResourceDefinition vips.kubeovn.io"
"CustomResourceDefinition subnets.kubeovn.io"
"CustomResourceDefinition vlans.kubeovn.io"
"CustomResourceDefinition provider-networks.kubeovn.io"
"CustomResourceDefinition security-groups.kubeovn.io"
"CustomResourceDefinition htbqoses.kubeovn.io"
"ClusterRole system:ovn"
"ClusterRoleBinding ovn"
"Service kube-ovn-controller"
"Service kube-ovn-monitor"
"Service ovn-nb"
"Service ovn-northd"
"Service kube-ovn-cni"
"Service kube-ovn-pinger"
"Service ovn-sb"
"DaemonSet kube-ovn-cni"
"DaemonSet ovs-ovn"
"DaemonSet kube-ovn-pinger"
"Deployment ovn-central"
"Deployment kube-ovn-controller"
"Deployment kube-ovn-monitor")


for i in "${arr[@]}"
do
   # Note: -n kube-system is ignored for cluster-scoped resources
   # (CRDs, ClusterRole, ClusterRoleBinding) and only applies to the rest.
   kubectl -n kube-system patch $i --type merge -p '{
    "metadata":{
        "annotations":{
            "meta.helm.sh/release-name": "kubeovn",
            "meta.helm.sh/release-namespace": "default"
        },
        "labels":{
            "app.kubernetes.io/managed-by": "Helm"
        }
    }
}'
done
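One detail worth noting in the loop above: $i is deliberately left unquoted so that each "Kind name" array entry is split by the shell into two separate kubectl arguments. A minimal illustration of that word splitting, using read -r to perform the same split explicitly:

```shell
# Each array entry is "<Kind> <name>"; leaving $i unquoted in the loop
# relies on shell word splitting to pass kind and name as two arguments.
entry="Deployment kube-ovn-controller"
read -r kind name <<< "$entry"
echo "$kind/$name"   # prints Deployment/kube-ovn-controller
```

The patch itself works because Helm 3.2 and later will adopt already-existing resources that carry the meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations plus the app.kubernetes.io/managed-by: Helm label, which is exactly what the script adds.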

To get the Helm charts, we need to clone the project first.

git clone https://github.com/kubeovn/kube-ovn.git
cd kube-ovn
git checkout v1.11.0

Once that is done, I create a custom Helm values.yaml file whose values are populated from the existing deployment.

replicaCount: 3
MASTER_NODES: "10.255.224.249,10.255.224.246,10.255.224.156"

networking:
  IFACE: "enp1s0"

ipv4:
  POD_CIDR: "172.16.0.0/20"
  POD_GATEWAY: "172.16.0.1"
  SVC_CIDR: "172.16.16.0/20"
  JOIN_CIDR: "100.64.0.0/16"

You can get all of those values from kubectl -n kube-system get deploy/kube-ovn-controller -o yaml
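As a sketch of pulling one of those values out of the saved deployment YAML (assuming the controller passes the pod CIDR via a --default-cidr argument, as recent Kube-OVN versions do), the sample input below stands in for the live kubectl output:

```shell
# Hypothetical extraction of POD_CIDR from the controller args.
# On a live cluster, replace the sample ARGS with the output of:
#   kubectl -n kube-system get deploy/kube-ovn-controller -o yaml
ARGS='- --default-cidr=172.16.0.0/20
- --service-cluster-ip-range=172.16.16.0/20'
POD_CIDR=$(printf '%s\n' "$ARGS" | sed -n 's/.*--default-cidr=\([^ ]*\).*/\1/p')
echo "$POD_CIDR"   # prints 172.16.0.0/20
```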

kubectl label no -lbeta.kubernetes.io/os=linux kubernetes.io/os=linux --overwrite
kubectl label no -lnode-role.kubernetes.io/control-plane  kube-ovn/role=master --overwrite
kubectl label no -lovn.kubernetes.io/ovs_dp_type!=userspace ovn.kubernetes.io/ovs_dp_type=kernel  --overwrite

helm install --debug kubeovn ./kubeovn-helm -f kubeovn-custom-values.yaml

Then check that all Kube-OVN related pods are in the Running state.

kubectl -n kube-system get pod | grep -i ovn
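Instead of eyeballing the grep output, you can poll until the pods settle; kubectl wait can do this directly (e.g. kubectl -n kube-system wait pod -l app=kube-ovn-pinger --for=condition=Ready --timeout=300s, label selector assumed from the DaemonSet name). A generic retry helper, sketched here with a stand-in probe in place of the kubectl call:

```shell
# Generic retry loop: rerun a command until it succeeds or attempts run out.
# On a real cluster the probe would be the kubectl readiness check above.
retry() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" && return 0
    i=$((i + 1))     # use a real delay, e.g. sleep 5, against a live cluster
  done
  return 1
}

attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }  # stand-in check

retry 5 probe && echo "ready after $attempts attempts"   # prints: ready after 3 attempts
```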

Now, I run the following steps to do the upgrade via Helm:

git checkout v1.11.1
helm upgrade --debug kubeovn ./kubeovn-helm -f kubeovn-custom-values.yaml

Finally, check the status of the Kube-OVN related pods:

kubectl -n kube-system get pod | grep -i ovn

I found that the charts for v1.11.2 and v1.11.3 still use the container image tagged v1.11.1. I added a --set parameter to the helm upgrade command to correct the image tag:

helm upgrade --debug kubeovn ./kubeovn-helm -f kubeovn-custom-values.yaml --set global.images.kubeovn.tag=v1.11.2
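After the upgrade you can confirm the running tag by parsing the image reference of the controller Deployment; the sample value below stands in for what a live jsonpath query would return:

```shell
# Hypothetical post-upgrade check: strip everything up to the last ':'
# to get the image tag. On a live cluster, set IMAGE from:
#   kubectl -n kube-system get deploy/kube-ovn-controller \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
IMAGE="docker.io/kubeovn/kube-ovn:v1.11.2"   # sample value
TAG="${IMAGE##*:}"
echo "$TAG"   # prints v1.11.2
```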

github-actions bot commented Dec 6, 2023

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.


aminmr commented May 11, 2024

Hello @ikandars
I think if your procedure works fine and has no issues, we can add it to the Kube-OVN documentation. We only need the approval of the main contributors. @oilbeater
