Fix karmadactl init not found v1alpha1.cluster.karmada.io object #1207

Merged
1 commit merged into karmada-io:master on Jan 4, 2022

Conversation

@prodanlabs (Member) commented Jan 2, 2022

Signed-off-by: prodan <pengshihaoren@gmail.com>

What type of PR is this?
/kind bug

What this PR does / why we need it:

Due to the change of the CRDs, karmadactl init should create the APIService resource object instead of updating it.
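
For context, a minimal Go sketch of the "create first, only fall back to update if the object already exists" idea, using the kube-aggregator clientset. This is illustrative only and not the actual patch; the package name initutil and the helper ensureAPIService are assumptions.

package initutil // hypothetical package name for this sketch

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// ensureAPIService creates the APIService on a fresh install (the case that
// previously failed with "not found" when Update was attempted), and only
// falls back to updating when the object already exists, e.g. on a re-run of init.
func ensureAPIService(ctx context.Context, client aggregator.Interface, apiService *apiregistrationv1.APIService) error {
	_, err := client.ApiregistrationV1().APIServices().Create(ctx, apiService, metav1.CreateOptions{})
	if err == nil {
		return nil
	}
	if !apierrors.IsAlreadyExists(err) {
		return fmt.Errorf("create APIService %s: %w", apiService.Name, err)
	}
	// The object is already present, so refresh its resourceVersion and update it.
	existing, err := client.ApiregistrationV1().APIServices().Get(ctx, apiService.Name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	apiService.ResourceVersion = existing.ResourceVersion
	_, err = client.ApiregistrationV1().APIServices().Update(ctx, apiService, metav1.UpdateOptions{})
	return err
}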

Which issue(s) this PR fixes:
Fixes #1204

Special notes for your reviewer:

init logs

# ./kubectl-karmada init --crds https://github.com/karmada-io/karmada/releases/download/v1.0.0/crds.tar.gz
I0103 01:11:44.866334  951612 deploy.go:104] kubeconfig file: /root/.kube/config, kubernetes: https://172.31.6.145:6443
W0103 01:11:44.881014  951612 node.go:30] the kubernetes cluster does not have a Master role.
I0103 01:11:44.881037  951612 node.go:38] randomly select 3 Node IPs in the kubernetes cluster.
I0103 01:11:44.883599  951612 deploy.go:124] karmada apiserver ip: [172.31.6.145]
I0103 01:11:45.185244  951612 cert.go:230] Generate ca certificate success.
I0103 01:11:45.354512  951612 cert.go:230] Generate etcd-server certificate success.
I0103 01:11:45.670921  951612 cert.go:230] Generate etcd-client certificate success.
I0103 01:11:45.783126  951612 cert.go:230] Generate karmada certificate success.
I0103 01:11:45.948630  951612 cert.go:230] Generate front-proxy-ca certificate success.
I0103 01:11:46.062963  951612 cert.go:230] Generate front-proxy-client certificate success.
I0103 01:11:46.063120  951612 deploy.go:201] download crds file name: /etc/karmada/crds.tar.gz
Downloading...[ 100.00% ]
Downloading...[ 100.00% ]
Download complete.I0103 01:11:46.998075  951612 deploy.go:390] Create karmada kubeconfig success.
I0103 01:11:47.012133  951612 namespace.go:36] Create Namespace 'karmada-system' successfully.
W0103 01:11:47.066586  951612 rbac.go:78] ClusterRole kube-controller-manager already exists.
I0103 01:11:47.203341  951612 secret.go:78] secret kubeconfig Create successfully.
I0103 01:11:47.603973  951612 secret.go:78] secret etcd-cert Create successfully.
I0103 01:11:48.004491  951612 secret.go:78] secret karmada-cert Create successfully.
I0103 01:11:48.403424  951612 secret.go:78] secret karmada-webhook-cert Create successfully.
I0103 01:11:48.814552  951612 services.go:66] service etcd create successfully.
I0103 01:11:48.814669  951612 deploy.go:266] create etcd StatefulSets
I0103 01:11:49.208593  951612 check.go:98] etcd desired replicaset is 1, currently: 1
I0103 01:11:52.219175  951612 check.go:49] pod: etcd-0 is ready. status: Running
I0103 01:11:52.219208  951612 deploy.go:277] create karmada ApiServer Deployment
I0103 01:11:52.229360  951612 services.go:66] service karmada-apiserver create successfully.
W0103 01:11:55.256684  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:11:56.261337  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:11:57.260804  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:11:58.260815  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:11:59.260329  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:00.260240  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:01.260219  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:02.260535  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:03.261403  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:04.260827  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:05.260899  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:06.260149  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:07.261060  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:08.260221  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:09.260307  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:10.260549  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:11.261026  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:12.259531  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:13.262038  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:14.259902  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:15.261712  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:16.260044  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:17.260957  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:18.260356  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:19.260911  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:20.260165  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:21.261196  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
W0103 01:12:22.259369  951612 check.go:52] pod: karmada-apiserver-6dc4cf6964-9xknf not ready. status: Running
I0103 01:12:23.261327  951612 check.go:49] pod: karmada-apiserver-6dc4cf6964-9xknf is ready. status: Running
I0103 01:12:23.279365  951612 deploy.go:60] Initialize karmada bases crd resource `/etc/karmada/crds/bases`
I0103 01:12:23.550618  951612 deploy.go:71] Initialize karmada patches crd resource `/etc/karmada/crds/patches`
I0103 01:12:23.779287  951612 deploy.go:83] Crate MutatingWebhookConfiguration mutating-config.
I0103 01:12:23.809713  951612 deploy.go:87] Crate ValidatingWebhookConfiguration validating-config.
I0103 01:12:23.844421  951612 deploy.go:235] Create APIService 'v1alpha1.cluster.karmada.io'
I0103 01:12:23.852888  951612 deploy.go:297] create karmada kube controller manager Deployment
I0103 01:12:23.869400  951612 services.go:66] service kube-controller-manager create successfully.
I0103 01:12:26.924394  951612 check.go:49] pod: kube-controller-manager-85c789dcfc-jd5cq is ready. status: Running
I0103 01:12:26.924432  951612 deploy.go:310] create karmada scheduler Deployment
I0103 01:12:29.936463  951612 check.go:49] pod: karmada-scheduler-7b9d8b5764-mwzxn is ready. status: Running
I0103 01:12:29.936499  951612 deploy.go:320] create karmada controller manager Deployment
I0103 01:12:32.956276  951612 check.go:49] pod: karmada-controller-manager-556cf896bc-h2589 is ready. status: Running
I0103 01:12:32.956308  951612 deploy.go:330] create karmada webhook Deployment
I0103 01:12:32.968103  951612 services.go:66] service karmada-webhook create successfully.
I0103 01:12:35.996626  951612 check.go:49] pod: karmada-webhook-7cf7986866-bshcl is ready. status: Running
I0103 01:12:35.996657  951612 deploy.go:342] create karmada aggregated apiserver Deployment
I0103 01:12:36.005855  951612 services.go:66] service karmada-aggregated-apiserver create successfully.
I0103 01:12:39.029236  951612 check.go:49] pod: karmada-aggregated-apiserver-84b45bf9b-87vdf is ready. status: Running

------------------------------------------------------------------------------------------------------
 █████   ████   █████████   ███████████   ██████   ██████   █████████   ██████████     █████████
░░███   ███░   ███░░░░░███ ░░███░░░░░███ ░░██████ ██████   ███░░░░░███ ░░███░░░░███   ███░░░░░███
 ░███  ███    ░███    ░███  ░███    ░███  ░███░█████░███  ░███    ░███  ░███   ░░███ ░███    ░███
 ░███████     ░███████████  ░██████████   ░███░░███ ░███  ░███████████  ░███    ░███ ░███████████
 ░███░░███    ░███░░░░░███  ░███░░░░░███  ░███ ░░░  ░███  ░███░░░░░███  ░███    ░███ ░███░░░░░███
 ░███ ░░███   ░███    ░███  ░███    ░███  ░███      ░███  ░███    ░███  ░███    ███  ░███    ░███
 █████ ░░████ █████   █████ █████   █████ █████     █████ █████   █████ ██████████   █████   █████
░░░░░   ░░░░ ░░░░░   ░░░░░ ░░░░░   ░░░░░ ░░░░░     ░░░░░ ░░░░░   ░░░░░ ░░░░░░░░░░   ░░░░░   ░░░░░
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.

Register Kubernetes cluster to Karmada control plane.

Register cluster with 'Push' mode
                                                                                                                                                                             
Step 1: Use kubectl karmada join to register the cluster to Karmada control panel. --cluster-kubeconfig is members kubeconfig.
(In karmada)~# MEMBER_CLUSTER_NAME=`cat ~/.kube/config  | grep current-context | sed 's/: /\n/g'| sed '1d'`
(In karmada)~# kubectl karmada --kubeconfig /etc/karmada/karmada-apiserver.config  join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config

Step 2: Show members of karmada
(In karmada)~# kubectl  --kubeconfig /etc/karmada/karmada-apiserver.config get clusters


Register cluster with 'Pull' mode

Step 1:  Send karmada kubeconfig and karmada-agent.yaml to member kubernetes
(In karmada)~# scp /etc/karmada/karmada-apiserver.config /etc/karmada/karmada-agent.yaml {member kubernetes}:~
                                                                                                                                                                             
Step 2:  Create karmada kubeconfig secret
 Notice:
   Cross-network, need to change the config server address.
(In member kubernetes)~#  kubectl create ns karmada-system
(In member kubernetes)~#  kubectl create secret generic karmada-kubeconfig --from-file=karmada-kubeconfig=/root/karmada-apiserver.config  -n karmada-system                  

Step 3: Create karmada agent
(In member kubernetes)~#  MEMBER_CLUSTER_NAME="demo"
(In member kubernetes)~#  sed -i "s/{member_cluster_name}/${MEMBER_CLUSTER_NAME}/g" karmada-agent.yaml
(In member kubernetes)~#  kubectl create -f karmada-agent.yaml
                                                                                                                                                                             
Step 4: Show members of karmada                                                                                                                                              
(In karmada)~# kubectl  --kubeconfig /etc/karmada/karmada-apiserver.config get clusters

test

# kubectl get po -n karmada-system 
NAME                                                   READY   STATUS    RESTARTS   AGE
etcd-0                                                 1/1     Running   0          63m
karmada-aggregated-apiserver-84b45bf9b-87vdf           1/1     Running   0          62m
karmada-apiserver-6dc4cf6964-9xknf                     1/1     Running   0          63m
karmada-controller-manager-556cf896bc-h2589            1/1     Running   0          62m
karmada-scheduler-7b9d8b5764-mwzxn                     1/1     Running   0          62m
karmada-scheduler-estimator-member1-696b54fd56-zvbks   1/1     Running   0          31m
karmada-webhook-7cf7986866-bshcl                       1/1     Running   0          62m
kube-controller-manager-85c789dcfc-jd5cq               1/1     Running   0          62m


# kubectl  --kubeconfig /etc/karmada/karmada-apiserver.config  get cluster
NAME      VERSION   MODE   READY   AGE
member1   v1.22.3   Push   True    36m
# cat cluster-proxy-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-proxy-clusterrole
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  resourceNames:
  - member1
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-proxy-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-proxy-clusterrole
subjects:
  - kind: User
    name: "system:admin"
# kubectl  --kubeconfig /etc/karmada/karmada-apiserver.config create  -f cluster-proxy-rbac.yaml
clusterrole.rbac.authorization.k8s.io/cluster-proxy-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/cluster-proxy-clusterrolebinding created

# kubectl  --kubeconfig /etc/karmada/karmada-apiserver.config  get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes | jq .
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "6510607"
  },
  "items": [
    {
      "metadata": {
        "name": "dev-k8s-master01",
        "uid": "10134124-e1a3-44ce-b81a-3bc8f0709756",
        "resourceVersion": "6510549",
        "creationTimestamp": "2021-11-02T03:47:27Z",
        "labels": {
          "beta.kubernetes.io/arch": "amd64",
          "beta.kubernetes.io/os": "linux",
          "kubernetes.io/arch": "amd64",
          "kubernetes.io/hostname": "dev-k8s-master01",
          "kubernetes.io/os": "linux"
        },
        "annotations": {
          "node.alpha.kubernetes.io/ttl": "0",
          "projectcalico.org/IPv4Address": "172.31.6.145/20",
          "projectcalico.org/IPv4IPIPTunnelAddr": "192.168.159.0",
          "volumes.kubernetes.io/controller-managed-attach-detach": "true"
        },
        "managedFields": [
          {
            "manager": "kubelet",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2021-11-02T03:47:27Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {
              "f:metadata": {
                "f:annotations": {
                  ".": {},
                  "f:volumes.kubernetes.io/controller-managed-attach-detach": {}
                },
                "f:labels": {
                  ".": {},
                  "f:beta.kubernetes.io/arch": {},
                  "f:beta.kubernetes.io/os": {},
                  "f:kubernetes.io/arch": {},
                  "f:kubernetes.io/hostname": {},
                  "f:kubernetes.io/os": {}
                }
              }
            }
          },
          {
            "manager": "calico-node",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2021-11-02T03:50:59Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {
              "f:metadata": {
                "f:annotations": {
                  "f:projectcalico.org/IPv4Address": {},
                  "f:projectcalico.org/IPv4IPIPTunnelAddr": {}
                }
              },
              "f:status": {
                "f:conditions": {
                  "k:{\"type\":\"NetworkUnavailable\"}": {
                    ".": {},
                    "f:lastHeartbeatTime": {},
                    "f:lastTransitionTime": {},
                    "f:message": {},
                    "f:reason": {},
                    "f:status": {},
                    "f:type": {}
                  }
                }
              }
            },
            "subresource": "status"
          },
          {
            "manager": "kube-controller-manager",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2021-12-23T04:57:01Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {
              "f:metadata": {
                "f:annotations": {
                  "f:node.alpha.kubernetes.io/ttl": {}
                }
              }
            }
          },
          {
            "manager": "kubelet",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2021-12-23T07:13:04Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {
              "f:status": {
                "f:conditions": {
                  "k:{\"type\":\"DiskPressure\"}": {
                    "f:lastHeartbeatTime": {}
                  },
                  "k:{\"type\":\"MemoryPressure\"}": {
                    "f:lastHeartbeatTime": {}
                  },
                  "k:{\"type\":\"PIDPressure\"}": {
                    "f:lastHeartbeatTime": {}
                  },
                  "k:{\"type\":\"Ready\"}": {
                    "f:lastHeartbeatTime": {},
                    "f:lastTransitionTime": {},
                    "f:message": {},
                    "f:reason": {},
                    "f:status": {}
                  }
                },
                "f:images": {}
              }
            },
            "subresource": "status"
          }
        ]
      },
      "spec": {},
      "status": {
        "capacity": {
          "cpu": "2",
          "ephemeral-storage": "20509308Ki",
          "hugepages-1Gi": "0",
          "hugepages-2Mi": "0",
          "memory": "8153188Ki",
          "pods": "220"
        },
        "allocatable": {
          "cpu": "2",
          "ephemeral-storage": "20509308Ki",
          "hugepages-1Gi": "0",
          "hugepages-2Mi": "0",
          "memory": "7924400947200m",
          "pods": "220"
        },
        "conditions": [
          {
            "type": "NetworkUnavailable",
            "status": "False",
            "lastHeartbeatTime": "2021-11-02T03:50:59Z",
            "lastTransitionTime": "2021-11-02T03:50:59Z",
            "reason": "CalicoIsUp",
            "message": "Calico is running on this node"
          },
          {
            "type": "MemoryPressure",
            "status": "False",
            "lastHeartbeatTime": "2022-01-02T17:51:32Z",
            "lastTransitionTime": "2021-11-02T03:47:27Z",
            "reason": "KubeletHasSufficientMemory",
            "message": "kubelet has sufficient memory available"
          },
          {
            "type": "DiskPressure",
            "status": "False",
            "lastHeartbeatTime": "2022-01-02T17:51:32Z",
            "lastTransitionTime": "2021-11-02T03:47:27Z",
            "reason": "KubeletHasNoDiskPressure",
            "message": "kubelet has no disk pressure"
          },
          {
            "type": "PIDPressure",
            "status": "False",
            "lastHeartbeatTime": "2022-01-02T17:51:32Z",
            "lastTransitionTime": "2021-11-02T03:47:27Z",
            "reason": "KubeletHasSufficientPID",
            "message": "kubelet has sufficient PID available"
          },
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2022-01-02T17:51:32Z",
            "lastTransitionTime": "2021-11-02T03:50:37Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status. AppArmor enabled"
          }
        ],
        "addresses": [
          {
            "type": "InternalIP",
            "address": "172.31.6.145"
          },
          {
            "type": "Hostname",
            "address": "dev-k8s-master01"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "20210922095802902033212080128428",
          "systemUUID": "07029b5b-b03b-47d6-9d89-4773054d0f80",
          "bootID": "75d7fe2d-af24-44e1-88ff-8a9d142ebdf3",
          "kernelVersion": "5.4.0-86-generic",
          "osImage": "Ubuntu 20.04.3 LTS",
          "containerRuntimeVersion": "containerd://1.5.2",
          "kubeletVersion": "v1.22.3",
          "kubeProxyVersion": "v1.22.3",
          "operatingSystem": "linux",
          "architecture": "amd64"
        },
        "images": [
          {
            "names": [
              "k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263",
              "k8s.gcr.io/etcd:3.5.1-0"
            ],
            "sizeBytes": 98888614
          },
          {
            "names": [
              "k8s.gcr.io/kube-apiserver@sha256:de881fa0e51e86be2ac97991b3b30fabf3f7310e97c112202b0734263e3f6636",
              "k8s.gcr.io/kube-apiserver:v1.21.7"
            ],
            "sizeBytes": 30458540
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver@sha256:e8f70396516adc3917f92a8b3a79473fb9a5ecacb4fdfcf7f19947a7ff6c5707",
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:latest"
            ],
            "sizeBytes": 30378204
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver@sha256:5f8324e4ea9b10faac72ef3943af033fada00f96062cc58f9ce44a1f04cea942"
            ],
            "sizeBytes": 30354961
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-controller-manager@sha256:4f399ac7d2bc13d1d62ea216dd9f6975d823867d4d9374b0f86ff9c9731c73cb",
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-controller-manager:latest"
            ],
            "sizeBytes": 30002117
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-controller-manager@sha256:6632ed734184e3b0eb0b9906ad6166b9b3f56a4720a2cb8a7765c574699d1e1a"
            ],
            "sizeBytes": 29981711
          },
          {
            "names": [
              "k8s.gcr.io/kube-controller-manager@sha256:d05d7e5c66628a5c2311836387f50756debf0c416ee99e24f111539b55988d1a",
              "k8s.gcr.io/kube-controller-manager:v1.21.7"
            ],
            "sizeBytes": 29446242
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler-estimator@sha256:577a5b371f049d3ba41fb83590d45ec1b3df2268945989b5249cfb773d18a32b",
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler-estimator:latest"
            ],
            "sizeBytes": 26734749
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler@sha256:a640ee7168ad416f0e398e90ef5f2813e9963d257ed7b9b25dc3271207b778ed"
            ],
            "sizeBytes": 26710840
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler@sha256:ea95659cb19d16fae4cda4e1b5c1e215eee5442c17c34c72a62bb68f056a7b40",
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler:latest"
            ],
            "sizeBytes": 26710046
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-webhook@sha256:998a6e390c3b24483df30f9c7bc8f06a56d7206e2e862261a6b23f1114da7e99"
            ],
            "sizeBytes": 25545092
          },
          {
            "names": [
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-webhook@sha256:96f78f773c40b32c299553f22cbb45535d1dc378dac8bc9e2700d7d73080004e",
              "swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-webhook:latest"
            ],
            "sizeBytes": 25543866
          },
          {
            "names": [
              "docker.io/library/nginx@sha256:bfe377bdeb9ff37a62b49e149ac12c67a18089699bb844ce917fe3dbb834abed",
              "docker.io/library/nginx:1.21.1-alpine"
            ],
            "sizeBytes": 9935133
          },
          {
            "names": [
              "docker.io/library/alpine@sha256:635f0aa53d99017b38d1a0aa5b2082f7812b03e3cdb299103fe77b5c8a07f1d2",
              "docker.io/library/alpine:3.14.3"
            ],
            "sizeBytes": 2826618
          },
          {
            "names": [
              "k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07",
              "k8s.gcr.io/pause:3.5"
            ],
            "sizeBytes": 301416
          }
        ]
      }
    }
  ]
}

Does this PR introduce a user-facing change?:


Signed-off-by: prodan <pengshihaoren@gmail.com>
@karmada-bot karmada-bot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 2, 2022
@karmada-bot karmada-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jan 2, 2022
@RainbowMango (Member) left a comment


/lgtm
/approve

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Jan 4, 2022
@karmada-bot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 4, 2022
@karmada-bot karmada-bot merged commit 5548f87 into karmada-io:master Jan 4, 2022
karmada-bot added a commit that referenced this pull request Jan 10, 2022
…1207-upstream-release-1.0

Automated cherry pick of #1207: Fix karmadactl init not found v1alpha1.cluster.karmada.io