
The kubeconfig may be overwritten by the create_cluster function in hack/scripts #1545

Closed
Charlie17Li opened this issue Mar 25, 2022 · 16 comments
Labels
kind/question Indicates an issue that is a support question.

Comments

@Charlie17Li
Contributor

Charlie17Li commented Mar 25, 2022

Please provide an in-depth description of the question you have:

When I follow hack/local-up-karmada.sh to create the clusters:

util::create_cluster "${MEMBER_CLUSTER_1_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member1.yaml
util::create_cluster "${MEMBER_CLUSTER_2_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member2.yaml
util::create_cluster "${PULL_MODE_CLUSTER_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"

and the create-cluster function uses kind:

karmada/hack/util.sh, lines 349 to 361 at a084c8d:

function util::create_cluster() {
  local cluster_name=${1}
  local kubeconfig=${2}
  local kind_image=${3}
  local log_path=${4}
  local cluster_config=${5:-}
  mkdir -p ${log_path}
  rm -rf "${log_path}/${cluster_name}.log"
  rm -f "${kubeconfig}"
  nohup kind delete cluster --name="${cluster_name}" >> "${log_path}"/"${cluster_name}".log 2>&1 && kind create cluster --name "${cluster_name}" --kubeconfig="${kubeconfig}" --image="${kind_image}" --config="${cluster_config}" >> "${log_path}"/"${cluster_name}".log 2>&1 &
  echo "Creating cluster ${cluster_name}"
}

so the kubeconfig file is deleted several times.
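Reduced to its essentials, the race looks like this (a minimal sketch with stand-in names, not code from the repo): each call deletes the shared file in the foreground, then writes it from a background job, mirroring the rm -f followed by nohup kind create ... &.

#!/usr/bin/env bash
# Sketch of the race: the rm -f runs immediately, while the write
# (standing in for the slow "kind create cluster") happens later in a
# background job, so a later rm -f can delete an earlier job's output.
shared_config="members.config"

create() {
  rm -f "${shared_config}"                    # foreground: runs right away
  {
    sleep "$((RANDOM % 3))"                   # stand-in for kind create cluster
    echo "cluster $1" >> "${shared_config}"   # background: writes the config later
  } &
}

create member1
create member2
create member3
wait
cat "${shared_config}"   # depending on timing, entries may be missing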

What do you think about this question?:

If member1 is already created when member3 is ready to be created, the configuration for member1 is deleted.

Then this check will fail:

util::wait_for_condition 'ok' "kubectl --kubeconfig ${kubeconfig_path} --context ${context_name} get --raw=/healthz &> /dev/null" 300

I think there should be a function to initialize the environment; if necessary, I can try to edit the script.
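For example, a hypothetical helper along these lines (the name util::init_kubeconfig is illustrative, not from the repo) could clear each shared kubeconfig once up front, so util::create_cluster could drop its per-call rm -f:

# Hypothetical helper, not in hack/util.sh: delete the shared kubeconfig
# once before any cluster is created, instead of once per cluster.
function util::init_kubeconfig() {
  local kubeconfig=${1}
  rm -f "${kubeconfig}"
}

# Then, in local-up-karmada.sh, before the three create_cluster calls:
# util::init_kubeconfig "${MEMBER_CLUSTER_KUBECONFIG}"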

Environment:

  • Karmada version:
  • Kubernetes version:
  • Others:
@Charlie17Li added the kind/question label on Mar 25, 2022
@RainbowMango
Member

If member1 is already created when member3 is ready to be created, the configuration for member1 is deleted.

My question is: aren't member1 and member3 using different kubeconfigs? Sorry, I didn't get your point.

@Charlie17Li
Contributor Author

If member1 is already created when member3 is ready to be created, the configuration for member1 is deleted.

My question is: aren't member1 and member3 using different kubeconfigs? Sorry, I didn't get your point.

But the same environment variable, ${MEMBER_CLUSTER_KUBECONFIG}, is shared in local-up-karmada.sh when creating member1, member2, and member3.

@RainbowMango
Copy link
Member

Yeah, I get what you mean.

Given that MEMBER_CLUSTER_KUBECONFIG defaults to ${HOME}/.kube/members.config, we create the clusters in order:

  1. Creating member1 --> creates ${HOME}/.kube/members.config
  2. Creating member2 --> removes and recreates ${HOME}/.kube/members.config
  3. Creating member3 --> removes and recreates ${HOME}/.kube/members.config

It's weird that I can still see all 3 clusters' configs in the final ${HOME}/.kube/members.config. Do you know why?

@Charlie17Li
Copy link
Contributor Author

In the normal situation:

[image]

But I don't know if the following situation exists:

[image]

@RainbowMango
Member

Yeah, you are right!!!

Any idea about how to fix it?

@RainbowMango
Member

Do you think this works?

diff --git a/hack/util.sh b/hack/util.sh
index a3e29f3c..95f752c2 100755
--- a/hack/util.sh
+++ b/hack/util.sh
@@ -356,6 +356,9 @@ function util::create_cluster() {
   mkdir -p ${log_path}
   rm -rf "${log_path}/${cluster_name}.log"
   rm -f "${kubeconfig}"
+  kubectl config --kubeconfig="${kubeconfig}" delete-cluster "${cluster_name}"
+  kubectl config --kubeconfig="${kubeconfig}" delete-context "${cluster_name}"
+  kubectl config --kubeconfig="${kubeconfig}" delete-context "kind-${cluster_name}"
   nohup kind delete cluster --name="${cluster_name}" >> "${log_path}"/"${cluster_name}".log 2>&1 && kind create cluster --name "${cluster_name}" --kubeconfig="${kubeconfig}" --image="${kind_image}" --config="${cluster_config}" >> "${log_path}"/"${cluster_name}".log 2>&1 &
   echo "Creating cluster ${cluster_name}"
 }
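For reference, kubectl config delete-cluster and delete-context edit entries inside the file rather than deleting the file itself. Note that kind prefixes its kubeconfig entries with kind-, so the calls would presumably need to target names like these (a sketch, not verified against this repo):

# Remove only this cluster's entries from the shared kubeconfig, leaving
# the other members' entries untouched (kind names both the cluster and
# the context "kind-<name>").
kubectl config delete-cluster "kind-${cluster_name}" --kubeconfig="${kubeconfig}"
kubectl config delete-context "kind-${cluster_name}" --kubeconfig="${kubeconfig}"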

Or

--- a/hack/util.sh
+++ b/hack/util.sh
@@ -355,8 +355,7 @@ function util::create_cluster() {

   mkdir -p ${log_path}
   rm -rf "${log_path}/${cluster_name}.log"
-  rm -f "${kubeconfig}"
-  nohup kind delete cluster --name="${cluster_name}" >> "${log_path}"/"${cluster_name}".log 2>&1 && kind create cluster --name "${cluster_name}" --kubeconfig="${kubeconfig}" --image="${kind_image}" --config="${cluster_config}" >> "${log_path}"/"${cluster_name}".log 2>&1 &
+  nohup kind delete cluster --name="${cluster_name}" --kubeconfig="$kubeconfig" >> "${log_path}"/"${cluster_name}".log 2>&1 && kind create cluster --name "${cluster_name}" --kubeconfig="${kubeconfig}" --image="${kind_image}" --config="${cluster_config}" >> "${log_path}"/"${cluster_name}".log 2>&1 &
   echo "Creating cluster ${cluster_name}"
 }
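The point of the second variant is that kind delete cluster --kubeconfig removes only that cluster's entries from the given file, instead of the whole file being wiped by rm -f. A quick way to check the result (using the default path mentioned above):

# After all three clusters come up, every member's context should
# still be listed in the shared kubeconfig.
kubectl config get-contexts --kubeconfig="${HOME}/.kube/members.config"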

@Charlie17Li
Contributor Author

Charlie17Li commented Mar 26, 2022

I think both work; perhaps the second is more elegant, since the OS can lock the file descriptor and kind can remove just the specified cluster's entries.

@RainbowMango
Member

Yes, I prefer the second one too. Would you like to send a PR for this? @Charlie17Li

@GitHubxsy
Contributor

@RainbowMango Does this slow down local-up-karmada.sh?

@RainbowMango
Member

@RainbowMango Does this slow down local-up-karmada.sh?

I don't think so.

@XiShanYongYe-Chang
Member

/cc @chaosi-zju
Would you like to take a look at this issue? If you feel it is unnecessary, you can close it.

@chaosi-zju
Member

@XiShanYongYe-Chang Hello, this issue is outdated; the problem has already been fixed by my earlier PR #3682.

Since we now use a different kubeconfig path when creating each cluster, there is no longer a conflict:

util::create_cluster "${MEMBER_CLUSTER_1_NAME}" "${MEMBER_CLUSTER_1_TMP_CONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member1.yaml
util::create_cluster "${MEMBER_CLUSTER_2_NAME}" "${MEMBER_CLUSTER_2_TMP_CONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member2.yaml
util::create_cluster "${PULL_MODE_CLUSTER_NAME}" "${PULL_MODE_CLUSTER_TMP_CONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"
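(If a single merged kubeconfig is still wanted afterwards, the standard kubectl pattern for combining per-cluster files is shown below; the file names are illustrative, not taken from the script.)

# kubectl reads every path listed in KUBECONFIG, and --flatten inlines
# the credentials so the result is one self-contained file.
KUBECONFIG="member1.config:member2.config:member3.config" \
  kubectl config view --flatten > "${HOME}/.kube/members.config"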

@chaosi-zju
Member

/close

@karmada-bot
Collaborator

@chaosi-zju: You can't close an active issue/PR unless you authored it or you are a collaborator.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@XiShanYongYe-Chang
Member

/close

@karmada-bot
Collaborator

@XiShanYongYe-Chang: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
