kubectl-karmada init failed but return success #1754

Closed
wuyingjun-lucky opened this issue May 9, 2022 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@wuyingjun-lucky
Member

What happened:
I used kubectl-karmada init to deploy karmada-system. The control plane failed to deploy, but the console printed success messages.

I found that the source just uses a warning to log the result.
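For context, here is a minimal sketch of the pattern being described (hypothetical names, not the actual Karmada source). The wait failure is demoted to a warning and swallowed, so the command still reaches the success message; returning the error instead would let init abort with a non-zero exit code:

```go
package main

import (
	"fmt"

	"k8s.io/klog/v2"
)

// Buggy pattern: a failed readiness wait is only logged as a warning,
// so the caller keeps going and eventually prints the success message.
func waitForComponent(name string, waitFn func() error) {
	if err := waitFn(); err != nil {
		klog.Warningf("component %s is not ready: %v", name, err)
	}
}

// Possible fix: surface the error so the caller can stop and exit non-zero.
func waitForComponentFixed(name string, waitFn func() error) error {
	if err := waitFn(); err != nil {
		return fmt.Errorf("component %s is not ready: %w", name, err)
	}
	return nil
}

func main() {
	err := waitForComponentFixed("karmada-apiserver", func() error {
		return fmt.Errorf("pods not ready")
	})
	if err != nil {
		klog.Exitf("init failed: %v", err) // exit 1 instead of claiming success
	}
	fmt.Println("Karmada is installed successfully.")
}
```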

@RainbowMango @lonelyCZ can you help me check this bug?
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Karmada version:
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:
@wuyingjun-lucky added the kind/bug label on May 9, 2022
@wuyingjun-lucky changed the title from "kubectl-karmada init" to "kubectl-karmada init failed but return success" on May 9, 2022
@lonelyCZ
Member

lonelyCZ commented May 9, 2022

Yes, I just found this when I tested --context in #1748.

Then I found that I had set an image name by mistake: the pod didn't run, but init still printed success information.

I found that the source just uses a warning to log the result.

Yes, I guess this is the reason.

But I also found that if the deployment fails midway, we have to clean up the environment manually and deploy again, which is a bit of a hassle. Should we think about the atomicity of the operation?
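One way to get that atomicity, sketched below with illustrative names (not a concrete proposal against the current codebase): record a rollback for each step as it completes, and run the rollbacks in reverse order when a later step fails, so a half-finished install never needs manual cleanup:

```go
package main

import "fmt"

// step is one unit of init work paired with how to undo it.
type step struct {
	name     string
	apply    func() error
	rollback func()
}

// runInit applies steps in order; on failure it unwinds every step that
// already completed, in reverse order.
func runInit(steps []step) (err error) {
	var done []step
	defer func() {
		if err != nil {
			for i := len(done) - 1; i >= 0; i-- {
				done[i].rollback()
			}
		}
	}()
	for _, s := range steps {
		if err = s.apply(); err != nil {
			return fmt.Errorf("step %q failed: %w", s.name, err)
		}
		done = append(done, s)
	}
	return nil
}

func main() {
	err := runInit([]step{
		{"create namespace", func() error { return nil }, func() { fmt.Println("rollback: delete namespace") }},
		{"deploy etcd", func() error { return fmt.Errorf("image pull failed") }, func() {}},
	})
	fmt.Println("result:", err)
}
```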

@wuyingjun-lucky
Member Author

I agree with you.

@chaunceyjiang
Member

But I also found that if the deployment fails midway, we have to clean up the environment manually and deploy again, which is a bit of a hassle. Should we think about the atomicity of the operation?

Maybe we can refer to kubeadm reset.
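For illustration, a kubeadm-reset-style cleanup could start by deleting the namespace the installer created. A minimal client-go sketch, assuming the control plane lives in karmada-system (resetKarmada is a hypothetical helper, not an existing karmadactl command):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// resetKarmada deletes the karmada-system namespace, which removes the
// deployments, services and secrets created there by init in one shot.
// Cluster-scoped leftovers (e.g. RBAC objects) would need separate handling.
func resetKarmada(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return client.CoreV1().Namespaces().Delete(
		context.TODO(), "karmada-system", metav1.DeleteOptions{})
}

func main() {
	if err := resetKarmada(clientcmd.RecommendedHomeFile); err != nil {
		log.Fatal(err)
	}
}
```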

@lonelyCZ
Member

lonelyCZ commented May 9, 2022

Right, I expect #1337 to address this.

@wuyingjun-lucky
Member Author

Right, I expect #1337 to address this.

You mean we can resolve this question once #1337 is done?

@RainbowMango
Member

/close
I guess this issue has been solved.

@karmada-bot
Collaborator

@RainbowMango: Closing this issue.

In response to this:

/close
I guess this issue has been solved.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
