
fix: after cluster recovery kubeproxy pod pending #235

Merged
2 commits merged into kubeclipper:master on Sep 26, 2022

Conversation

zhuzhenfan (Contributor)

Signed-off-by: zhuzhenfan <981189503@qq.com>

What type of PR is this?

/kind bug

What this PR does / why we need it:

After cluster recovery, the kube-proxy pod stays Pending.

Which issue(s) this PR fixes:

Fixes #234

Special notes for reviewers:


Does this PR introduce a user-facing change?

None

Additional documentation, usage docs, etc.:


@x893675 @Metrora

Signed-off-by: zhuzhenfan <981189503@qq.com>
kubeclipper-bot added the release-note-none, kind/bug, dco-signoff: yes, and size/L labels on Sep 26, 2022

codecov-commenter commented Sep 26, 2022

Codecov Report

Merging #235 (fcccc3f) into master (013c741) will increase coverage by 0.05%.
The diff coverage is 21.73%.


@@            Coverage Diff             @@
##           master     #235      +/-   ##
==========================================
+ Coverage   12.16%   12.21%   +0.05%     
==========================================
  Files         105      105              
  Lines       16541    16516      -25     
==========================================
+ Hits         2012     2018       +6     
+ Misses      14290    14260      -30     
+ Partials      239      238       -1     
Impacted Files                       Coverage Δ
pkg/scheme/core/v1/k8s/cluster.go    0.12% <0.00%> (+<0.01%) ⬆️
pkg/apis/core/v1/utils.go            55.97% <100.00%> (+0.92%) ⬆️
pkg/apis/core/v1/handler.go          0.33% <0.00%> (-0.01%) ⬇️

@@ -186,9 +186,14 @@ func (h *handler) parseRecoverySteps(c *v1.Cluster, b *v1.Backup, restoreDir str
 	names := make([]string, 0)
 	ips := make([]string, 0)
-	var masters []component.Node
+	var masters, workers []component.Node
 	for _, node := range nodeList.Items {
+		if node.Labels[common.LabelNodeRole] != "master" {

Inline review comment from a Collaborator on the line above:

Use the NodeRoleMaster constant instead of the string literal "master", for consistency.
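
For illustration only, here is a minimal, self-contained Go sketch of the pattern the reviewer is asking for: splitting nodes into masters and workers by their role label and comparing against a named constant rather than a bare "master" string. The identifiers below (LabelNodeRole, NodeRoleMaster, Node, splitByRole, and the label key value) are assumptions standing in for the real kubeclipper code, not the project's actual implementation.

// Minimal sketch, NOT the kubeclipper implementation: the identifiers below
// are placeholders assumed for illustration.
package main

import "fmt"

const (
	LabelNodeRole  = "kubeclipper.io/nodeRole" // assumed label key
	NodeRoleMaster = "master"                  // named constant instead of a bare "master" literal
)

// Node is a stand-in for component.Node.
type Node struct {
	Name   string
	Labels map[string]string
}

// splitByRole groups nodes into masters and workers using the role label,
// comparing against NodeRoleMaster rather than repeating the string literal.
func splitByRole(nodes []Node) (masters, workers []Node) {
	for _, n := range nodes {
		if n.Labels[LabelNodeRole] == NodeRoleMaster {
			masters = append(masters, n)
		} else {
			workers = append(workers, n)
		}
	}
	return masters, workers
}

func main() {
	nodes := []Node{
		{Name: "master-0", Labels: map[string]string{LabelNodeRole: NodeRoleMaster}},
		{Name: "worker-0", Labels: map[string]string{LabelNodeRole: "worker"}},
	}
	masters, workers := splitByRole(nodes)
	fmt.Printf("masters=%d workers=%d\n", len(masters), len(workers)) // masters=1 workers=1
}

Keeping the role string behind a single named constant means a future change to the label value only has to be made in one place, which is the consistency point the review comment makes.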

Signed-off-by: zhuzhenfan <981189503@qq.com>
x893675 (Collaborator) commented Sep 26, 2022

/lgtm
/approve

kubeclipper-bot added the lgtm label on Sep 26, 2022
kubeclipper-bot (Collaborator)

LGTM label has been added.

Git tree hash: 6165e2a42f397dc42c13c191b80f705b17904dce

kubeclipper-bot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: x893675, zhuzhenfan

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

kubeclipper-bot added the approved label on Sep 26, 2022
kubeclipper-bot merged commit ffe3d92 into kubeclipper:master on Sep 26, 2022
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
dco-signoff: yes
kind/bug: Categorizes issue or PR as related to a bug.
lgtm: Indicates that a PR is ready to be merged.
release-note-none
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

after cluster recovery kubeproxy pod pending
4 participants