
refactor cluster controller #3380

Merged
1 commit merged into kubesphere:master on Feb 25, 2021

Conversation

swiftslee
Contributor

Signed-off-by: yuswift <yuswiftli@yunify.com>

What type of PR is this?
/kind design
What this PR does / why we need it:
Reduce the complexity between the tower server and the cluster-controller. Remove the port allocation, proxy creation, and token generation steps, and add a cluster ready detection step.
Which issue(s) this PR fixes:
Fixes #3234

@ks-ci-bot added the kind/design, dco-signoff: yes, and size/L labels on Feb 23, 2021
@swiftslee
Contributor Author

/cc @zryfish

@codecov

codecov bot commented Feb 23, 2021

Codecov Report

Merging #3380 (71988d9) into master (5972c4b) will increase coverage by 0.02%.
The diff coverage is 29.60%.


@@            Coverage Diff             @@
##           master    #3380      +/-   ##
==========================================
+ Coverage   11.87%   11.89%   +0.02%     
==========================================
  Files         226      226              
  Lines       42658    42605      -53     
==========================================
+ Hits         5065     5068       +3     
+ Misses      36809    36757      -52     
+ Partials      784      780       -4     
Flag Coverage Δ
unittests 11.89% <29.60%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
pkg/controller/cluster/cluster_controller.go 0.00% <0.00%> (ø)
pkg/controller/network/ippool/ippool_controller.go 55.44% <0.00%> (-0.29%) ⬇️
pkg/models/devops/devops.go 12.24% <0.00%> (ø)
pkg/simple/client/devops/jenkins/pipeline.go 3.14% <0.00%> (ø)
pkg/simple/client/devops/jenkins/pure_request.go 0.00% <0.00%> (ø)
...kg/simple/client/network/ippool/calico/provider.go 7.14% <0.00%> (-0.09%) ⬇️
...er/devopscredential/devopscredential_controller.go 33.10% <44.00%> (+0.03%) ⬆️
pkg/apiserver/request/requestinfo.go 55.30% <100.00%> (+1.75%) ⬆️
pkg/controller/pipeline/pipeline_controller.go 42.65% <100.00%> (-3.75%) ⬇️
pkg/server/params/params.go 78.72% <100.00%> (+26.09%) ⬆️
... and 9 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 5972c4b...194d054.

@@ -348,6 +345,80 @@ func (c *clusterController) reconcileHostCluster() error {
return err
}

func (c *clusterController) judgeIfClusterIsReady() error {
Member

How about changing the name to probeClusters?

Contributor Author

Agreed.

klog.Error(err)
continue
}
config.Timeout = 10 * time.Second
Member

What if there are lots of clusters, say 50, and each cluster takes 9s to finish probing? That would be 450s, which is longer than the resyncPeriod.

Contributor Author
@swiftslee Feb 23, 2021

10 seconds does seem too long, but the case you are describing is very rare. It's unlikely that every cluster connection takes 9s; in most cases a connection takes a few milliseconds. How about changing the timeout to 3s?

Member

If there are network issues on the node where the ks-controller-manager pod resides, it's possible.

Member

What I did before was put the cluster back into the work queue every resyncPeriod and check its readiness in the main sync loop.

Contributor Author
@swiftslee Feb 23, 2021

I didn't see the cluster being put back into the work queue every resyncPeriod, but I did see the readiness check in the main sync loop. I don't think we need to requeue the cluster every resyncPeriod manually; the cluster informer does that automatically. The reason I check cluster readiness separately is to check the readiness of every cluster, not only the proxy connection. What you did before only checks whether a proxied cluster has an available agent status and then updates the cluster status to ready or not. Using the kubeconfig is more reliable, I think (e.g. it catches the case where a direct connection to the kube-apiserver is unreachable).
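
For illustration only: a minimal sketch of the kind of kubeconfig-based readiness check discussed here, assuming client-go. The function name and the source of the kubeconfig bytes are assumptions for the example, not the actual code merged in this PR.

```go
// Illustrative sketch only, not the code in this PR. It builds a client from a
// cluster's kubeconfig and treats a successful /version call as "ready", with a
// short per-request timeout.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clusterIsReady is a hypothetical helper; kubeconfigBytes would typically come
// from the Cluster object's connection kubeconfig.
func clusterIsReady(kubeconfigBytes []byte) error {
	config, err := clientcmd.RESTConfigFromKubeConfig(kubeconfigBytes)
	if err != nil {
		return fmt.Errorf("build rest config: %w", err)
	}
	// Keep the probe short so that looping over many clusters stays well within
	// the controller's resync period.
	config.Timeout = 3 * time.Second

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return fmt.Errorf("build clientset: %w", err)
	}
	// Hitting /version is a cheap way to confirm the kube-apiserver answers.
	if _, err := clientset.Discovery().ServerVersion(); err != nil {
		return fmt.Errorf("kube-apiserver unreachable: %w", err)
	}
	return nil
}
```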

Contributor Author
@swiftslee Feb 24, 2021

If the check is put in the main sync loop, it will run every time a cluster is created/updated/deleted, which may be too frequent. On the other hand, the check may take too long and slow down the sync loop. What is your suggestion?

Member

That's true, but we need to update cluster.status.configz every resyncPeriod too. So I suggest making config.Timeout shorter and probing in the main loop.

Contributor Author
@swiftslee Feb 24, 2021

Currently we update cluster.status.configz every resyncPeriod at the end of the main loop; that part hasn't changed.

Member

OK, better to make config.Timeout shorter.

Contributor Author

config.Timeout has been set to 3s by default. We can merge this PR now.
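
For illustration, continuing the sketch above: a hedged outline of a sequential probeClusters pass (the name suggested in this thread) over every managed cluster. The clusterIsReady helper and the kubeconfig map are assumptions for the sake of the example.

```go
// Illustrative sketch only, reusing the hypothetical clusterIsReady helper from
// the earlier example.
package main

import "log"

// probeClusters sequentially probes every managed cluster; kubeconfigs maps a
// cluster name to its kubeconfig bytes. With config.Timeout at 3s, even 50
// unreachable clusters bound one pass at roughly 50 * 3s = 150s, which is why
// the timeout needs to stay small relative to the resync period.
func probeClusters(kubeconfigs map[string][]byte) map[string]bool {
	ready := make(map[string]bool, len(kubeconfigs))
	for name, kubeconfig := range kubeconfigs {
		err := clusterIsReady(kubeconfig) // hypothetical helper, sketched earlier
		if err != nil {
			log.Printf("cluster %s is not ready: %v", name, err)
		}
		ready[name] = err == nil
	}
	return ready
}
```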

@@ -79,5 +79,5 @@ func (o *Options) AddFlags(fs *pflag.FlagSet, s *Options) {
"This field is used when generating deployment yaml for agent.")

fs.DurationVar(&o.ClusterControllerResyncSecond, "cluster-controller-resync-second", s.ClusterControllerResyncSecond,
"Cluster controller resync second to sync cluster resource.")
"Cluster controller resync second to sync cluster resource. e.g. 30s 60s 120s...")
Member

Better to suggest 2m, 5m, 10m; a small resync period increases load.

Contributor Author

Agreed. I will update the comment.
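
For reference, a hedged example of how a pflag duration flag of this kind accepts values such as 2m, 5m, or 10m. The flag name matches the diff above; the default value and everything else here is illustrative.

```go
// Illustrative only: shows that a pflag Duration flag parses values like "2m",
// "5m", or "10m" from the command line.
package main

import (
	"fmt"
	"time"

	"github.com/spf13/pflag"
)

func main() {
	var resync time.Duration
	fs := pflag.NewFlagSet("example", pflag.ExitOnError)
	// Hypothetical default of 2m, in line with the reviewer's suggestion.
	fs.DurationVar(&resync, "cluster-controller-resync-second", 2*time.Minute,
		"Cluster controller resync period, e.g. 2m, 5m, 10m.")

	// Simulating: --cluster-controller-resync-second=5m
	_ = fs.Parse([]string{"--cluster-controller-resync-second=5m"})
	fmt.Println(resync) // prints 5m0s
}
```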

Signed-off-by: yuswift <yuswiftli@yunify.com>
@zryfish
Member

zryfish commented Feb 25, 2021

/lgtm
/approve

@ks-ci-bot added the lgtm label on Feb 25, 2021
@ks-ci-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: yuswift, zryfish

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ks-ci-bot added the approved label on Feb 25, 2021
@ks-ci-bot merged commit e48306d into kubesphere:master on Feb 25, 2021
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
dco-signoff: yes
kind/design: Categorizes issue or PR as related to design.
lgtm: Indicates that a PR is ready to be merged.
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.

Development

Successfully merging this pull request may close these issues:

The tower server shouldn't update the cluster.status

3 participants