Refactor KubemarkCluster into ComputeCluster #66
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
just giving a cursory review of the code, the BackingCluster name caught my eye
// in case also KubemarkCluster.spec.backingCluster is empty, or if the cluster
// uses a different infrastructure cluster kind (e.g. DockerCluster), pods
// running kubemark will be created in the management cluster, in the same
// namespace as the cluster resource.
// +optional
BackingCluster *BackingClusterSpec `json:"backingCluster,omitempty"`
BackingCluster sounds a little ambiguous to me, maybe we could call it KubemarkPodHostCluster?
@elmiko what about ComputeCluster?
looking at the old value that this new approach will deprecate, we named it KubemarkHollowPodClusterSecretRef, what if we call this new field KubemarkPodCluster?
i just don't want users to get confused when they see this, i feel like we have a couple references to different "clusters" and the topology can become confusing.
I have the same concern, but when I looked at the old name my reaction was that it exposes a lot of internals the user should not really care about (the fact that we run a binary called kubemark, the fact that this thing runs a Pod, etc.).
It is also limiting, because we are not creating only pods but also secrets, and potentially more in the future; and (nit) having a field in a Kubemark* object that repeats "kubemark" kind of stutters.
This is why I would prefer something that is easier to explain:
- this is where the backend of my machines runs (BackingCluster)
- this is the thing that provides computing power to my machines (ComputeCluster)
- (???)
But I leave the final call to you, just tell me what you pick
i like your reasoning, i'm happy to go forward with ComputeCluster. thanks for explaining =)
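for reference, here's a rough sketch of how the renamed field could look, assuming BackingClusterSpec gets renamed along with it; the ComputeClusterSpec shape and its SecretRef field below are just illustrative assumptions, not the actual provider code:

// hypothetical sketch, not the actual provider types; names are assumptions
package v1alpha4

// ComputeClusterSpec identifies the cluster that provides computing power
// to the machines, i.e. where kubemark pods, secrets, and potentially other
// resources are created.
type ComputeClusterSpec struct {
	// SecretRef names a secret (assumed here to hold a kubeconfig) for
	// the compute cluster.
	SecretRef string `json:"secretRef,omitempty"`
}

// KubemarkClusterSpec (excerpt) with the field renamed from BackingCluster.
type KubemarkClusterSpec struct {
	// ComputeCluster is the cluster where kubemark pods and related
	// resources run. If empty, they are created in the management
	// cluster, in the same namespace as the cluster resource.
	// +optional
	ComputeCluster *ComputeClusterSpec `json:"computeCluster,omitempty"`
}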
force-pushed from f9b4e8e to b758b2c
/hold cancel
i was able to test this using a kind cluster and the docker control plane method of deployment. when i used the ComputeCluster field i could see kubemark pods spawning on the target cluster.
i did hit one snag that i think we should document before merging, i left a comment inline
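in case it helps reviewers, here's a minimal sketch of the fallback behavior i tested; this is my own illustration using controller-runtime, assuming the kubeconfig lives under a "kubeconfig" key in the referenced secret, not the actual controller code:

// minimal sketch (assumptions, not the actual controller code) of the
// fallback described in the field comment: use the compute cluster's
// kubeconfig when set, otherwise fall back to the management cluster.
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// computeClusterClient returns a client for the cluster where kubemark pods
// should run. mgmt is the management cluster client; secretName identifies a
// kubeconfig Secret referenced by spec.computeCluster, or "" when unset.
func computeClusterClient(ctx context.Context, mgmt client.Client, namespace, secretName string) (client.Client, error) {
	if secretName == "" {
		// no compute cluster configured: create pods in the management cluster
		return mgmt, nil
	}
	var secret corev1.Secret
	if err := mgmt.Get(ctx, types.NamespacedName{Namespace: namespace, Name: secretName}, &secret); err != nil {
		return nil, err
	}
	cfg, err := clientcmd.RESTConfigFromKubeConfig(secret.Data["kubeconfig"])
	if err != nil {
		return nil, err
	}
	return client.New(cfg, client.Options{})
}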
force-pushed from b758b2c to c2f5cb3
@fabriziopandini: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
the upstream work in cluster-api is still ongoing, i'm happy to keep this pull open. /remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What this PR does / why we need it:
Refactor the current KubemarkCluster into a compute cluster; this is required for two reasons:
Which issue this PR fixes:
ref: #63
Special notes for your reviewer:
Release notes: