✨ Add clusterctl options to show templates and cluster resource sets #5762
Conversation
Hi @Jont828. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Actually, we can hold this for now; I'm still trying to see if I can add some other features and also clean up some of the code.

/hold
/cc
Force-pushed from 7c685e1 to eefd1f2
Given this is still WIP, feel free to ignore comments
Force-pushed from 8a9fa99 to dff76ce
Force-pushed from c08f88d to b43f4ee
Changed the title: clusterctl describe → [WIP] clusterctl describe
A couple of comments, mostly for:
- better understanding the use case you are trying to address
- avoiding mixing the semantics of flags / flags with semantics that are too generic
- avoiding loading too many objects at T0 (we can consider incremental loading if it makes sense)
```diff
@@ -93,37 +106,110 @@ func Discovery(ctx context.Context, c client.Client, namespace, name string, opt
 	if visible {
 		if machineInfra, err := external.Get(ctx, c, &m.Spec.InfrastructureRef, cluster.Namespace); err == nil {
-			tree.Add(m, machineInfra, ObjectMetaName("MachineInfrastructure"), NoEcho(true))
+			tree.Add(m, machineInfra, ObjectMetaName("MachineInfrastructure"), NoEcho(!options.Echo))
```
@fabriziopandini Do you think we'd want to add a placeholder unstructured object in this case as well?
Might be in a follow-up PR; let's keep this one scoped (it is already adding a bunch of flags in the same changeset).
```diff
 		}
 		tree.Add(workers, mp, addOpts...)

+	if options.ShowTemplates {
```
We could probably add a "ShowMachinePools" flag similar to "ShowMachineSets" and replicate the behavior once MachinePoolMachines are added.
Actually, we might not need that at all, since we have `--echo` and `--grouping=false` already.
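To make the `NoEcho(!options.Echo)` pattern discussed in the diff above concrete, here is a minimal, self-contained sketch of the functional-option style it relies on. The `node`, `addOption`, and `options` names are illustrative stand-ins, not the real clusterctl types:

```go
package main

import "fmt"

// node is a minimal stand-in for an object-tree entry; the real tree
// lives in clusterctl's tree package.
type node struct {
	name   string
	noEcho bool // when true, the node is hidden as a mere echo of its parent
}

// addOption mirrors the functional-option style of tree.Add (a sketch,
// not the actual API).
type addOption func(*node)

// NoEcho marks a node as an "echo" of its parent that should be hidden
// unless the user explicitly asked to see it.
func NoEcho(hide bool) addOption {
	return func(n *node) { n.noEcho = hide }
}

// options is assumed to map to CLI flags such as --echo.
type options struct {
	Echo bool
}

func add(name string, opts ...addOption) *node {
	n := &node{name: name}
	for _, o := range opts {
		o(n)
	}
	return n
}

func main() {
	// With --echo set, NoEcho receives false, so the node stays visible.
	o := options{Echo: true}
	n := add("MachineInfrastructure", NoEcho(!o.Echo))
	fmt.Println(n.noEcho)
}
```

The key point is that `NoEcho(!options.Echo)` turns a hard-coded `NoEcho(true)` into a user-controlled toggle without changing the call shape.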
If I get you right, the assumptions you can make are part of the contract, e.g. for spec fields: https://cluster-api.sigs.k8s.io/developer/architecture/controllers/control-plane.html#required-spec-fields-for-implementations-using-machines. For example, a control plane on my dev cluster has this spec:

```yaml
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
        - localhost
        - 127.0.0.1
        - 0.0.0.0
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
      dns: {}
      etcd:
        local: {}
      imageRepository: k8s.gcr.io
      networking: {}
      scheduler: {}
    format: cloud-config
    initConfiguration:
      localAPIEndpoint: {}
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      discovery: {}
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: cloister-control-plane-lmbll
      namespace: default
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: cloister
        topology.cluster.x-k8s.io/owned: ""
  replicas: 3
  rolloutStrategy:
    rollingUpdate:
      maxSurge: 1
    type: RollingUpdate
  version: v1.21.2
```
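The contract referenced above guarantees that `spec.machineTemplate.infrastructureRef` is present on any conformant control plane. A hedged sketch of walking a decoded spec down to that ref with plain map navigation (the real code would use the unstructured helpers from `k8s.io/apimachinery`; the function name here is illustrative):

```go
package main

import "fmt"

// infraRefFromSpec walks a decoded control-plane spec (e.g. unmarshalled
// from YAML/JSON into map form) to the machineTemplate.infrastructureRef
// required by the control-plane contract. Returns false if either level
// is missing or has an unexpected type.
func infraRefFromSpec(spec map[string]interface{}) (map[string]interface{}, bool) {
	mt, ok := spec["machineTemplate"].(map[string]interface{})
	if !ok {
		return nil, false
	}
	ref, ok := mt["infrastructureRef"].(map[string]interface{})
	return ref, ok
}

func main() {
	// Mirrors the relevant slice of the KubeadmControlPlane spec shown above.
	spec := map[string]interface{}{
		"machineTemplate": map[string]interface{}{
			"infrastructureRef": map[string]interface{}{
				"apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
				"kind":       "DockerMachineTemplate",
				"name":       "cloister-control-plane-lmbll",
			},
		},
		"replicas": 3,
	}
	if ref, ok := infraRefFromSpec(spec); ok {
		fmt.Println(ref["kind"]) // DockerMachineTemplate
	}
}
```

Because the field path is contractual rather than type-specific, the same walk works for any provider's control plane, which is what lets `clusterctl describe` stay provider-agnostic.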
@killianmuldoon Yeah, I'm seeing the …
The

```go
// GenericMachineTemplate contains the generic machine template.
type GenericMachineTemplate struct {
	InfrastructureRef corev1.ObjectReference `json:"infrastructureRef"`
}

// GenericControlPlaneSpec contains a generic control plane spec.
type GenericControlPlaneSpec struct {
	MachineTemplate GenericMachineTemplate `json:"machineTemplate"`
}
```

should be fine. I believe it will not affect any of the other tests (I have tested this locally), as all other tests that use … . PS: You will have to run … .
@ykakarap Sounds good, I wrote some tests, including the edge case you found with having two machine deployments. I tagged Fabrizio in a minor comment about the test cases, but otherwise I think we should be good for a final review pass.
Force-pushed from e463824 to c2af91e
I have tested this locally and it works great!
Please fix lint errors and remove the hold; IMO we are ready to merge!

/lgtm
Tested locally and everything seems to be working as expected.
This is great work @Jont828 🚀
Thanks for doing this!
Just small nits (non-blocking).
/lgtm
Just pushed some changes to fix the nits. Huge shoutout to @fabriziopandini for tons of feedback and design suggestions!
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: fabriziopandini. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel`.
What this PR does / why we need it: This PR adds additional options to the Discovery client for `clusterctl describe`. These changes are meant to (1) allow the command to show all resources associated with the cluster and (2) allow the client to construct a tree that can be used as an input for a separate Cluster API GUI.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):