This repository has been archived by the owner on May 22, 2020. It is now read-only.

initial implementation of machineset controller #620

Conversation

p0lyn0mial
Contributor

What this PR does / why we need it: this PR brings an initial implementation of the MachineSet controller. It watches MachineSet resources and makes sure that the current state is equal to the desired one. At the moment the current state of the cluster is calculated from the number of machines that are owned by the given MachineSet resource. This essentially means that newly created resources have their OwnerReferences field set to the given MachineSet resource.

Release note:


@kubernetes/kube-deploy-reviewers

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 21, 2018
@p0lyn0mial
Contributor Author

p0lyn0mial commented Feb 21, 2018

@rsdcastro, @mrIncompetent, @realfake PTAL

@rsdcastro

/cc @maisem
/assign @rsdcastro

@rsdcastro

/cc @roberthbailey

if desiredReplicas < currentMachineCount {
glog.V(2).Infof("deleting a machine ( spec.replicas(%d) < currentMachineCount(%d) )", desiredReplicas, currentMachineCount)
machineToDelete := machines[0].Name
err := c.machineClient.ClusterV1alpha1().Machines().Delete(machineToDelete, &metav1.DeleteOptions{})

The machine should be drained prior to it being deleted.


And you can consider doing those in parallel given that draining might take a while.

For reference: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/

Contributor

@maisem the Machine controller, not the MachineSet controller, should handle draining. Draining can be optional.

Contributor

@rsdcastro Machine resource creation happens instantly - the actual hardware provisioning can take some time, but that's not a concern of the MachineSet controller.
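For illustration, the cordon-and-evict flow from the drain documentation referenced above could look roughly like the sketch below. It assumes a 2018-era client-go (no context arguments) and the policy/v1beta1 eviction API; the function name drainNode and the choice to place it in the Machine controller are illustrative, not part of this PR. Real drain logic would also skip DaemonSet/mirror pods and wait for evictions to complete.

package machine

import (
    policyv1beta1 "k8s.io/api/policy/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// drainNode cordons the node backing a machine and evicts its pods before the
// machine is deleted. Sketch only: pod filtering and wait logic are omitted.
func drainNode(kubeClient kubernetes.Interface, nodeName string) error {
    // Cordon: mark the node unschedulable so no new pods land on it.
    node, err := kubeClient.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
    if err != nil {
        return err
    }
    node.Spec.Unschedulable = true
    if _, err := kubeClient.CoreV1().Nodes().Update(node); err != nil {
        return err
    }

    // Evict every pod on the node; the eviction API respects PodDisruptionBudgets.
    pods, err := kubeClient.CoreV1().Pods(metav1.NamespaceAll).List(metav1.ListOptions{
        FieldSelector: "spec.nodeName=" + nodeName,
    })
    if err != nil {
        return err
    }
    for i := range pods.Items {
        eviction := &policyv1beta1.Eviction{
            ObjectMeta: metav1.ObjectMeta{
                Name:      pods.Items[i].Name,
                Namespace: pods.Items[i].Namespace,
            },
        }
        if err := kubeClient.PolicyV1beta1().Evictions(eviction.Namespace).Evict(eviction); err != nil {
            return err
        }
    }
    return nil
}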


gv := clusterv1alpha1.SchemeGroupVersion
machine := &clusterv1alpha1.Machine{
ObjectMeta: metav1.ObjectMeta{

what about labels?

Contributor

Set ObjectMeta to machineSet.Spec.Template.ObjectMeta and use GenerateName instead of Name
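A possible shape for that suggestion, reusing the types and imports already present in this file; treat it as a sketch, and note it assumes the template mirrors PodTemplateSpec with ObjectMeta and Spec fields:

// createMachine builds a Machine from the MachineSet's template and lets the
// apiserver pick a unique name via GenerateName.
func (c *Controller) createMachine(machineSet *clusterv1alpha1.MachineSet) (*clusterv1alpha1.Machine, error) {
    machine := &clusterv1alpha1.Machine{
        // Copy the template metadata so labels and annotations carry over.
        ObjectMeta: machineSet.Spec.Template.ObjectMeta,
        Spec:       machineSet.Spec.Template.Spec,
    }
    // Name must be empty for GenerateName to take effect.
    machine.ObjectMeta.Name = ""
    machine.ObjectMeta.GenerateName = machineSet.Name + "-"
    return machine, nil
}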

return machine, nil
}

// getMachines returns a list of machines that match on machineSet.UID

shouldn't this match on labels?

Contributor Author

I'm not sure whether we would like to make use of labels or not. For example, the ReplicaSet controller uses labels to adopt orphaned pods (the ones that have been added manually). Nevertheless, I would consider this an advanced topic. Do you think we could open an issue so that we can discuss/work on it in the future - unless we consider this something that blocks this PR?

Contributor

Don't use OwnerReferences - you may want to take a machine out of the MachineSet, do something with it, or even delete the parent MachineSet without any impact on that machine. The Selector field in the MachineSet is there exactly for this. Plus, querying for Machines that are part of your MachineSet will be easier.
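A sketch of the selector-based lookup suggested here, reusing the controller's machine lister; it assumes Spec.Selector is a *metav1.LabelSelector, as in ReplicaSetSpec:

// getMachines lists the Machines whose labels match the MachineSet's selector,
// instead of matching on OwnerReferences.
func (c *Controller) getMachines(machineSet *clusterv1alpha1.MachineSet) ([]*clusterv1alpha1.Machine, error) {
    selector, err := metav1.LabelSelectorAsSelector(machineSet.Spec.Selector)
    if err != nil {
        return nil, err
    }
    // The lister filters the informer cache, so no extra apiserver round trip is needed.
    return c.machineLister.List(selector)
}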

Contributor Author

I am not against it. But do we really want to drop OwnerReferences? The biggest value they bring at the moment is automatic clean-up (GC) of dependent objects. In fact, that's why I haven't implemented a DeleteFunc - it was not clear what it would do.

This seems to be a good starting point, as we get clear and very simple lifecycle management. In addition, we mirror the default behaviour of ReplicaSet, which deletes dependent objects by default. I'm pretty sure that in the future we will add more strategies and policies around lifecycle management, including the ability to remove a MachineSet without removing its dependent objects so that they can be adopted by another MachineSet.
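For reference, the GC behaviour described here hinges on the OwnerReference being marked as the controller. A minimal sketch of how that could be set inside createMachine (metav1.NewControllerRef is a standard apimachinery helper; the exact wiring is an assumption, not this PR's code):

// Mark the MachineSet as the controlling owner so the garbage collector
// removes the Machine when the MachineSet is deleted.
machine.OwnerReferences = []metav1.OwnerReference{
    *metav1.NewControllerRef(machineSet, clusterv1alpha1.SchemeGroupVersion.WithKind("MachineSet")),
}

NewControllerRef sets Controller and BlockOwnerDeletion to true, which is what gives the default "delete dependents" behaviour mentioned above.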

Contributor

We are trying to have the same experience and flow as in Deployment -> ReplicaSet -> Pod. Only the Deployment sets OwnerReferences on created ReplicaSets; ReplicaSets don't set OwnerReferences for Pods.

Contributor Author

I have chatted with Martin and we went through the existing ReplicaSet code. It seems that ReplicaSet does set OwnerReferences on Pods.

More importantly, during that discussion a suggestion came up that perhaps we should make an exact copy of the existing code (ReplicaSet) instead of creating our own controller from scratch. So what do you think:

a) should we start with something small and easy and build on that? This essentially means that we could continue with this PR.

b) should we stop working on this PR and create a new one which is an exact copy of ReplicaSet with all its functionality?


I think we could go both ways but I would lean towards starting with something small and building up on that so we can incorporate features as we are sure they make sense in machine management.

There might be differences between replica set and machine set that we need to consider. For example:

  • In RS, there is garbage collection. What does garbage collection mean for machines? Are we going to have orphan machines that need to be collected? As I am not sure how that would work, I'd consider not integrating that yet.
  • There are properties that might be specific to pods, like burstReplicas, where we need to think about what they mean for machines. We probably want to control bursts too, but it might not be the same as bursting pods.
  • In addition, creating and deleting machines might be more expensive and/or have more consequences than pods. There are cases for on-premises clusters where adding a machine is a very, very long process (like filing a ticket for another team to provision a bare-metal machine, or configuring a VM on vSphere), which changes the characteristics of the problem.

In addition to that, I would personally prefer to add piece by piece and make sure each piece is well tested, so it gives us time - as a community - to start absorbing the code that we will own from this point on.

I hope that helps us move forward here, but also open to discuss further if others have any opinion about this.

@rsdcastro

/cc @justinsb
/cc @krousey

@@ -0,0 +1,28 @@
# MachineSet Controller
is an initial implementation of `MachineSet` controller. It watches for `MachineSet` resources


I'd remove "initial" as this will evolve (and then we don't need to update the README.md :-).

Contributor Author

okay.

providerConfig: >
{
"apiVersion": "gceproviderconfig/v1alpha1",
"kind": "GCEProviderConfig",


I like that you didn't add the controller to cluster-api-gcp (since we need to move the common code out of it), but shouldn't the YAML specific to GCP be in cluster-api-gcp/cloud/google and the equivalent ext-apiserver dir?

Do you plan to have this installed as part of the cluster provisioning as a next step?


}

// New returns an instance of the MachineSet controller
func New(machineClient machineclientset.Interface, machineSetsInformer cache.SharedIndexInformer, machineSetsLister machinelistersv1alpha1.MachineSetLister, machineLister machinelistersv1alpha1.MachineLister) *Controller {


Please consider adding some tests. @karan started adding unit tests here, which could be used as reference: #575

Contributor Author

Sure, I'll add a unit test.

Contributor Author

I've added some tests (check out TestMachineSetControllerSyncHanlder)

if err != nil {
return err
}
_, err = c.machineClient.ClusterV1alpha1().Machines().Create(machine)


Could this be done in parallel? If we don't want to do it now, perhaps a setting to consider (please track in an issue).

Contributor Author

Yes, we could calculate the number of desired replicas and create each one in a separate goroutine. As @mvladev has already alluded to, creating a machine is quite fast, so perhaps this is a concern for the future.
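If parallel creation is ever picked up, one possible shape is the sketch below (illustrative helper name, reusing the controller's existing client and createMachine; it also needs a sync import):

// createMachinesInParallel creates `count` machines concurrently and returns the
// first error so the work item gets requeued and retried.
func (c *Controller) createMachinesInParallel(machineSet *clusterv1alpha1.MachineSet, count int) error {
    var wg sync.WaitGroup
    errCh := make(chan error, count)
    for i := 0; i < count; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            machine, err := c.createMachine(machineSet)
            if err == nil {
                _, err = c.machineClient.ClusterV1alpha1().Machines().Create(machine)
            }
            if err != nil {
                errCh <- err
            }
        }()
    }
    wg.Wait()
    close(errCh)
    for err := range errCh {
        return err
    }
    return nil
}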


if desiredReplicas < currentMachineCount {
glog.V(2).Infof("deleting a machine ( spec.replicas(%d) < currentMachineCount(%d) )", desiredReplicas, currentMachineCount)
machineToDelete := machines[0].Name


machines[0]: do we have any stable order here, so that we always pick the oldest machine, or are there no guarantees?

Contributor Author

I don't think we have a stable order. I think it boils down to iterating over a map, which in Go is randomized.
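A deterministic order could be imposed before picking a victim, for example by creation timestamp, as in the sketch below (it needs the standard sort package; whether to prefer the newest or the oldest machine is exactly the policy question raised in the next thread):

// Sort newest first so the most recently created machine is deleted first.
sort.Slice(machines, func(i, j int) bool {
    return machines[i].CreationTimestamp.After(machines[j].CreationTimestamp.Time)
})
machineToDelete := machines[0].Name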


if desiredReplicas < currentMachineCount {
glog.V(2).Infof("deleting a machine ( spec.replicas(%d) < currentMachineCount(%d) )", desiredReplicas, currentMachineCount)
machineToDelete := machines[0].Name


We should support more policies on which machine to pick - no need to be done here, but please file an issue.

Contributor Author

good idea, I’ll add an issue.

// the name of the newly created resource is created as machineSet.Name + some random characters.
// in addition, the newly created resource is owned by the given machineSet (we set the OwnerReferences field)
func (c *Controller) createMachine(machineSet *clusterv1alpha1.MachineSet) (*clusterv1alpha1.Machine, error) {
machineName := machineSet.Name + "-" + string(uuid.NewUUID())[:6]


At some point we might want to be prepared for a name conflict here and handle it.

Contributor Author

POSTing a machine with the same name will yield an HTTP 409 error; hopefully, on the next run we will generate a different name. If my memory serves me well, we can generate 32 to the power of 6 different names, which gives us a decent number :)

Contributor

We could use the metadata.generateName field so the apiserver takes care of generating a name.


Good suggestion, @mrIncompetent. @p0lyn0mial do you think you can incorporate this suggestion?

Contributor Author

yeah, good idea.

}
_, err = c.machineClient.ClusterV1alpha1().Machines().Create(machine)
if err != nil {
return err


Is there any retryable error? Or - dumb question - will this reconcile loop be triggered again even if I don't update the machine set object?

Contributor Author

Yes. At the moment we don't distinguish between errors. An item will be re-added to the queue on an error (a call to AddRateLimited). In the future we can track how many times an item was requeued (NumRequeues) and act accordingly.
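The requeue-with-backoff handling described here is the standard workqueue pattern; a sketch with a retry cap could look like this (maxRetries and handleErr are illustrative names, and utilruntime refers to k8s.io/apimachinery/pkg/util/runtime):

const maxRetries = 5

// handleErr forgets the key on success, requeues it with backoff on failure,
// and gives up after maxRetries attempts.
func (c *Controller) handleErr(err error, key interface{}) {
    if err == nil {
        c.queue.Forget(key)
        return
    }
    if c.queue.NumRequeues(key) < maxRetries {
        c.queue.AddRateLimited(key)
        return
    }
    c.queue.Forget(key)
    utilruntime.HandleError(err)
}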

machineToDelete := machines[0].Name
err := c.machineClient.ClusterV1alpha1().Machines().Delete(machineToDelete, &metav1.DeleteOptions{})
if err != nil {
return err


Similar question as above. If it fails, will this reconcile be retriggered even without a change to the MachineSet object? The concern is that we might bail out after the first error and not fulfill the intent until the user updates the object again (but this could be a dumb question).

Contributor Author

Yes, the loop will be triggered as we requeue on an error.

@rsdcastro

This addresses issue #535.

@medinatiger
Contributor

We are moving from CRDs to API aggregation. Once #616 is merged, please refactor the PR into the new directory. The cluster-api/cluster-api-gcp directory will be deleted soon.

@rsdcastro

@p0lyn0mial Please address @medinatiger's concern (sorry about the trouble during the code migration). Once you address the remaining open comments, I will take another pass. Thanks!

controller.queue.AddRateLimited(key)
}
},
})
Contributor

What about DeleteFunc?

Contributor Author

Please see my comment about OwnerReferences.
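If a DeleteFunc is ever added, the usual pattern is to unwrap the tombstone the informer may deliver and enqueue the key, as in this sketch placed alongside the handlers quoted above (with OwnerReference-based GC the handler may legitimately stay empty):

DeleteFunc: func(obj interface{}) {
    // DeletionHandlingMetaNamespaceKeyFunc copes with cache.DeletedFinalStateUnknown tombstones.
    key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
    if err == nil {
        controller.queue.AddRateLimited(key)
    }
},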

func (c *Controller) syncReplicas(machineSet *v1alpha1.MachineSet, machines []*v1alpha1.Machine) error {
currentMachineCount := int32(len(machines))
desiredReplicas := int32(0)
if machineSet.Spec.Replicas != nil {
Contributor

The default value for this field should be handled by the API server. It should never be nil.

Contributor Author

The last time I checked, the API server didn't set the default value. I am not sure whether this is a bug or whether we should actually employ a mutating webhook for that. Note that we are using CRDs.

Contributor

We don't have any defaulters or validation yet. For CRDs this will be nil unless we add a mutating webhook.

Contributor

Once it's moved to the API aggregation world, we can have defaulting implemented easily.
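Once defaulting is available (mutating webhook or aggregated apiserver), it could be as small as the sketch below, hooked into the DefaultingFunction that appears later in this PR and mirroring ReplicaSet, which defaults replicas to 1:

// DefaultingFunction sets default MachineSet field values (sketch).
func (MachineSetSchemeFns) DefaultingFunction(o interface{}) {
    obj := o.(*MachineSet)
    if obj.Spec.Replicas == nil {
        one := int32(1)
        obj.Spec.Replicas = &one
    }
}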

if err != nil {
return err
}
if err = waitForMachineInCache(machine.Name, c.machineLister); err != nil {
Contributor

Why is this needed? You should only begin the reconcile loop when all your caches are populated.

if err != nil {
return err
}
if err = waitForMachineDeletedInCache(machineToDelete, c.machineLister); err != nil {
Contributor

Why is this needed? You should only begin the reconcile loop when all your caches are populated.


machineCacheCheckTimeout = 10 * time.Second
)

func waitForMachineInCache(name string, lister machinelistersv1alpha1.MachineLister) error {
Contributor

Why is this needed?

})
}

func waitForMachineDeletedInCache(name string, lister machinelistersv1alpha1.MachineLister) error {
Contributor

Why is this needed?

Contributor Author

you are right, this brings more confusion than value.



@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 28, 2018
@@ -5,6 +5,8 @@ metadata:
spec:
replicas: 3
template:
metadata:
generateName: my-first-machineset-
Contributor Author

Alternatively, we could take the name from metadata.name.

@medinatiger
Contributor

We have a MachineSet controller skeleton checked in; you can find it in ext-apiserve/pkg/controller/machineset.
Please rebase the current code into the new directory. Otherwise, we will have to do it manually again. We are shooting to deprecate the current cluster-api and cluster-api-gcp directories within a week. Adding a hold until the rebase is done.
/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 28, 2018
machineLister := machineInformerFactory.Cluster().V1alpha1().Machines().Lister()

go machineInformerFactory.Start(stopCh)
for _, syncsMap := range []map[reflect.Type]bool{machineInformerFactory.WaitForCacheSync(stopCh)} {
Contributor

why not:

if !cache.WaitForCacheSync(make(<-chan struct{}), machineSetsInformer.HasSynced, machineInformer.HasSynced) {
		// panic
}

replicas: 3
template:
metadata:
generateName: my-first-machineset-
Contributor

generateName needs to be removed. It should be set by the controller.

spec:
replicas: 3
template:
metadata:
Contributor

Add labels and a selector that matches them.



Kind: gv.WithKind("Machine").Kind,
APIVersion: gv.String(),
},
ObjectMeta: machineSet.Spec.Template.ObjectMeta,
Contributor

You need to set GenerateName on the ObjectMeta.

@p0lyn0mial p0lyn0mial force-pushed the initial_machineset_controller branch from f85785e to f06af75 Compare March 6, 2018 15:27
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Mar 6, 2018
@p0lyn0mial p0lyn0mial force-pushed the initial_machineset_controller branch from f06af75 to 4d8e94d Compare March 6, 2018 15:39
@p0lyn0mial
Contributor Author

p0lyn0mial commented Mar 6, 2018

@medinatiger code moved to a new location.
@rsdcastro @mvladev @mrIncompetent @medinatiger @maisem comments addressed PTAL

// TODO: the following test fails with:
// the namespace of the provided object does not match the namespace sent on the request
// uncomment when fixed
//crudAccessToMachineSetClient(t, cs)
Contributor Author

This is very odd, especially since the machine tests work.

Contributor

I think all the current machine/cluster resources assume the default namespace. Is this the case with MachineSet?

Contributor Author

I don't think that's the case; I haven't changed anything in that space. I have just added this piece of code that runs the existing test. I think there must be some mismatch between Machine and MachineSet regarding setting the default namespace.

@@ -130,16 +131,39 @@ type MachineSetStatus struct {

// Validate checks that an instance of MachineSet is well formed
func (MachineSetStrategy) Validate(ctx request.Context, obj runtime.Object) field.ErrorList {
Contributor Author

Usually this is placed close to a registry pkg. Since we are using a framework to create an API extension, I am not sure whether this is a good place to put validation code. Also, do I have to regenerate code?

Contributor

Unless you are changing the type definition, you don't need to regenerate code. Even if you do regenerate code, machineset_types.go won't be changed.

desiredReplicas := *machineSet.Spec.Replicas
diff := int(currentMachineCount - desiredReplicas)

if diff < 0 {
Contributor Author

At the moment resync is set to 10 minutes, so I decided to create/delete resources in a loop so that the results are visible more quickly.

Contributor Author

p0lyn0mial commented Mar 7, 2018

I think that in the future we could also add watch events on Machines - that would "wake up" the appropriate MachineSet.

^-------------- #643
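The "wake up" idea could be implemented with a Machine event handler that resolves the controlling owner and enqueues it, roughly like this sketch (metav1.GetControllerOf is a standard apimachinery helper; the key format must match whatever the MachineSet handlers already use):

machineInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    UpdateFunc: func(old, cur interface{}) {
        machine := cur.(*clusterv1alpha1.Machine)
        // Enqueue the owning MachineSet so syncReplicas runs without waiting for the resync period.
        if ref := metav1.GetControllerOf(machine); ref != nil && ref.Kind == "MachineSet" {
            controller.queue.AddRateLimited(machine.Namespace + "/" + ref.Name)
        }
    },
})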

errors = append(errors, field.Invalid(fldPath.Child("template","metadata","labels"), machineSet.Spec.Template.Labels, "`selector` does not match template `labels`"))
}
}

return errors
}

// DefaultingFunction sets default MachineSet field values
func (MachineSetSchemeFns) DefaultingFunction(o interface{}) {
Contributor Author

We could also add defaults for an empty selector, e.g. take the machine template's labels.
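A sketch of that selector defaulting, placed in the same DefaultingFunction and assuming Spec.Selector is a *metav1.LabelSelector as sketched earlier:

// Default an empty selector to the template's labels so the MachineSet
// selects exactly the machines it creates.
if obj.Spec.Selector == nil || (len(obj.Spec.Selector.MatchLabels) == 0 && len(obj.Spec.Selector.MatchExpressions) == 0) {
    obj.Spec.Selector = &metav1.LabelSelector{MatchLabels: obj.Spec.Template.Labels}
}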

@rsdcastro

lgtm

Thank you for rebasing it and for adding the tests as well. I copied @medinatiger to address some of the questions above.

@mvladev Can you take another pass? Once you LGTM it, I can merge it.

@medinatiger
Contributor

Thanks very much for rebasing the code!

@rsdcastro

Please provide any feedback or share any major objection to getting this merged by end of day.

@rsdcastro

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Mar 11, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: p0lyn0mial, rsdcastro

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 11, 2018
@rsdcastro

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 12, 2018
@k8s-ci-robot k8s-ci-robot merged commit 136cdac into kubernetes-retired:master Mar 12, 2018
k4leung4 pushed a commit to k4leung4/kube-deploy that referenced this pull request Apr 4, 2018
…hineset_controller

initial implementation of machineset controller