diff --git a/docs/book/src/developer/architecture/controllers/cluster.md b/docs/book/src/developer/architecture/controllers/cluster.md
index 9a36d1b22696..4b34a95c822c 100644
--- a/docs/book/src/developer/architecture/controllers/cluster.md
+++ b/docs/book/src/developer/architecture/controllers/cluster.md
@@ -22,7 +22,10 @@ provisions EC2 instances that will become a Kubernetes cluster through some boot
 The cluster controller will set an OwnerReference on the infrastructureCluster. This controller should normally take no action during reconciliation until it sees the OwnerReference.
 
-An infrastructureCluster controller is expected to eventually have its `spec.controlPlaneEndpoint` set by the user/controller.
+An infrastructureCluster controller is expected to either supply a controlPlaneEndpoint (via its own `spec.controlPlaneEndpoint` field),
+or rely on `spec.controlPlaneEndpoint` in its parent [Cluster](./cluster.md) object.
+
+If an endpoint is not provided, the implementer should exit reconciliation until it sees `cluster.spec.controlPlaneEndpoint` populated.
 
 The Cluster controller bubbles up `spec.controlPlaneEndpoint` and `status.ready` into `status.infrastructureReady` from the infrastructureCluster.
 
@@ -50,7 +53,7 @@ is a map, defined as `map[string]FailureDomainSpec`. A unique key must be used f
 - `controlPlane` (bool): indicates if failure domain is appropriate for running control plane instances.
 - `attributes` (`map[string]string`): arbitrary attributes for users to apply to a failure domain.
 
-Note: once any of `failureReason` or `failureMessage` surface on the cluster who is referencing the infrastructureCluster object,
+Note: once either `failureReason` or `failureMessage` surfaces on the cluster that is referencing the infrastructureCluster object,
 they cannot be restored anymore (it is considered a terminal error; the only way to recover is to delete and recreate the cluster).
Example:
diff --git a/docs/book/src/developer/architecture/controllers/control-plane.md b/docs/book/src/developer/architecture/controllers/control-plane.md
index 64084f33c5b8..54c73cfdc687 100644
--- a/docs/book/src/developer/architecture/controllers/control-plane.md
+++ b/docs/book/src/developer/architecture/controllers/control-plane.md
@@ -41,12 +41,14 @@ Kubernetes control plane consisting of the following services:
 The Cluster controller will set an OwnerReference on the Control Plane. The Control Plane controller should normally take no action during reconciliation until it sees the ownerReference.
 
-A Control Plane controller implementation should exit reconciliation until it sees `cluster.spec.controlPlaneEndpoint` populated.
+A Control Plane controller implementation must either supply a controlPlaneEndpoint (via its own `spec.controlPlaneEndpoint` field),
+or rely on `spec.controlPlaneEndpoint` in its parent [Cluster](./cluster.md) object.
 
-The Cluster controller bubbles up `status.ready` into `status.controlPlaneReady` and `status.initialized` into a `controlPlaneInitialized` condition from the Control Plane CR.
+If an endpoint is not provided, the implementer should exit reconciliation until it sees `cluster.spec.controlPlaneEndpoint` populated.
+
+A Control Plane controller can optionally provide its own `controlPlaneEndpoint`.
 
-The `ImplementationControlPlane` *must* rely on the existence of
-`status.controlplaneEndpoint` in its parent [Cluster](./cluster.md) object.
+The Cluster controller bubbles up `status.ready` into `status.controlPlaneReady` and `status.initialized` into a `controlPlaneInitialized` condition from the Control Plane CR.
 
 ### CRD contracts
 
@@ -110,6 +112,35 @@ documentation][scale].
   deletion. A duration of 0 will retry deletion indefinitely. It defaults to 10
   seconds on the Machine.
 
+#### Optional `spec` fields for implementations providing endpoints
+
+The `ImplementationControlPlane` object may provide a `spec.controlPlaneEndpoint` field to inform the Cluster
+controller where the endpoint is located.
+
+Implementers may use the `APIEndpoint` struct exposed by Cluster API types, or the following:
+
+<table>
+  <tr>
+    <th>Field</th>
+    <th>Type</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td><code>host</code></td>
+    <td>String</td>
+    <td>The hostname on which the API server is serving.</td>
+  </tr>
+  <tr>
+    <td><code>port</code></td>
+    <td>Integer</td>
+    <td>The port on which the API server is serving.</td>
+  </tr>
+</table>
+
 #### Required `status` fields
 
 The `ImplementationControlPlane` object **must** have a `status` object.
diff --git a/docs/proposals/20230407-flexible-managed-k8s-endpoints.md b/docs/proposals/20230407-flexible-managed-k8s-endpoints.md
index 3ff08ba066f8..4e8265b1871c 100644
--- a/docs/proposals/20230407-flexible-managed-k8s-endpoints.md
+++ b/docs/proposals/20230407-flexible-managed-k8s-endpoints.md
@@ -76,6 +76,7 @@ More specifically we would like to introduce first class support for two scenari
 - Permit omitting the `Cluster` entirely, thus making it simpler to use with Cluster API all the Managed Kubernetes implementations which do not require any additional Kubernetes Cluster Infrastructure (network settings, security groups, etc) on top of what is provided out of the box by the managed Kubernetes primitive offered by a Cloud provider.
 - Allow the `ControlPlane Provider` component to take ownership of the responsibility of creating the control plane endpoint, thus making it simpler to use with Cluster API all the Managed Kubernetes implementations which are taking care out of the box of this piece of Cluster Infrastructure.
+  - Note: In May 2024 [this pull request](https://github.com/kubernetes-sigs/cluster-api/pull/10667) added the ability for the control plane provider to provide the endpoint the same way the infrastructure cluster would.
 
 The above capabilities can be used alone or in combination depending on the requirements of a specific Managed Kubernetes or on the specific architecture/set of Cloud components being implemented.
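The contract described above boils down to a nested-field lookup: a provider object may expose `spec.controlPlaneEndpoint.{host,port}`, and a consumer waits when the field is absent. A minimal sketch of that lookup, using a plain map in place of an unstructured Kubernetes object (the helper name `endpointFromSpec` is hypothetical, not part of Cluster API):

```go
package main

import "fmt"

// endpointFromSpec looks up spec.controlPlaneEndpoint.{host,port} in a
// provider object decoded into a plain map, mirroring the nested-field
// shape the contract describes. It reports ok=false when the field is
// absent or malformed, in which case a controller would simply requeue
// and wait for the provider to populate it.
func endpointFromSpec(obj map[string]interface{}) (host string, port int64, ok bool) {
	spec, ok := obj["spec"].(map[string]interface{})
	if !ok {
		return "", 0, false
	}
	ep, ok := spec["controlPlaneEndpoint"].(map[string]interface{})
	if !ok {
		return "", 0, false
	}
	host, hostOK := ep["host"].(string)
	port, portOK := ep["port"].(int64)
	return host, port, hostOK && portOK
}

func main() {
	// Example provider object shaped like the contract table above.
	infraCluster := map[string]interface{}{
		"spec": map[string]interface{}{
			"controlPlaneEndpoint": map[string]interface{}{
				"host": "example.com",
				"port": int64(6443),
			},
		},
	}
	if host, port, ok := endpointFromSpec(infraCluster); ok {
		fmt.Printf("%s:%d\n", host, port)
	}
}
```

The real controller performs the same lookup via `util.UnstructuredUnmarshalField`, tolerating a not-found result; the sketch just makes the shape of the contract explicit.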
diff --git a/internal/controllers/cluster/cluster_controller_phases.go b/internal/controllers/cluster/cluster_controller_phases.go
index 547c1dcf188a..d5c6778cfc04 100644
--- a/internal/controllers/cluster/cluster_controller_phases.go
+++ b/internal/controllers/cluster/cluster_controller_phases.go
@@ -199,7 +199,7 @@ func (r *Reconciler) reconcileInfrastructure(ctx context.Context, cluster *clust
 	// Get and parse Spec.ControlPlaneEndpoint field from the infrastructure provider.
 	if !cluster.Spec.ControlPlaneEndpoint.IsValid() {
-		if err := util.UnstructuredUnmarshalField(infraConfig, &cluster.Spec.ControlPlaneEndpoint, "spec", "controlPlaneEndpoint"); err != nil {
+		if err := util.UnstructuredUnmarshalField(infraConfig, &cluster.Spec.ControlPlaneEndpoint, "spec", "controlPlaneEndpoint"); err != nil && err != util.ErrUnstructuredFieldNotFound {
 			return ctrl.Result{}, errors.Wrapf(err, "failed to retrieve Spec.ControlPlaneEndpoint from infrastructure provider for Cluster %q in namespace %q",
 				cluster.Name, cluster.Namespace)
 		}
@@ -218,6 +218,8 @@ func (r *Reconciler) reconcileInfrastructure(ctx context.Context, cluster *clust
 
 // reconcileControlPlane reconciles the Spec.ControlPlaneRef object on a Cluster.
 func (r *Reconciler) reconcileControlPlane(ctx context.Context, cluster *clusterv1.Cluster) (ctrl.Result, error) {
+	log := ctrl.LoggerFrom(ctx)
+
 	if cluster.Spec.ControlPlaneRef == nil {
 		return ctrl.Result{}, nil
 	}
@@ -274,6 +276,19 @@ func (r *Reconciler) reconcileControlPlane(ctx context.Context, cluster *cluster
 		}
 	}
 
+	if !ready {
+		log.V(3).Info("Control Plane provider is not ready yet")
+		return ctrl.Result{}, nil
+	}
+
+	// Get and parse Spec.ControlPlaneEndpoint field from the control plane provider.
+	if !cluster.Spec.ControlPlaneEndpoint.IsValid() {
+		if err := util.UnstructuredUnmarshalField(controlPlaneConfig, &cluster.Spec.ControlPlaneEndpoint, "spec", "controlPlaneEndpoint"); err != nil && err != util.ErrUnstructuredFieldNotFound {
+			return ctrl.Result{}, errors.Wrapf(err, "failed to retrieve Spec.ControlPlaneEndpoint from control plane provider for Cluster %q in namespace %q",
+				cluster.Name, cluster.Namespace)
+		}
+	}
+
 	return ctrl.Result{}, nil
 }
diff --git a/internal/controllers/cluster/cluster_controller_phases_test.go b/internal/controllers/cluster/cluster_controller_phases_test.go
index 0df695613ecc..0bb2510f7755 100644
--- a/internal/controllers/cluster/cluster_controller_phases_test.go
+++ b/internal/controllers/cluster/cluster_controller_phases_test.go
@@ -32,6 +32,7 @@ import (
 	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
 	capierrors "sigs.k8s.io/cluster-api/errors"
 	"sigs.k8s.io/cluster-api/internal/test/builder"
+	"sigs.k8s.io/cluster-api/util/conditions"
 )
 
 func TestClusterReconcilePhases(t *testing.T) {
@@ -56,6 +57,22 @@ func TestClusterReconcilePhases(t *testing.T) {
 			},
 		},
 	}
+	clusterNoEndpoint := &clusterv1.Cluster{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-cluster",
+			Namespace: "test-namespace",
+		},
+		Status: clusterv1.ClusterStatus{
+			InfrastructureReady: true,
+		},
+		Spec: clusterv1.ClusterSpec{
+			InfrastructureRef: &corev1.ObjectReference{
+				APIVersion: "infrastructure.cluster.x-k8s.io/v1beta1",
+				Kind:       "GenericInfrastructureMachine",
+				Name:       "test",
+			},
+		},
+	}
 
 	tests := []struct {
 		name         string
 		cluster      *clusterv1.Cluster
 		infraRef     map[string]interface{}
 		expectErr    bool
 		expectResult ctrl.Result
+		check        func(g *GomegaWithT, in *clusterv1.Cluster)
 	}{
 		{
 			name:      "returns no error if infrastructure ref is nil",
 			cluster:   &clusterv1.Cluster{ObjectMeta: metav1.ObjectMeta{Name: "test-cluster", Namespace: "test-namespace"}},
 			expectErr: false,
 		},
@@ -104,7 +122,7 @@ func TestClusterReconcilePhases(t *testing.T) {
 			expectErr: false,
 		},
 		{
-			name:    "returns error if infrastructure has the paused annotation",
+			name:    "returns no error if infrastructure has the paused annotation",
 			cluster: cluster,
 			infraRef: map[string]interface{}{
 				"kind":       "GenericInfrastructureMachine",
 				"apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
 				"metadata": map[string]interface{}{
 					"name":      "test",
 					"namespace": "test-namespace",
 					"annotations": map[string]interface{}{
 						"cluster.x-k8s.io/paused": "true",
 					},
 				},
 			},
 			expectErr: false,
 		},
@@ -119,6 +137,50 @@ func TestClusterReconcilePhases(t *testing.T) {
 			},
 			expectErr: false,
 		},
+		{
+			name:    "returns no error if the control plane endpoint is not yet set",
+			cluster: clusterNoEndpoint,
+			infraRef: map[string]interface{}{
+				"kind":       "GenericInfrastructureMachine",
+				"apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
+				"metadata": map[string]interface{}{
+					"name":              "test",
+					"namespace":         "test-namespace",
+					"deletionTimestamp": "sometime",
+				},
+				"status": map[string]interface{}{
+					"ready": true,
+				},
+			},
+			expectErr: false,
+		},
+		{
+			name:    "should propagate the control plane endpoint once set",
+			cluster: clusterNoEndpoint,
+			infraRef: map[string]interface{}{
+				"kind":       "GenericInfrastructureMachine",
+				"apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
+				"metadata": map[string]interface{}{
+					"name":              "test",
+					"namespace":         "test-namespace",
+					"deletionTimestamp": "sometime",
+				},
+				"spec": map[string]interface{}{
+					"controlPlaneEndpoint": map[string]interface{}{
+						"host": "example.com",
+						"port": int64(6443),
+					},
+				},
+				"status": map[string]interface{}{
+					"ready": true,
+				},
+			},
+			expectErr: false,
+			check: func(g *GomegaWithT, in *clusterv1.Cluster) {
+				g.Expect(in.Spec.ControlPlaneEndpoint.Host).To(Equal("example.com"))
+				g.Expect(in.Spec.ControlPlaneEndpoint.Port).To(BeEquivalentTo(6443))
+			},
+		},
 	}
 
 	for _, tt := range tests {
@@ -148,6 +210,201 @@ func TestClusterReconcilePhases(t *testing.T) {
 			} else {
 				g.Expect(err).ToNot(HaveOccurred())
 			}
+
+			if tt.check != nil {
+				tt.check(g, tt.cluster)
+			}
 		})
 	}
 })
+
+	t.Run("reconcile control plane ref", func(t *testing.T) {
+		cluster := &clusterv1.Cluster{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-cluster",
+				Namespace: "test-namespace",
+			},
+			Status: clusterv1.ClusterStatus{
+				InfrastructureReady: true,
+			},
+			Spec: clusterv1.ClusterSpec{
+				ControlPlaneEndpoint: clusterv1.APIEndpoint{
+					Host: "1.2.3.4",
+					Port: 8443,
+				},
+				ControlPlaneRef: &corev1.ObjectReference{
+					APIVersion: "controlplane.cluster.x-k8s.io/v1beta1",
+					Kind:       "GenericControlPlane",
+					Name:       "test",
+				},
+			},
+		}
+		clusterNoEndpoint := &clusterv1.Cluster{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-cluster",
+				Namespace: "test-namespace",
+			},
+			Status: clusterv1.ClusterStatus{
+				InfrastructureReady: true,
+			},
+			Spec: clusterv1.ClusterSpec{
+				ControlPlaneRef: &corev1.ObjectReference{
+					APIVersion: "controlplane.cluster.x-k8s.io/v1beta1",
+					Kind:       "GenericControlPlane",
+					Name:       "test",
+				},
+			},
+		}
+
+		tests := []struct {
+			name         string
+			cluster      *clusterv1.Cluster
+			cpRef        map[string]interface{}
+			expectErr    bool
+			expectResult ctrl.Result
+			check        func(g *GomegaWithT, in *clusterv1.Cluster)
+		}{
+			{
+				name:      "returns no error if control plane ref is nil",
+				cluster:   &clusterv1.Cluster{ObjectMeta: metav1.ObjectMeta{Name: "test-cluster", Namespace: "test-namespace"}},
+				expectErr: false,
+			},
+			{
+				name:         "requeues if unable to reconcile control plane ref",
+				cluster:      cluster,
+				expectErr:    false,
+				expectResult: ctrl.Result{RequeueAfter: 30 * time.Second},
+			},
+			{
+				name:    "returns no error if control plane ref is marked for deletion",
+				cluster: cluster,
+				cpRef: map[string]interface{}{
+					"kind":       "GenericControlPlane",
+					"apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
+					"metadata": map[string]interface{}{
+						"name":              "test",
+						"namespace":         "test-namespace",
+						"deletionTimestamp": "sometime",
+					},
+				},
+				expectErr: false,
+			},
+			{
+				name:    "returns no error if control plane has the paused annotation",
+				cluster: cluster,
+				cpRef: map[string]interface{}{
+					"kind":       "GenericControlPlane",
+					"apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
+					"metadata": map[string]interface{}{
+						"name":      "test",
+						"namespace": "test-namespace",
+						"annotations": map[string]interface{}{
+							"cluster.x-k8s.io/paused": "true",
+						},
+					},
+				},
+				expectErr: false,
+			},
+			{
+				name:    "returns no error if the control plane endpoint is not yet set",
+				cluster: clusterNoEndpoint,
+				cpRef: map[string]interface{}{
+					"kind":       "GenericControlPlane",
+					"apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
+					"metadata": map[string]interface{}{
+						"name":              "test",
+						"namespace":         "test-namespace",
+						"deletionTimestamp": "sometime",
+					},
+					"status": map[string]interface{}{
+						"ready": true,
+					},
+				},
+				expectErr: false,
+			},
+			{
+				name:    "should propagate the control plane endpoint if set",
+				cluster: clusterNoEndpoint,
+				cpRef: map[string]interface{}{
+					"kind":       "GenericControlPlane",
+					"apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
+					"metadata": map[string]interface{}{
+						"name":              "test",
+						"namespace":         "test-namespace",
+						"deletionTimestamp": "sometime",
+					},
+					"spec": map[string]interface{}{
+						"controlPlaneEndpoint": map[string]interface{}{
+							"host": "example.com",
+							"port": int64(6443),
+						},
+					},
+					"status": map[string]interface{}{
+						"ready": true,
+					},
+				},
+				expectErr: false,
+				check: func(g *GomegaWithT, in *clusterv1.Cluster) {
+					g.Expect(in.Spec.ControlPlaneEndpoint.Host).To(Equal("example.com"))
+					g.Expect(in.Spec.ControlPlaneEndpoint.Port).To(BeEquivalentTo(6443))
+				},
+			},
+			{
+				name:    "should propagate the initialized and ready conditions",
+				cluster: clusterNoEndpoint,
+				cpRef: map[string]interface{}{
+					"kind":       "GenericControlPlane",
+					"apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
+					"metadata": map[string]interface{}{
+						"name":              "test",
+						"namespace":         "test-namespace",
+						"deletionTimestamp": "sometime",
+					},
+					"spec": map[string]interface{}{},
+					"status": map[string]interface{}{
+						"ready":       true,
+						"initialized": true,
+					},
+				},
+				expectErr: false,
+				check: func(g *GomegaWithT, in *clusterv1.Cluster) {
+					g.Expect(conditions.IsTrue(in, clusterv1.ControlPlaneReadyCondition)).To(BeTrue())
+					g.Expect(conditions.IsTrue(in, clusterv1.ControlPlaneInitializedCondition)).To(BeTrue())
+				},
+			},
+		}
+
+		for _, tt := range tests {
+			t.Run(tt.name, func(t *testing.T) {
+				g := NewWithT(t)
+
+				var c client.Client
+				if tt.cpRef != nil {
+					cpConfig := &unstructured.Unstructured{Object: tt.cpRef}
+					c = fake.NewClientBuilder().
+						WithObjects(builder.GenericControlPlaneCRD.DeepCopy(), tt.cluster, cpConfig).
+						Build()
+				} else {
+					c = fake.NewClientBuilder().
+						WithObjects(builder.GenericControlPlaneCRD.DeepCopy(), tt.cluster).
+						Build()
+				}
+				r := &Reconciler{
+					Client:   c,
+					recorder: record.NewFakeRecorder(32),
+				}
+
+				res, err := r.reconcileControlPlane(ctx, tt.cluster)
+				g.Expect(res).To(BeComparableTo(tt.expectResult))
+				if tt.expectErr {
+					g.Expect(err).To(HaveOccurred())
+				} else {
+					g.Expect(err).ToNot(HaveOccurred())
+				}
+
+				if tt.check != nil {
+					tt.check(g, tt.cluster)
+				}
+			})
+		}
+	})
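
The ordering introduced in `reconcileControlPlane` above is: exit early while the provider is not ready, then adopt the provider's endpoint only if the Cluster does not already hold a valid one. A standalone sketch of that decision flow, with simplified stand-in types rather than the actual Cluster API/controller-runtime types (`APIEndpoint` here only mirrors the real struct's shape):

```go
package main

import "fmt"

// APIEndpoint mirrors the shape of Cluster API's endpoint type for this sketch.
type APIEndpoint struct {
	Host string
	Port int32
}

// IsValid reports whether both fields are set, matching the semantics the
// controller relies on before overwriting the Cluster's endpoint.
func (e APIEndpoint) IsValid() bool { return e.Host != "" && e.Port != 0 }

// reconcileEndpoint applies the ordering from the controller change:
// wait while the provider is not ready, then copy the provider's endpoint
// into the Cluster only when the Cluster has no valid endpoint of its own.
func reconcileEndpoint(ready bool, cluster *APIEndpoint, provider APIEndpoint) string {
	if !ready {
		return "requeue: control plane provider is not ready yet"
	}
	if !cluster.IsValid() {
		*cluster = provider // adopt the provider-supplied endpoint
	}
	return fmt.Sprintf("endpoint: %s:%d", cluster.Host, cluster.Port)
}

func main() {
	var ep APIEndpoint
	// Not ready: nothing is copied, the controller would requeue.
	fmt.Println(reconcileEndpoint(false, &ep, APIEndpoint{Host: "example.com", Port: 6443}))
	// Ready and no endpoint yet: the provider's endpoint is propagated.
	fmt.Println(reconcileEndpoint(true, &ep, APIEndpoint{Host: "example.com", Port: 6443}))
}
```

Note the guard on `cluster.IsValid()`: an endpoint already set on the Cluster (for example, by the user or the infrastructure provider) is never overwritten, which is the same invariant the tests above assert.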