reworked bootstrap controllers #212

Merged: 5 commits, May 23, 2023

Changes from 2 commits
15 changes: 15 additions & 0 deletions controllers/pkg/reconcilers/bootstrap-packages/README.md
@@ -0,0 +1,15 @@
# bootstrap package controller

The bootstrap package controller is a k8s controller whose goal is to bootstrap packages on a newly installed cluster. This ensures, for example, that a gitops tool like `config-sync` is installed on a cluster and that subsequent configuration tasks are handled through the gitops toolchain.

## implementation

The controller acts on package revision resources. It first determines whether the resources of a package revision are to be installed on a remote cluster by checking if:
- the repository has the `nephio.org/staging` key set

If the package is to be installed on a remote cluster, the controller finds the cluster name by checking the `nephio.org/cluster-name` annotation of the first resource in the package (we assume this annotation is set on all resources of the package). Once the controller knows the cluster name, it finds the credentials of the remote cluster and the type of cluster based on the signature of the secret (right now only Cluster API is implemented, but the code is able to handle other implementations).
Once the remote credentials are found and the cluster is deemed ready, the package gets installed on the remote cluster.
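
A minimal sketch of this selection logic (helper names are illustrative, not the controller's actual functions, and the staging key is assumed here to be a plain annotation):

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// isStagingRepository reports whether a repository object carries the
// nephio.org/staging key (assumed here to be an annotation).
func isStagingRepository(repo metav1.Object) bool {
	_, ok := repo.GetAnnotations()["nephio.org/staging"]
	return ok
}

// clusterNameFromResources returns the nephio.org/cluster-name annotation
// of the first resource in the package; the annotation is assumed to be
// identical on every resource.
func clusterNameFromResources(resources []unstructured.Unstructured) (string, bool) {
	if len(resources) == 0 {
		return "", false
	}
	name, ok := resources[0].GetAnnotations()["nephio.org/cluster-name"]
	return name, ok
}
```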

If any of these validations fail, the controller retries installing the package. Right now the watch on package revisions is a time-based loop.

Multiple packages can be installed by the bootstrap package controller, as long as they are made available in a repository with the annotation key `nephio.org/staging` and the corresponding `nephio.org/cluster-name` annotation is set on the resources of the package.
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

package bootstrap
package bootstrappackages

import (
"context"
@@ -36,46 +36,53 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"

"k8s.io/apimachinery/pkg/util/yaml"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/kustomize/kyaml/kio"
"sigs.k8s.io/kustomize/kyaml/kio/filters"
"sigs.k8s.io/kustomize/kyaml/kio/kioutil"
)

func init() {
reconcilerinterface.Register("bootstrap", &reconciler{})
reconcilerinterface.Register("bootstrappackages", &reconciler{})
}

const (
clusterNameKey = "nephio.org/cluster-name"
stagingNameKey = "nephio.org/staging"
stagingNameKey = "nephio.org/staging"
clusterNameKey = "nephio.org/cluster-name"
configsyncNamespace = "config-management-system"
)

//+kubebuilder:rbac:groups="*",resources=secrets,verbs=get;list;watch
//+kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters,verbs=get;list;watch
//+kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters/status,verbs=get
//+kubebuilder:rbac:groups=porch.kpt.dev,resources=packagerevisions,verbs=get;list;watch
//+kubebuilder:rbac:groups=porch.kpt.dev,resources=packagerevisions/status,verbs=get
//+kubebuilder:rbac:groups=config.porch.kpt.dev,resources=repositories,verbs=get;list;watch

// SetupWithManager sets up the controller with the Manager.
func (r *reconciler) SetupWithManager(mgr ctrl.Manager, c interface{}) (map[schema.GroupVersionKind]chan event.GenericEvent, error) {
func (r *reconciler) SetupWithManager(mgr ctrl.Manager, c any) (map[schema.GroupVersionKind]chan event.GenericEvent, error) {
cfg, ok := c.(*ctrlconfig.ControllerConfig)
if !ok {
return nil, fmt.Errorf("cannot initialize, expecting controllerConfig, got: %s", reflect.TypeOf(c).Name())
}

if err := porchv1alpha1.AddToScheme(mgr.GetScheme()); err != nil {
return nil, err
}
if err := porchconfigv1alpha1.AddToScheme(mgr.GetScheme()); err != nil {
return nil, err
}

r.Client = mgr.GetClient()
r.porchClient = cfg.PorchClient

return nil, ctrl.NewControllerManagedBy(mgr).
Named("BootstrapController").
For(&corev1.Secret{}).
Named("BootstrapPackageController").
For(&porchv1alpha1.PackageRevision{}).
Complete(r)
}

@@ -88,36 +95,51 @@ type reconciler struct {

func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
r.l = log.FromContext(ctx)

cr := &corev1.Secret{}
cr := &porchv1alpha1.PackageRevision{}
if err := r.Get(ctx, req.NamespacedName, cr); err != nil {
// if the resource no longer exists the reconcile loop is done
// There's no need to requeue if we no longer exist. Otherwise we'll be
// requeued implicitly because we return an error.
if resource.IgnoreNotFound(err) != nil {
msg := "cannot get resource"
r.l.Error(err, msg)
return ctrl.Result{}, errors.Wrap(resource.IgnoreNotFound(err), msg)
}
return reconcile.Result{}, nil
return ctrl.Result{}, nil
}

// if the secret is being deleted dont do anything for now
if cr.DeletionTimestamp != nil {
return reconcile.Result{}, nil
// check if the packagerevision is part of a staging repository
// if not we can ignore this package revision
stagingPR, err := r.IsStagingPackageRevision(ctx, cr)
if err != nil {
msg := "cannot list repositories"
r.l.Error(err, msg)
return ctrl.Result{}, errors.Wrap(err, msg)
}

// this branch handles installing the secrets to the remote cluster
if cr.GetNamespace() == "config-management-system" {
r.l.Info("reconcile")
clusterName, ok := cr.GetAnnotations()[clusterNameKey]
if !ok {
return reconcile.Result{}, nil
if stagingPR {
r.l.Info("reconcile package revision")
resources, namespacePresent, err := r.getResources(ctx, req)
if err != nil {
msg := "cannot get resources"
r.l.Error(err, msg)
return ctrl.Result{}, errors.Wrap(err, msg)
}
if clusterName != "mgmt" {
// we expect the clusterName to be applied to all resources in the
// package revision resources, so we find the clustername by looking at the
// first resource in the resource list
if len(resources) > 0 {
clusterName, ok := resources[0].GetAnnotations()[clusterNameKey]
if !ok {
r.l.Info("clusterName not found",
"resource", fmt.Sprintf("%s.%s.%s", resources[0].GetAPIVersion(), resources[0].GetKind(), resources[0].GetName()),
"annotations", resources[0].GetAnnotations())
return ctrl.Result{}, nil
}
// we need to find the cluster client
secrets := &corev1.SecretList{}
if err := r.List(ctx, secrets); err != nil {
msg := "cannot list secrets"
r.l.Error(err, msg)
return ctrl.Result{RequeueAfter: 5 * time.Second}, errors.Wrap(err, msg)
return ctrl.Result{}, errors.Wrap(err, msg)
}
found := false
for _, secret := range secrets.Items {
@@ -136,71 +158,48 @@ func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Resu
r.l.Info("cluster not ready")
return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}
ns := &corev1.Namespace{}
if err = clusterClient.Get(ctx, types.NamespacedName{Name: cr.GetNamespace()}, ns); err != nil {
if resource.IgnoreNotFound(err) != nil {
msg := fmt.Sprintf("cannot get namespace: %s", secret.GetNamespace())
r.l.Error(err, msg)
return ctrl.Result{RequeueAfter: 30 * time.Second}, errors.Wrap(err, msg)
if !namespacePresent {
ns := &corev1.Namespace{}
if err = clusterClient.Get(ctx, types.NamespacedName{Name: configsyncNamespace}, ns); err != nil {
if resource.IgnoreNotFound(err) != nil {
msg := fmt.Sprintf("cannot get namespace: %s", configsyncNamespace)
r.l.Error(err, msg)
return ctrl.Result{RequeueAfter: 30 * time.Second}, errors.Wrap(err, msg)
}
msg := fmt.Sprintf("namespace: %s, does not exist, retry...", configsyncNamespace)
r.l.Info(msg)
return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}
msg := fmt.Sprintf("namespace: %s, does not exist, retry...", cr.GetNamespace())
r.l.Info(msg)
return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}

if err := clusterClient.Apply(ctx, cr); err != nil {
msg := fmt.Sprintf("cannot apply secret to cluster %s", clusterName)
r.l.Error(err, msg)
return ctrl.Result{RequeueAfter: 10 * time.Second}, errors.Wrap(err, msg)
// install resources
for _, resource := range resources {
resource := resource // required to prevent gosec warning: G601 (CWE-118): Implicit memory aliasing in for loop
r.l.Info("install manifest", "resource",
fmt.Sprintf("%s.%s.%s", resource.GetAPIVersion(), resource.GetKind(), resource.GetName()))
if err := clusterClient.Apply(ctx, &resource); err != nil {
msg := fmt.Sprintf("cannot apply resource to cluster: resourceName: %s", resource.GetName())
r.l.Error(err, msg)
return ctrl.Result{}, errors.Wrap(err, msg)
}
}
}
}
}
if !found {
// the clusterclient was not found, we retry
r.l.Info("cluster client not found, retry...")
return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}
}
return ctrl.Result{}, nil
}

// this branch handles manifest installation
cl, ok := cluster.Cluster{Client: r.Client}.GetClusterClient(cr)
if ok {
r.l.Info("reconcile")
clusterClient, ready, err := cl.GetClusterClient(ctx)
if err != nil {
msg := "cannot get clusterClient"
r.l.Error(err, msg)
return ctrl.Result{RequeueAfter: 30 * time.Second}, errors.Wrap(err, msg)
}
if !ready {
r.l.Info("cluster not ready")
return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}

// install resources
resources, err := r.getResources(ctx, cl.GetClusterName())
if err != nil {
msg := "cannot get resources"
r.l.Error(err, msg)
return ctrl.Result{RequeueAfter: 10 * time.Second}, errors.Wrap(err, msg)
}
for _, resource := range resources {
resource := resource // required to prevent gosec warning: G601 (CWE-118): Implicit memory aliasing in for loop
r.l.Info("install manifest", "resources",
fmt.Sprintf("%s.%s.%s", resource.GetAPIVersion(), resource.GetKind(), resource.GetName()))
if err := clusterClient.Apply(ctx, &resource); err != nil {
r.l.Error(err, "cannot apply resource to cluster", "name", resource.GetName())
}
}
}
return ctrl.Result{}, nil
}

func (r *reconciler) getResources(ctx context.Context, clusterName string) ([]unstructured.Unstructured, error) {
func (r *reconciler) IsStagingPackageRevision(ctx context.Context, cr *porchv1alpha1.PackageRevision) (bool, error) {
repos := &porchconfigv1alpha1.RepositoryList{}
if err := r.porchClient.List(ctx, repos); err != nil {
return nil, err

return false, err
}

stagingRepoNames := []string{}
@@ -209,45 +208,22 @@ func (r *reconciler) getResources(ctx context.Context, clusterName string) ([]un
stagingRepoNames = append(stagingRepoNames, repo.GetName())
}
}

prList := &porchv1alpha1.PackageRevisionList{}
if err := r.porchClient.List(ctx, prList); err != nil {
return nil, err
}

prKeys := []types.NamespacedName{}
for _, pr := range prList.Items {
if MatchesStagingRepo(pr.Spec.RepositoryName, stagingRepoNames) && pr.Annotations[clusterNameKey] == clusterName {
prKeys = append(prKeys, types.NamespacedName{Name: pr.GetName(), Namespace: pr.GetNamespace()})
}

}
resources := []unstructured.Unstructured{}
for _, prKey := range prKeys {
prr := &porchv1alpha1.PackageRevisionResources{}
if err := r.porchClient.Get(ctx, prKey, prr); err != nil {
r.l.Error(err, "cannot get package resvision resourcelist", "key", prKey)
return nil, err
}

res, err := r.getResourcesPRR(prr.Spec.Resources)
if err != nil {
r.l.Error(err, "cannot get resources", "key", prKey)
return nil, err
for _, stagingRepoName := range stagingRepoNames {
if cr.Spec.RepositoryName == stagingRepoName {
return true, nil
}
resources = append(resources, res...)
}

return resources, nil
return false, nil
}

func MatchesStagingRepo(repoName string, stagingRepoNames []string) bool {
for _, stagingRepoName := range stagingRepoNames {
if repoName == stagingRepoName {
return true
}
func (r *reconciler) getResources(ctx context.Context, req ctrl.Request) ([]unstructured.Unstructured, bool, error) {
prr := &porchv1alpha1.PackageRevisionResources{}
if err := r.porchClient.Get(ctx, req.NamespacedName, prr); err != nil {
r.l.Error(err, "cannot get package resvision resourcelist", "key", req.NamespacedName)
return nil, false, err
}
return false

return r.getResourcesPRR(prr.Spec.Resources)
}

func includeFile(path string, match []string) bool {
@@ -260,7 +236,7 @@ func includeFile(path string, match []string) bool {
return false
}

func (r *reconciler) getResourcesPRR(resources map[string]string) ([]unstructured.Unstructured, error) {
func (r *reconciler) getResourcesPRR(resources map[string]string) ([]unstructured.Unstructured, bool, error) {
inputs := []kio.Reader{}
for path, data := range resources {
if includeFile(path, []string{"*.yaml", "*.yml", "Kptfile"}) {
@@ -280,9 +256,10 @@ func (r *reconciler) getResourcesPRR(resources map[string]string) ([]unstructure
Outputs: []kio.Writer{&pb},
}.Execute()
if err != nil {
return nil, err
return nil, false, err
}

namespacepresent := false
ul := []unstructured.Unstructured{}
for _, n := range pb.Nodes {
if v, ok := n.GetAnnotations()[filters.LocalConfigAnnotation]; ok && v == "true" {
Expand All @@ -294,7 +271,10 @@ func (r *reconciler) getResourcesPRR(resources map[string]string) ([]unstructure
// we dont fail
continue
}
if u.GetKind() == reflect.TypeOf(corev1.Namespace{}).Name() {
namespacepresent = true
}
ul = append(ul, u)
}
return ul, nil
return ul, namespacepresent, nil
}
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

package bootstrap
package bootstrappackages

import (
"fmt"
@@ -75,7 +75,7 @@ spec:
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
r := reconciler{}
us, err := r.getResourcesPRR(tc.resources)
us, _, err := r.getResourcesPRR(tc.resources)

if tc.expectedErr {
assert.Error(t, err)
16 changes: 16 additions & 0 deletions controllers/pkg/reconcilers/bootstrap-secret/README.md
@@ -0,0 +1,16 @@
# bootstrap secret controller

The bootstrap secret controller is a k8s controller whose goal is to bootstrap secrets on a newly installed cluster. This ensures, for example, that a gitops tool like `config-sync` is installed on a cluster and that subsequent configuration tasks are handled through the gitops toolchain.

## implementation

The controller acts on secrets. It first determines whether a secret is to be installed on the remote cluster by checking if (see the sketch after this list):
- the annotation key `nephio.org/app` equals `configsync`
- the annotation key `nephio.org/cluster-name` is neither an empty string nor `mgmt`
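
A minimal sketch of this check (the helper name is illustrative, not the controller's actual function):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// isBootstrapSecret mirrors the two annotation checks listed above:
// the secret must belong to configsync and target a non-empty,
// non-management cluster.
func isBootstrapSecret(secret *corev1.Secret) bool {
	a := secret.GetAnnotations()
	if a["nephio.org/app"] != "configsync" {
		return false
	}
	clusterName := a["nephio.org/cluster-name"]
	return clusterName != "" && clusterName != "mgmt"
}
```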

If the secret is to be installed on the remote cluster, the controller finds the credentials of the remote cluster and the type of cluster based on the signature of the secret (right now only Cluster API is implemented, but the code is able to handle other implementations).
Once the remote credentials are found and the cluster is deemed ready, the secret gets installed on the remote cluster after validating that the corresponding namespace exists.
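
A rough sketch of that install step, assuming a remote cluster client that exposes `Get`/`Apply` methods comparable to the ones used by the package controller (the interface and function names here are illustrative):

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// remoteClient captures just the two calls the sketch needs on the remote
// cluster; the real cluster client exposes comparable Get/Apply methods.
type remoteClient interface {
	Get(ctx context.Context, key types.NamespacedName, obj client.Object) error
	Apply(ctx context.Context, obj client.Object) error
}

// installSecret applies the secret only once the config-management-system
// namespace exists on the remote cluster; otherwise it asks to be requeued
// so the check is retried later.
func installSecret(ctx context.Context, c remoteClient, secret *corev1.Secret) (ctrl.Result, error) {
	ns := &corev1.Namespace{}
	if err := c.Get(ctx, types.NamespacedName{Name: "config-management-system"}, ns); err != nil {
		return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
	}
	if err := c.Apply(ctx, secret); err != nil {
		return ctrl.Result{RequeueAfter: 10 * time.Second}, err
	}
	return ctrl.Result{}, nil
}
```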

If any of these validations fail, the controller retries installing the secret.

At this stage the implementation is specific to `config-sync`, but we aim to support other gitops toolchains like `argo` and `flux`.