Deploy velero into user clusters. #12827
Conversation
@@ -0,0 +1,122 @@
apiVersion: apiextensions.k8s.io/v1
Where do these CRDs come from? How are they kept up-to-date? For cert-manager and the Velero chart, we have scripts in hack/ to fetch them. These, however?
A script could also add a comment to these files, like https://github.com/kubermatic/kubermatic/blob/main/charts/backup/velero/crd/backuprepositories.yaml#L1, then you don't need an extra exclusion rule on the boilerplate checker anymore and the next guy after you will know how to update the CRDs.
pkg/resources/resources.go
Outdated
ClusterbackupKubeconfigSecretName = "velero-kubeconfig"
ClusterbackupUsername = "velero"
ClusterBackupServiceAccountName = "velero"
ClusterBackupNamespaceName = "velero"
)
Some are Clusterbackup and some are ClusterBackup? Why?
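For illustration, the same constant block with the prefix spelled consistently — a sketch only, the values are unchanged from the diff above:

```go
package main

import "fmt"

// The constants from pkg/resources/resources.go with a uniform
// "ClusterBackup" prefix instead of the mixed Clusterbackup/ClusterBackup.
const (
	ClusterBackupKubeconfigSecretName = "velero-kubeconfig"
	ClusterBackupUsername             = "velero"
	ClusterBackupServiceAccountName   = "velero"
	ClusterBackupNamespaceName        = "velero"
)

func main() {
	fmt.Println(ClusterBackupKubeconfigSecretName) // velero-kubeconfig
}
```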
pkg/util/cluster/backup-config.go
Outdated
@@ -0,0 +1,91 @@
/*
Copyright 2022 The Kubermatic Kubernetes Platform contributors.
Please update the year to 2023.
pkg/util/cluster/backup-config.go
Outdated
destinations := seed.Spec.EtcdBackupRestore.Destinations
defaultDestination := seed.Spec.EtcdBackupRestore.DefaultDestination
if len(destinations) == 0 || defaultDestination == "" {
log.Infof("seed [%s] has no backup destinations or no default backup destinations defined. Skipping cluster backup config for cluster [%s]", seed.Name, cluster.Name)
Not sure if we really want to log here. This function might be called many times during reconciliation, and this could spam the controller's log. Unconditional logs, especially anything above debug level, can quickly become a pain in the rear.
Also, if we're logging, we're doing structured logging (Infow), not printf-style logging (Infof).
pkg/util/cluster/backup-config.go
Outdated
if dest.BucketName == "" || dest.Endpoint == "" || dest.Credentials == nil {
return nil, fmt.Errorf("failed to validate backup destination configuration: bucketName, endpoint or credentials are not valid")
}
This should [also] be validated in the seed validation webhook.
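A sketch of what the webhook-side check could look like — the Destination type and function name here are hypothetical stand-ins, not the actual Kubermatic API types. Reporting one error per invalid field is friendlier than the controller's single catch-all message:

```go
package main

import (
	"errors"
	"fmt"
)

// Destination is a trimmed-down, hypothetical stand-in for the fields of
// a Seed backup destination that the quoted controller code checks.
type Destination struct {
	BucketName  string
	Endpoint    string
	Credentials *string
}

// validateDestination returns one error per invalid field, joined, so a
// seed validation webhook can tell the user exactly what is wrong.
func validateDestination(name string, d Destination) error {
	var errs []error
	if d.BucketName == "" {
		errs = append(errs, fmt.Errorf("destination %q: bucketName must not be empty", name))
	}
	if d.Endpoint == "" {
		errs = append(errs, fmt.Errorf("destination %q: endpoint must not be empty", name))
	}
	if d.Credentials == nil {
		errs = append(errs, fmt.Errorf("destination %q: credentials must be set", name))
	}
	return errors.Join(errs...) // nil when every field is valid
}

func main() {
	fmt.Println(validateDestination("s3", Destination{Endpoint: "https://s3.example.com"}))
}
```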
This will be removed in the next PR, as we need a dedicated CR for the backup destinations.
pkg/util/cluster/backup-config.go
Outdated
return nil, fmt.Errorf("failed to validate backup destination configuration: bucketName, endpoint or credentials are not valid")
}
return &resources.ClusterBackupConfig{
Enabled: cluster.Spec.Features[kubermaticv1.ClusterFeatureUserClusterBackup],
You checked this earlier; you could just return Enabled: true here.
pkg/util/cluster/backup-config.go
Outdated
func extractClusterSeedName(clusterName, clusterURL string) (string, error) {
u, err := url.Parse(clusterURL)
if err != nil {
return "", fmt.Errorf("failed to parse cluster URL: %w", err)
}
parts := strings.Split(u.Host, ".")
if len(parts) < 4 || clusterName != parts[0] { // at least a cluster name, seed name and a base domain.
return "", fmt.Errorf("invalid cluster URL: %s", u.Host)
}
return parts[1], nil
}
This is not allowed. The DNS name for seeds can be overwritten and we must never attempt to deduce anything from it. If you need the Seed name, you need the Seed object's real (Kubernetes) name, not some URL.
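To illustrate the point, here is a runnable copy of the PR's extractClusterSeedName (the hostnames are invented for the demo). Under the default naming scheme the second DNS label happens to be the seed name, but once the seed's DNS name is overridden the function silently returns whatever label sits there:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// extractClusterSeedName guesses the seed name from the cluster URL's
// hostname -- the approach the review rejects.
func extractClusterSeedName(clusterName, clusterURL string) (string, error) {
	u, err := url.Parse(clusterURL)
	if err != nil {
		return "", fmt.Errorf("failed to parse cluster URL: %w", err)
	}
	parts := strings.Split(u.Host, ".")
	if len(parts) < 4 || clusterName != parts[0] {
		return "", fmt.Errorf("invalid cluster URL: %s", u.Host)
	}
	return parts[1], nil
}

func main() {
	// Default naming scheme: the second label happens to be the seed name.
	name, _ := extractClusterSeedName("abcd1234", "https://abcd1234.europe-west3.kubermatic.example.com:6443")
	fmt.Println(name) // europe-west3

	// Overridden seed DNS name: the second label is NOT the seed name,
	// yet the function happily returns it with no error.
	name, _ = extractClusterSeedName("abcd1234", "https://abcd1234.clusters.company.io:6443")
	fmt.Println(name) // clusters
}
```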
Refactored.
Also, with our cool new hip Application feature, why don't we install Velero using that? We already use it for Cilium (i.e. critical components). -- is it because of the "split installation", where half of Velero runs on the seed and the rest in the user cluster, and apps are only installed into user clusters? Yeah, that's probably it. Scratch that question.
@xrstf Thank you for the review! If you could please take another look!
/approve
LGTM label has been added. Git tree hash: a2a94a0febf55de52f7a7302fe825179d5319cfa
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: xrstf The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Review the full test history Silence the bot with an
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #12828
What type of PR is this?
/kind feature
This PR is the first part of the Velero user-cluster integration. It adds an EE controller that deploys the components needed to automatically run Velero in the user-cluster scope.
The following components are deployed in the user-cluster:
The following components are deployed on the seed cluster, in the user-cluster namespace:
Special notes for your reviewer:
Does this PR introduce a user-facing change? Then add your Release Note here:
Documentation: