AWS: Store the OIDC documents in a S3 bucket #636
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alvaroaleman. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from c94041f to efe5db7
cmd/infra/aws/create_iam.go
Outdated
@@ -63,12 +63,16 @@ func NewCreateIAMCommand() *cobra.Command {
	cmd.Flags().StringVar(&opts.AWSCredentialsFile, "aws-creds", opts.AWSCredentialsFile, "Path to an AWS credentials file (required)")
	cmd.Flags().StringVar(&opts.InfraID, "infra-id", opts.InfraID, "Infrastructure ID to use for AWS resources.")
	cmd.Flags().StringVar(&opts.OIDCBucketName, "oidc-bucket-name", "", "The name of the bucket in which the OIDC discovery document is stored")
can we be opinionated about this using the infra-id?
I don't understand what you mean, can you elaborate?
Never mind, we are sharing the bucket among HCs. Before, we had a bucket per HC and thus could just generate the bucket name and use an opinionated path within the bucket, so nothing was required from the user.
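The shared-bucket layout discussed above can be sketched as follows. The helper names and the exact URL shape are assumptions for illustration; the point taken from the discussion is that the infra ID acts as a per-cluster prefix inside one shared bucket, so only the bucket name has to come from the user:

```go
package main

import "fmt"

// oidcDocumentKey returns the object key for one of a hosted cluster's
// OIDC discovery documents. With a bucket shared among hosted clusters,
// the infra ID serves as a per-cluster prefix. (Helper names are
// illustrative, not taken from the PR.)
func oidcDocumentKey(infraID, document string) string {
	return fmt.Sprintf("%s/%s", infraID, document)
}

// issuerURL builds the OIDC issuer the AWS identity provider would be
// pointed at: the bucket's public URL plus the per-cluster prefix.
func issuerURL(bucketName, region, infraID string) string {
	return fmt.Sprintf("https://%s.s3.%s.amazonaws.com/%s", bucketName, region, infraID)
}

func main() {
	fmt.Println(issuerURL("my-oidc-bucket", "us-east-1", "example-abcde"))
	fmt.Println(oidcDocumentKey("example-abcde", ".well-known/openid-configuration"))
}
```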
Force-pushed from efe5db7 to ecd3d53
/retest
@@ -979,6 +995,11 @@ func (r *HostedClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request
		return ctrl.Result{}, fmt.Errorf("failed to reconcile machine approver: %w", err)
	}

	// Reconcile the AWS OIDC discovery
I see the func returns nil if hcluster.Spec.Platform.Type is not aws but (at least for now) we might want to be consistent with the other platform specific reconcilers and gate the call with
switch hcluster.Spec.Platform.Type {
case hyperv1.AWSPlatform:
Updated it, although I find it slightly silly to have a one-entry switch statement
yeh I agree, though I think being consistent with the pattern would make adding more platforms and refactor easier
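The gating pattern being discussed can be sketched with a minimal stand-in for the platform type (the real code would switch on hyperv1.PlatformType inside Reconcile; the string values here are placeholders):

```go
package main

import "fmt"

// PlatformType stands in for hyperv1.PlatformType.
type PlatformType string

const (
	AWSPlatform  PlatformType = "AWS"
	NonePlatform PlatformType = "None"
)

// reconcilePlatform shows the one-case switch under discussion: even
// though only AWS needs the OIDC reconcile today, gating at the call
// site keeps it consistent with the other platform-specific
// reconcilers and makes adding platforms a matter of adding cases.
func reconcilePlatform(platform PlatformType) string {
	switch platform {
	case AWSPlatform:
		return "reconciled AWS OIDC discovery"
	default:
		return "nothing to do for platform " + string(platform)
	}
}

func main() {
	fmt.Println(reconcilePlatform(AWSPlatform))
	fmt.Println(reconcilePlatform(NonePlatform))
}
```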
@@ -558,3 +580,20 @@ func (r HypershiftRecordingRule) Build() *prometheusoperatorv1.PrometheusRule {
	rule.Spec = recordingRuleSpec()
	return rule
}

func AWSOIDCBucketConfigMap(bucketName, bucketRegion string) *corev1.ConfigMap {
Did I get it right that this is putting all clusters' config into the management cluster's kube-public ns?
No, it is a configmap that contains the name and the region of the bucket that contains the OIDC docs. `install iam` needs this to be able to set up the OIDC provider.
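To make the shape of that configmap concrete, here is a sketch using a plain struct in place of corev1.ConfigMap so it stays dependency-free. The configmap name and the data keys are assumptions for illustration, not taken from the PR:

```go
package main

import "fmt"

// configMap is a dependency-free stand-in for corev1.ConfigMap.
type configMap struct {
	Namespace string
	Name      string
	Data      map[string]string
}

// awsOIDCBucketConfigMap mirrors the helper in the diff: it records the
// OIDC bucket's name and region in the kube-public namespace, so the
// IAM setup command can default its flags from it. Name and keys here
// are hypothetical.
func awsOIDCBucketConfigMap(bucketName, bucketRegion string) configMap {
	return configMap{
		Namespace: "kube-public",
		Name:      "aws-oidc-bucket", // illustrative name
		Data: map[string]string{
			"name":   bucketName,
			"region": bucketRegion,
		},
	}
}

func main() {
	cm := awsOIDCBucketConfigMap("my-oidc-bucket", "us-east-1")
	fmt.Println(cm.Namespace, cm.Data["name"], cm.Data["region"])
}
```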
}

if r.AWSOIDCBucketName == "" || r.S3Client == nil {
	return errors.New("hypershift wasn't configured with a S3 bucket or credentials, this makes it unable to set up OIDC for AWS clusters. Please install hypershift with the --aws-oidc-bucket-name, --aws-oidc-bucket-region and --aws-creds flags set. The bucket must pre-exist and the credentials must be authorized to write into it")
This needs to be added as a prerequisite to the README.
I updated the README with instructions on the bucket and how `hypershift install` needs to be parameterized now.
@@ -2822,6 +2843,10 @@ func (r *HostedClusterReconciler) delete(ctx context.Context, req ctrl.Request,
	if err := r.Delete(ctx, ns); err != nil && !apierrors.IsNotFound(err) {
		return false, fmt.Errorf("failed to delete namespace: %w", err)
	}

	if err := r.cleanupOIDCBucketData(ctx, hc); err != nil {
This call should be gated by
switch hcluster.Spec.Platform.Type {
case hyperv1.AWSPlatform:
right?
It internally gates by checking for the finalizer and doing nothing if it is absent
Force-pushed from ecd3d53 to bd6ebff
/retest
Force-pushed from 1fa8e1f to 261f57c
The discussion this morning got me thinking: this breaks a management cluster on platform None supporting HCs for platform AWS. You will still be required to have a management account in which to create the bucket, even though the management cluster doesn't run there. Is this correct?
Yes, the management account with the bucket is needed. However, the management cluster is not required to actually run in AWS, so I would argue that this doesn't break the ability of a management cluster on platform None to run AWS clusters.
Force-pushed from 261f57c to e7c0836
Force-pushed from e7c0836 to 509e2fa
/retest
Force-pushed from 509e2fa to 27602d9
Force-pushed from 15c02ae to dc76dbf
Yes, just as long as you can set
Force-pushed from dc76dbf to 72fb29a
Force-pushed from b5c888d to dfe8e50
Force-pushed from dfe8e50 to 962d485
Force-pushed from 2388abf to 4561986
Force-pushed from d193808 to 1460741
/lgtm
This change makes hypershift store the OIDC documents in an S3 bucket rather than serving them from a dedicated ingress pointing to the kube-apiserver. This is needed for two reasons:

1. Private clusters should not be publicly accessible
2. Tests (and probably more) assume that the serving cert of the endpoint serving these documents is either publicly trusted or signed by the guest cluster's root CA

This implies that a hypershift setup that should manage AWS clusters must be configured with a pre-existing S3 bucket and credentials to write and delete objects in that bucket. In detail, the following things were changed:

* Hypershift install got optional arguments for an OIDC bucket and region
* Hypershift install will always create a configmap with the data from these arguments in the kube-public namespace, although it might be empty if the flags were unset
* If the flags were set, the hypershift operator will be configured with flags for these settings
* AWS Create IAM got optional flags for the bucket region and name that will get defaulted to the values from the configmap in kube-public
* AWS Create IAM will set the issuer URL of the AWS OIDC provider to the S3 bucket url/$infra-id
* For AWS clusters, the hypershift operator will download /.well-known/openid-configuration and /openid/v1/jwks from the kube-apiserver and place them in the S3 bucket below $infra_id
* Once it has done that, it sets an additional finalizer on the hostedcluster
* On cleanup, it will remove the two documents from the bucket again and then remove the finalizer
* The controlplaneoperator will no longer create an OIDC route, as we don't need it anymore

I've verified that this change both doesn't break the AWS OIDC provider and fixes the `[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer` conformance testcase.
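The upload step from the summary above (fetch the two discovery documents from the kube-apiserver, then write them below the infra ID prefix) can be sketched against a minimal uploader interface; the real code would use an aws-sdk-go S3 client, and the interface and function names here are hypothetical:

```go
package main

import "fmt"

// objectPutter abstracts the single S3 operation this sketch needs;
// a real implementation would wrap s3.PutObject.
type objectPutter interface {
	Put(key string, body []byte) error
}

// memoryBucket is an in-memory stand-in for an S3 bucket, used here
// so the sketch is runnable without AWS credentials.
type memoryBucket map[string][]byte

func (m memoryBucket) Put(key string, body []byte) error {
	m[key] = body
	return nil
}

// uploadOIDCDocuments writes each discovery document below the
// cluster's infra ID, matching the shared-bucket layout described in
// the PR summary.
func uploadOIDCDocuments(bucket objectPutter, infraID string, docs map[string][]byte) error {
	for path, body := range docs {
		if err := bucket.Put(infraID+path, body); err != nil {
			return fmt.Errorf("failed to upload %s: %w", path, err)
		}
	}
	return nil
}

func main() {
	bucket := memoryBucket{}
	docs := map[string][]byte{
		"/.well-known/openid-configuration": []byte(`{"issuer":"..."}`),
		"/openid/v1/jwks":                   []byte(`{"keys":[]}`),
	}
	if err := uploadOIDCDocuments(bucket, "example-abcde", docs); err != nil {
		panic(err)
	}
	fmt.Println(len(bucket))
}
```

On deletion, the cleanup path would issue the inverse delete calls for the same two keys before dropping the finalizer.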
Force-pushed from 1460741 to 5cadb9c
/lgtm
Ref https://issues.redhat.com/browse/HOSTEDCP-257
/cc @csrwng @sjenning