
Use direct cluster ingress #94

Merged

Conversation

sjenning
Contributor

@sjenning sjenning commented Mar 9, 2021

Remaining issues:

Reviewer notes:

  • The cluster name is now set at the create infra stage, not the create cluster stage. There are new --base-domain and --name flags on create infra. --base-domain is required, while --name defaults to example, the same as create cluster.
  • create infra will look up the public zone ID based on the base-domain, and will look up or create the private zone based on the name and base-domain.

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 9, 2021
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sjenning

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 9, 2021
@sjenning sjenning force-pushed the direct-cluster-ingress branch 3 times, most recently from d9a61f0 to efe7925 on March 9, 2021 15:39
@sjenning
Contributor Author

sjenning commented Mar 9, 2021

This PR is blocked by two things:

  • Guest cluster instances not added to ELB for Service type LoadBalancer
  • ingress-operator doesn't have AWS creds for accessing ELB/Route53

refs
https://github.com/openshift/cluster-ingress-operator/blob/630c1cf24a46da73f2dca4f06899b451cf190987/manifests/00-ingress-credentials-request.yaml#L22-L25
https://github.com/openshift/cluster-ingress-operator/blob/630c1cf24a46da73f2dca4f06899b451cf190987/manifests/02-deployment.yaml#L108-L115

@@ -35,6 +35,20 @@ type HostedClusterSpec struct {

// InfraID is used to identify the cluster in cloud platforms
InfraID string `json:"infraID,omitempty"`

// DNS configuration for the cluster
Member


nit: isn't this for the ingress controller, not the cluster itself?

Contributor


@derekwaynecarr It is for the ingress controller. Which makes me think, should we allow you to specify an arbitrary list of ingress controllers like Hive does?

Contributor


  1. If this is for ingress, and if there are multiple ingresses configured (as @csrwng asks), does that mean each ingress's DNS zone is defined independently?
  2. What fields are mutable? What is the behavior if they are mutated?

Contributor Author


Yes, this just configures the default ingress controller

Member


can we reflect this expectation either in a comment or the API field name?

@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 13, 2021
@openshift-ci-robot openshift-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 18, 2021
@@ -174,7 +180,7 @@ func CreateCluster(ctx context.Context, opts Options) error {

exampleObjects := apifixtures.ExampleOptions{
Namespace: opts.Namespace,
- Name: opts.Name,
+ Name: infra.Name,
Member

@enxebre enxebre Mar 30, 2021


Just to clarify, is it a hard requirement for cluster.Name to be the same as infra.Name? Or is that only for the BaseDomain and the infraID?

Contributor Author


The cluster Name is independent of the InfraID. The Name and BaseDomain are used to determine the DNS private zone.

Contributor


It's not clear to me still why ExampleOptions.Name is now being changed to infra.Name. Isn't ExampleOptions.Name supposed to equal the HostedCluster name (represented here by opts.Name)?

}

cmd.Flags().StringVar(&opts.InfraID, "infra-id", opts.InfraID, "Cluster ID with which to tag AWS resources (required)")
cmd.Flags().StringVar(&opts.AWSCredentialsFile, "aws-creds", opts.AWSCredentialsFile, "Path to an AWS credentials file (required)")
cmd.Flags().StringVar(&opts.OutputFile, "output-file", opts.OutputFile, "Path to file that will contain output information from infra resources (optional)")
cmd.Flags().StringVar(&opts.Region, "region", opts.Region, "Region where cluster infra should be created")
cmd.Flags().StringSliceVar(&opts.AdditionalTags, "additional-tags", opts.AdditionalTags, "Additional tags to set on AWS resources")
cmd.Flags().StringVar(&opts.Name, "name", opts.Name, "A name for the cluster")
Member

@enxebre enxebre Mar 30, 2021


maybe clarify what "a name for the cluster" means? e.g. "this is added as a subdomain when creating ingress resources"

Contributor Author


This is the same flag you would pass to create cluster, and I copied the description from there, hoping people would understand they are the same thing. While we only use the name for DNS at create infra time, we use it for other things at create cluster time, e.g. determining the namespace for the HCP.

awsConfig := &aws.Config{
// Route53 is weird about regions
// https://github.com/openshift/cluster-ingress-operator/blob/5660b43d66bd63bbe2dcb45fb40df98d8d91347e/pkg/dns/aws/dns.go#L163-L169
Region: aws.String("us-east-1"),
Member


😒

Contributor Author


indeed 😦


func (o *CreateInfraOptions) CreatePrivateZone(client route53iface.Route53API, vpcID string) (string, error) {
name := fmt.Sprintf("%s.%s", o.Name, o.BaseDomain)
id, err := lookupZone(client, name, true)
Member


nit: this can be inlined into the if statement since the variables are only used within its scope

@@ -85,6 +88,7 @@ func NewCreateCommand() *cobra.Command {
cmd.Flags().StringVar(&opts.Region, "region", opts.Region, "Region to use for AWS infrastructure.")
cmd.Flags().StringVar(&opts.InfraID, "infra-id", opts.InfraID, "Infrastructure ID to use for AWS resources.")
Member


orthogonal to this PR, we should clarify that this must be unique for each cluster living in the same AWS account


// DNSSpec specifies the DNS configuration in the cluster
type DNSSpec struct {
// BaseDomain is the base domain of the cluster.
Contributor


We need to figure out how to specify this unambiguously.

  • What exactly does it represent? e.g.
    • The canonical domain suffix for any generated records for this cluster?
    • The suffix of a computed canonical name (e.g. name+basedomain)?
  • Is it a canonical FQDN? If not, how is the canonical value computed?
    • The current implementation seems to imply this is not a canonical FQDN
  • Must BaseDomain be a subdomain of the domain represented by PublicZoneId/PrivateZoneID?
  • What are the uniqueness constraints?
  • If there are uniqueness constraints, it's implied the value on spec is not usable absent some sort of async validation process
    • Where is that validation result surfaced on status?
    • What field will our controllers read given the spec value isn't necessarily valid?
    • Does this imply some sort of valid or admitted condition for HostedClusters generally, one the API should describe and the operator must honor?

Contributor


If ingress is considered required, I'm not entirely sure how to separate the notion that this is configuration for "the ingress controller" and not for the cluster itself (https://github.com/openshift/hypershift/pull/94/files#r591603025). Seems like a distinction without a difference?

// PublicZoneID is the Hosted Zone ID where all the DNS records that are publicly accessible to
// the internet exist.
// +optional
PublicZoneID string `json:"publicZoneID,omitempty"`
Contributor


Along the lines of the above, what do these represent? e.g. are they assumed to be authoritative for BaseDomain?

@sjenning sjenning force-pushed the direct-cluster-ingress branch 2 times, most recently from f47a3d5 to 84df1b8 on March 31, 2021 16:58
@sjenning sjenning changed the title WIP: use direct cluster ingress Use direct cluster ingress Mar 31, 2021
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 31, 2021
@sjenning
Contributor Author

/test e2e-aws

@sjenning
Contributor Author

/test e2e-aws

@sjenning
Contributor Author

/test e2e-aws

@csrwng
Contributor

csrwng commented Mar 31, 2021

tagging, and will address pending issues as a follow-up
/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Mar 31, 2021
@openshift-merge-robot openshift-merge-robot merged commit c166356 into openshift:main Mar 31, 2021
@sjenning sjenning deleted the direct-cluster-ingress branch April 1, 2021 16:12