
Add support for Admin Partitions #729

Merged
merged 9 commits on Sep 17, 2021
Conversation

@thisisnotashwin (Contributor) commented Sep 16, 2021

Changes proposed in this PR:

  • Add ability to create Admin Partitions in both Server clusters and Workload clusters
  • Add Admin Partition support to Ingress and Terminating Gateways.
  • Create a Partition Service on the server cluster when Admin Partitions are enabled.
  • Add support for Partitions to Endpoints Controller and Connect Inject.
  • Update Config Entries with Admin Partitions

Update this PR with the alpha image for Consul 1.11 and ensure the acceptance tests are 🟢.
We will not update the values file with the above image as it is an alpha image.

How I've tested this PR:

  • Each commit has been individually tested as a PR.

How I expect reviewers to test this PR:

  • Please read the CHANGELOG and provide suggestions (SEND HELP)
  • Approve once the pipelines are 🟢

Checklist:

  • Tests added
  • CHANGELOG entry added

    HashiCorp engineers only, community PRs should not add a changelog entry.
    Entries should use present tense (e.g. Add support for...)

* Create Admin Partitions when enabled
* Add support for Partition Service
- This is a LoadBalancer service that exposes the server pods. It provides a constant address for client auto-join even if the node IPs of the server pods change.
* Support NodePort along with LoadBalancer as service types for Partition service.
- This allows interop with Kind. Additionally, users can set explicit NodePorts for the Serf, HTTPS and RPC endpoints.
* Fail if agents in the server cluster are created in a partition other than "default"

* Fail helm upgrades if partition name is updated.

- This does not prevent the partition-init job from running or the agents
  from attempting to join the new partition, but the helm upgrade stays
  in a failed state until an upgrade is run with the partition name
  reverted.
* Add partition support to config entries

* Fail if namespaces are not enabled and admin partitions are enabled
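
Putting the commits above together, a server-cluster values file enabling Admin Partitions might look roughly like the following. This is a hypothetical sketch against the chart as described in this PR, not a config taken verbatim from it; key names such as `adminPartitions.service` are assumptions based on the description above.

```yaml
# values-server.yaml -- hypothetical sketch, not quoted from the PR
global:
  image: hashicorp/consul-enterprise:1.11.0-ent-alpha
  enableConsulNamespaces: true   # partitions fail fast unless namespaces are enabled
  tls:
    enabled: true                # required: HTTPS is used for external-server communication
  adminPartitions:
    enabled: true
    name: "default"              # agents in the server cluster must be in "default"
    # The Partition service defaults to LoadBalancer; NodePort is supported
    # (e.g. for Kind), with explicit ports for Serf, HTTPS and RPC.
    service:
      type: LoadBalancer
```

Note that, per the commit notes above, changing `adminPartitions.name` on an existing release causes the helm upgrade itself to fail until the name is reverted.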
@thisisnotashwin thisisnotashwin marked this pull request as ready for review September 17, 2021 13:11
@thisisnotashwin thisisnotashwin requested review from ishustava, a team and t-eckert and removed request for a team September 17, 2021 13:11
@t-eckert (Contributor) left a comment:

[Gif of flippin' sending it]

Just flippin' send it, bro.

@ishustava (Contributor) left a comment:

Looks great! Haven't looked at the code since it's all been reviewed already, but left some suggestions/questions on the changelog.

CHANGELOG.md Outdated
* Add support for Admin Partitions. **(Consul Enterprise only)**
**ALPHA** [[GH-729](https://github.com/hashicorp/consul-k8s/pull/729)]

This feature allows Consul to be deployed across multiple Kubernetes clusters while sharing a single set of Consul

I think this section needs to be indented so that it all looks like part of the admin partitions bullet point.

CHANGELOG.md Outdated
**ALPHA** [[GH-729](https://github.com/hashicorp/consul-k8s/pull/729)]

This feature allows Consul to be deployed across multiple Kubernetes clusters while sharing a single set of Consul
servers. The services on each cluster can be independently managed. This feature in the alpha phase requires a flat pod

might be nice to put these requirements into a bulleted list so that it reads better.

CHANGELOG.md Outdated
tls:
enabled: true
image: hashicorp/consul-enterprise:1.11.0-ent-alpha
imageK8S: hashicorp/consul-k8s-control-plane:0.34.0

this image is probably not needed since it's the default, right?

@thisisnotashwin (Author) replied:

Good point!! 😄

CHANGELOG.md Outdated

This feature allows Consul to be deployed across multiple Kubernetes clusters while sharing a single set of Consul
servers. The services on each cluster can be independently managed. This feature in the alpha phase requires a flat pod
and node network in order for inter-partition networking to work. This feature requires TLS to be enabled. This feature

why is TLS required?

@thisisnotashwin (Author) replied:

We require HTTPS for communication to externalServers. We had that already as a requirement and I decided not to loosen it for AP.

CHANGELOG.md Outdated
```
kubectl get svc consul-consul-partition-service -o json | jq -r '.status.loadBalancer.ingress[0].ip'
```

Migrate the TLS certs from the server cluster to the workload clusters

Suggested change
Migrate the TLS certs from the server cluster to the workload clusters
Migrate the TLS CA credentials from the server cluster to the workload clusters

CHANGELOG.md Outdated
```
kubectl get secret consul-consul-ca-cert --context "server-context" -o yaml | kubectl apply --context "workload-context" -f -
```

Configure the Workload cluster using the following config.

Suggested change
Configure the Workload cluster using the following config.
Configure the workload cluster using the following config.

CHANGELOG.md Outdated

This should lead to the workload cluster having only Consul agents that connect with the Consul server. Services in this
cluster behave like independent services. They can be configured to communicated with services in other partitions by

Suggested change
cluster behave like independent services. They can be configured to communicated with services in other partitions by
cluster behave like independent services. They can be configured to communicate with services in other partitions by
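
For context, the workload-cluster config referenced in the quoted hunks is not reproduced in full on this page. A rough sketch of what it would contain, based on the steps discussed above, might be (all key values here are hypothetical; the real config lives in the CHANGELOG):

```yaml
# values-workload.yaml -- hypothetical sketch; the real config is in the CHANGELOG
global:
  enabled: false                          # no Consul servers in the workload cluster
  enableConsulNamespaces: true
  adminPartitions:
    enabled: true
    name: "team-1"                        # hypothetical non-default partition name
  tls:
    enabled: true
    caCert:
      secretName: consul-consul-ca-cert   # the CA secret migrated from the server cluster
externalServers:
  enabled: true
  hosts: ["<partition-service-ip>"]       # the address returned by the kubectl/jq command above
server:
  enabled: false
```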

3 participants