From a85baf3f1ef102cad851fcc7c5638982939cccd9 Mon Sep 17 00:00:00 2001
From: boruszak
Date: Thu, 13 Nov 2025 09:25:17 -0800
Subject: [PATCH 1/3] Restore page + update redirects

---
 content/consul/v1.22.x/content/docs/hcp.mdx | 338 ++++++++++++++++++
 .../consul/v1.22.x/data/docs-nav-data.json  |   4 +
 content/hcp-docs/redirects.jsonc            |   7 +-
 3 files changed, 348 insertions(+), 1 deletion(-)
 create mode 100644 content/consul/v1.22.x/content/docs/hcp.mdx

diff --git a/content/consul/v1.22.x/content/docs/hcp.mdx b/content/consul/v1.22.x/content/docs/hcp.mdx
new file mode 100644
index 0000000000..23da1202b5
--- /dev/null
+++ b/content/consul/v1.22.x/content/docs/hcp.mdx
@@ -0,0 +1,338 @@
+---
+page_title: HCP Consul Dedicated
+description: |-
+  This topic provides an overview of HCP Consul Dedicated clusters and the process to migrate to self-managed Consul clusters.
+---
+
+# HCP Consul Dedicated
+
+This topic describes HCP Consul Dedicated, the networking software as a service (SaaS) product that was previously available through the HashiCorp Cloud Platform (HCP). HCP Consul Dedicated reached end-of-life on November 12, 2025.
+
+## Introduction
+
+HCP Consul Dedicated was a service that provided simplified workflows for common Consul tasks and the option to have HashiCorp set up and manage your Consul servers for you.
+
+On November 12, 2025, HashiCorp ended operations and support for HCP Consul Dedicated clusters. As of this date, you can no longer deploy, access, update, or manage Dedicated clusters.
+
+We recommend migrating HCP Consul Dedicated deployments to self-managed server clusters running Consul Enterprise. On virtual machines, this migration requires some downtime for the server cluster but enables continuity between existing configurations and operations. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful.
+ +## Migration workflows + +The process to migrate a Dedicated cluster to a self-managed environment consists of the following steps, which change depending on whether your cluster runs on virtual machines (VMs) or Kubernetes. + +### VMs + +To migrate on VMs, complete the following steps: + +1. [Take a snapshot of the HCP Consul Dedicated cluster](#take-a-snapshot-of-the-hcp-consul-dedicated-cluster). +1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster). +1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment). +1. [Update the client configuration file to point to the new server](#update-the-client-configuration-file-to-point-to-the-new-server). +1. [Restart the client agent and verify that the migration was successful](#restart-the-client-agent-and-verify-that-the-migration-was-successful). +1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-supporting-resources-and-decommission-the-hcp-consul-dedicated-cluster). + +### Kubernetes + +To migrate on Kubernetes, complete the following steps: + +1. [Take a snapshot of the HCP Consul Dedicated cluster](#take-a-snapshot-of-the-hcp-consul-dedicated-cluster-1). +1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster-1). +1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment). +1. [Update the CoreDNS configuration](#update-the-coredns-configuration). +1. [Update the `values.yaml` file](#update-the-values-yaml-file). +1. [Upgrade the cluster](#upgrade-the-cluster). +1. [Redeploy workload applications](#redeploy-workload-applications). +1. [Switch the CoreDNS entry](#switch-the-coredns-entry). +1. 
[Verify that the migration was successful](#verify-that-the-migration-was-successful). +1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-and-decommission-the-hcp-consul-dedicated-cluster-and-its-supporting-resources). + +## Recommendations and best practices + +On VMs, the migration process requires a temporary outage that lasts from the time when you restore the snapshot on the self-managed cluster until the time when you restart client agents after updating their configuration. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful. + +In addition, data written to the Dedicated server after the snapshot is created cannot be restored. + +To limit the duration of outages, we recommend using a dev environment to test the migration before fully migrating production workloads. The length of the outage depends on the number of clients, the self-managed environment, and the automated processes involved. + +Regardless of whether you use VMs or Kubernetes, we also recommend using [Consul maintenance mode](/consul/commands/maint) to schedule a period of inactivity to address unforeseen data loss or data sync issues that result from the migration. + +## Prerequisites + +The migration instructions on this page make the following assumptions about your existing infrastructure: + +- You already deployed an HCP Consul Dedicated server cluster and a self-managed server cluster with matching configurations. These configurations should include the following settings: + - Both clusters have 3 nodes. + - ACLs, TLS, and gossip encryption are enabled. +- You have command line access to both the Dedicated cluster and your self-managed cluster. +- You [generated an admin token for the Dedicated cluster](/hcp/docs/consul/dedicated/access#generate-admin-token) and exported it to the `CONSUL_HTTP_TOKEN` environment variable. 
Alternatively, add the `-token=<token>` flag to CLI commands.
+- The clusters have an existing VPC or peering connection.
+- You already identified the client nodes affected by the migration.
+
+If you are migrating clusters on Kubernetes, refer to the [version compatibility matrix](/consul/docs/k8s/compatibility#compatibility-matrix) to ensure that you are using compatible versions of `consul` and `consul-k8s`.
+
+In addition, you must migrate to an Enterprise cluster, which requires an Enterprise license. Migrating to Community edition clusters is not possible. If you do not have access to a Consul Enterprise license, [file a support request to let us know](https://support.hashicorp.com/hc/en-us/requests/new). A member of the account team will reach out to assist you.
+
+## Migrate to self-managed on VMs
+
+To migrate to a self-managed Consul Enterprise cluster on VMs, [connect to the Dedicated cluster's current leader node](/hcp/docs/consul/dedicated/access) and then complete the following steps.
+
+### Take a snapshot of the HCP Consul Dedicated cluster
+
+A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.
+
+As of November 12, 2025, you cannot take a snapshot of an HCP Dedicated cluster. We will retain cluster snapshots for 30 days. [Contact HCP support](https://support.hashicorp.com/hc/en-us/requests/new) if you need help accessing your most recent snapshot.
+
+### Transfer the snapshot to a self-managed cluster
+
+Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.
+
+```shell-session
+$ scp /home/backup/hcp-cluster.snapshot <user>@<ip-address>:/home/backup
+```
+
+### Use the snapshot to restore the cluster in your self-managed environment
+
+After you transfer the snapshot file to the self-managed node, restore the cluster’s state from the snapshot in your self-managed environment.
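+
+Before you restore, you can optionally confirm that the snapshot arrived intact by inspecting it on the self-managed node. This is a sketch; the metadata values in the output are illustrative only.
+
+```shell-session
+$ consul snapshot inspect /home/backup/hcp-cluster.snapshot
+ID           2-1182-1687970290069
+Size         4115
+Index        1182
+Term         2
+Version      1
+```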
+
+Export the `CONSUL_HTTP_TOKEN` environment variable in your self-managed environment and then run the following command.
+
+```shell-session
+$ consul snapshot restore /home/backup/hcp-cluster.snapshot
+Restored snapshot
+```
+
+If you cannot use environment variables, add the `-token=` flag to the command:
+
+```shell-session
+$ consul snapshot restore /home/backup/hcp-cluster.snapshot -token="<token>"
+Restored snapshot
+```
+
+For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore).
+
+### Update the client configuration file to point to the new server
+
+Modify the agent configuration on your Consul clients. You must update the following configuration values:
+
+- `retry_join` IP address
+- TLS encryption
+- ACL token
+
+You can use an existing certificate authority or create a new one in your self-managed cluster. For more information, refer to [Service mesh certificate authority overview in the Consul documentation](/consul/docs/connect/ca).
+
+The following example demonstrates a modified client configuration.
+
+```hcl
+retry_join = ["<self-managed-server-ip-address>"]
+
+tls {
+  defaults {
+    auto_encrypt {
+      allow_tls = true
+      tls = true
+    }
+    verify_incoming = true
+    verify_outgoing = true
+  }
+}
+
+acl {
+  enabled = true
+  default_policy = "deny"
+  enable_token_persistence = true
+  tokens {
+    agent = "<acl-token>"
+  }
+}
+```
+
+For more information about configuring these fields, refer to the [agent configuration reference in the Consul documentation](/consul/docs/agent/config/config-files).
+
+### Restart the client agent and verify that the migration was successful
+
+Restart the client to apply the updated configuration and reconnect it to the new cluster.
+
+```shell-session
+$ sudo systemctl restart consul
+```
+
+After you update and restart all of the client agents, check the catalog to ensure that clients migrated successfully. You can check the Consul UI or run the following CLI command.
+
+```shell-session
+$ consul members
+```
+
+Run `consul members` on the Dedicated cluster as well. Ensure that all clients appear as `inactive` or `left`.
+
+### Disconnect supporting resources and decommission the HCP Consul Dedicated cluster
+
+After you confirm that your client agents successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.
+
+## Migrate to self-managed on Kubernetes
+
+To migrate to a self-managed Consul Enterprise cluster on Kubernetes, [connect to the Dedicated cluster's current leader node](/hcp/docs/consul/dedicated/access) and then complete the following steps.
+
+### Take a snapshot of the HCP Consul Dedicated cluster
+
+A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.
+
+As of November 12, 2025, you cannot take a snapshot of an HCP Dedicated cluster. We will retain cluster snapshots for 30 days. [Contact HCP support](https://support.hashicorp.com/hc/en-us/requests/new) if you need help accessing your most recent snapshot.
+
+### Transfer the snapshot to a self-managed cluster
+
+Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.
+
+```shell-session
+$ scp /home/backup/hcp-cluster.snapshot <user>@<ip-address>:/home/backup
+```
+
+### Use the snapshot to restore the cluster in your self-managed environment
+
+After you transfer the snapshot file to the self-managed node, use the `kubectl exec` command to restore the cluster’s state in your self-managed Kubernetes environment.
+
+```shell-session
+$ kubectl exec consul-server-0 -c consul -- consul snapshot restore /home/backup/hcp-cluster.snapshot
+Restored snapshot
+```
+
+For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore).
+
+### Update the CoreDNS configuration
+
+Update the CoreDNS configuration on your Kubernetes cluster to point to the Dedicated cluster's IP address. Make sure the configured hostname resolves correctly to the cluster’s IP from inside a deployed pod.
+
+
+
+```yaml
+Corefile: |-
+  .:53 {
+    errors
+    health {
+      lameduck 5s
+    }
+    ready
+    kubernetes cluster.local in-addr.arpa ip6.arpa {
+      pods insecure
+      fallthrough in-addr.arpa ip6.arpa
+      ttl 30
+    }
+    hosts {
+      35.91.49.134 server.hcp-managed.consul
+      fallthrough
+    }
+    prometheus 0.0.0.0:9153
+    forward . 8.8.8.8 8.8.4.4 /etc/resolv.conf
+    cache 30
+    loop
+    reload
+    loadbalance
+  }
+```
+
+
+
+If there are issues when you attempt to resolve the hostname, check if the nameserver resolves to the `CLUSTER-IP` inside the pod. Run the following command to return the `CLUSTER-IP`.
+
+```shell-session
+$ kubectl -n kube-system get svc
+NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
+coredns   ClusterIP   10.100.224.88   <none>        53/UDP,53/TCP   4h24m
+```
+
+### Update the `values.yaml` file
+
+Update the Helm configuration or `values.yaml` file for your self-managed cluster. You should update the following fields:
+
+- Update the server host value. Use the host name you added when you updated the CoreDNS configuration.
+- Create a Kubernetes secret in the `consul` namespace with a new CA file created by adding the contents of all of the following CA files. Add the CA file contents of the new self-managed server at the end.
+ - [https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem) + - [https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem) + - [https://letsencrypt.org/certs/2024/e5-cross.pem](https://letsencrypt.org/certs/2024/e5-cross.pem) + - [https://letsencrypt.org/certs/2024/e6-cross.pem](https://letsencrypt.org/certs/2024/e6-cross.pem) + - [https://letsencrypt.org/certs/2024/r10.pem](https://letsencrypt.org/certs/2024/r10.pem) + - [https://letsencrypt.org/certs/2024/r11.pem](https://letsencrypt.org/certs/2024/r11.pem) +- Update the `tlsServerName` field to the appropriate value. It is usually the hostname of the +managed cluster. If the value is not known, TLS verification fails when you apply this configuration and the error log lists possible values. +- Set `useSystemRoots` to `false` to use the new CA certs. + +For more information about configuring these fields, refer to the [Consul on Kubernetes Helm chart reference](/consul/docs/k8s/helm). + +### Upgrade the cluster + +After you update the `values.yaml` file, run the following command to update the self-managed Kubernetes cluster. + +```shell-session +$ consul-k8s upgrade -config-file=values.yaml +``` + +This command redeploys the Consul pods with the updated configurations. Although the CoreDNS installation still points to the Dedicated cluster, the pods have access to the new CA file. + +### Redeploy workload applications + +Redeploy all the workload applications so that the `init` containers run again and fetch the new CA file. After you redeploy the applications, run a `kubectl describe pod` command on any workload pod and verify the output resembles the following example. 
+
+
+```shell-session
+$ kubectl describe pod -l name="product-api-8cf8c8ccc-kvkk8"
+Environment:
+  POD_NAME:            product-api-8cf8c8ccc-kvkk8 (v1:metadata.name)
+  POD_NAMESPACE:       default (v1:metadata.namespace)
+  NODE_NAME:           (v1:spec.nodeName)
+  CONSUL_ADDRESSES:    server.consul.one
+  CONSUL_GRPC_PORT:    8502
+  CONSUL_HTTP_PORT:    443
+  CONSUL_API_TIMEOUT:  5m0s
+  CONSUL_NODE_NAME:    $(NODE_NAME)-virtual
+  CONSUL_USE_TLS:      true
+  CONSUL_CACERT_PEM:   -----BEGIN CERTIFICATE-----\r
+MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/\r
+MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT\r
+DkRTVCBSb290IENBIFgzMB4XDTIxMDEyMDE5MTQwM1oXDTI0MDkzMDE4MTQwM1ow\r
+TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh\r
+cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwggIiMA0GCSqGSIb3DQEB\r
+AQUAA4ICDwAwggIKAoICAQCt6CRz9BQ385ueK1coHIe+3LffOJCMbjzmV6B493XC
+```
+
+
+
+### Switch the CoreDNS entry
+
+Update the CoreDNS configuration with the self-managed server's IP address.
+
+If the `tlsServerName` of the self-managed cluster is different from the `tlsServerName` on the Dedicated cluster, you must update the field and re-run the `consul-k8s upgrade` command. For self-managed clusters, the `tlsServerName` usually takes the form `server.<datacenter>.consul`.
+
+### Verify that the migration was successful
+
+After you update the CoreDNS entry, check the Consul catalog to ensure that the migration was successful. You can check the Consul UI or run the following CLI command.
+
+```shell-session
+$ kubectl exec consul-server-0 -c consul -- consul members
+```
+
+Run `consul members` on the Dedicated cluster as well. Ensure that all service nodes appear as `inactive` or `left`.
+
+### Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources
+
+After you confirm that your services successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources.
If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.
+
+## Troubleshooting
+
+You might encounter errors when migrating from an HCP Consul Dedicated cluster to a self-managed Consul Enterprise cluster.
+
+### Troubleshoot on VMs
+
+If you encounter a `403 Permission Denied` error when you attempt to generate a new ACL bootstrap token, or if you misplace the bootstrap token, you can reset the ACL system. Write the Raft index number included in the error output into the bootstrap reset file. You must run this command on the leader node.
+
+The following example uses `13` as its Raft index:
+
+```shell-session
+$ echo 13 >> consul.d/acl-bootstrap-reset
+```
+
+### Troubleshoot on Kubernetes
+
+If you encounter issues resolving the hostname, check whether the nameserver matches the `CLUSTER-IP`. One possible cause is that the `ClusterDNS` field in the kubelet configuration points to an IP address that differs from the `CLUSTER-IP` used by the Kubernetes worker nodes. Change the kubelet configuration to use the `CLUSTER-IP` and then restart the kubelet process on all nodes.
+
+## Support
+
+If you have questions or need additional help when migrating to a self-managed Consul Enterprise cluster, [submit a request to our support team](https://support.hashicorp.com/hc/en-us/requests/new).
diff --git a/content/consul/v1.22.x/data/docs-nav-data.json b/content/consul/v1.22.x/data/docs-nav-data.json index b0077a1ec5..4fec48ba7b 100644 --- a/content/consul/v1.22.x/data/docs-nav-data.json +++ b/content/consul/v1.22.x/data/docs-nav-data.json @@ -2552,6 +2552,10 @@ "path": "openshift" } ] + }, + { + "title": "HCP Consul Dedicated", + "path": "hcp" }, { "divider": true diff --git a/content/hcp-docs/redirects.jsonc b/content/hcp-docs/redirects.jsonc index 15a36f5491..38830f3ec4 100644 --- a/content/hcp-docs/redirects.jsonc +++ b/content/hcp-docs/redirects.jsonc @@ -624,7 +624,12 @@ }, { "source": "/hcp/docs/consul/:slug*", - "destination": "/hcp/docs/changelog#2025-11-12", + "destination": "/consul/docs/hcp", "permanent": true, + }, + { + "source": "/consul/docs/:version(v1\\.(?:18|19|20)\\.x)/hcp", + "destination": "/consul/docs/hcp", + "permanent": true } ] From ca3f9d1dbca7112705475e590063d8e507600668 Mon Sep 17 00:00:00 2001 From: boruszak Date: Thu, 13 Nov 2025 10:01:57 -0800 Subject: [PATCH 2/3] Small edits --- content/consul/v1.22.x/content/docs/hcp.mdx | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/content/consul/v1.22.x/content/docs/hcp.mdx b/content/consul/v1.22.x/content/docs/hcp.mdx index 23da1202b5..9b7e651256 100644 --- a/content/consul/v1.22.x/content/docs/hcp.mdx +++ b/content/consul/v1.22.x/content/docs/hcp.mdx @@ -6,7 +6,9 @@ description: |- # HCP Consul Dedicated -This topic describes HCP Consul Dedicated, the networking software as a service (SaaS) product that was previously available through the HashiCorp Cloud Platform (HCP). HCP Consul Dedicated reached end-of-life on November 12, 2025. +This topic describes HCP Consul Dedicated, the networking software as a service (SaaS) product that was previously available through the HashiCorp Cloud Platform (HCP). + +HCP Consul Dedicated reached end-of-life on November 12, 2025. ## Introduction @@ -46,7 +48,7 @@ To migrate on Kubernetes, complete the following steps: 1. 
[Verify that the migration was successful](#verify-that-the-migration-was-successful). 1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-and-decommission-the-hcp-consul-dedicated-cluster-and-its-supporting-resources). -## Recommendations and best practices +## Migration recommendations and best practices On VMs, the migration process requires a temporary outage that lasts from the time when you restore the snapshot on the self-managed cluster until the time when you restart client agents after updating their configuration. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful. @@ -56,7 +58,7 @@ To limit the duration of outages, we recommend using a dev environment to test t Regardless of whether you use VMs or Kubernetes, we also recommend using [Consul maintenance mode](/consul/commands/maint) to schedule a period of inactivity to address unforeseen data loss or data sync issues that result from the migration. 
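 
 For example, you can place a client node in maintenance mode before you migrate it and remove the mode afterward. This is a sketch; the reason string is an assumption.
 
 ```shell-session
 $ consul maint -enable -reason "HCP Dedicated migration"
 Node maintenance is now enabled
 $ consul maint -disable
 Node maintenance is now disabled
 ```
 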
-## Prerequisites +## Migration prerequisites The migration instructions on this page make the following assumptions about your existing infrastructure: From a3cb8a899032fe83fc907d3be2391cd76f8bbefc Mon Sep 17 00:00:00 2001 From: boruszak Date: Thu, 13 Nov 2025 12:53:45 -0800 Subject: [PATCH 3/3] Updates from review --- content/consul/v1.22.x/content/docs/hcp.mdx | 44 +++++++++------------ 1 file changed, 19 insertions(+), 25 deletions(-) diff --git a/content/consul/v1.22.x/content/docs/hcp.mdx b/content/consul/v1.22.x/content/docs/hcp.mdx index 9b7e651256..9a7ddfdb67 100644 --- a/content/consul/v1.22.x/content/docs/hcp.mdx +++ b/content/consul/v1.22.x/content/docs/hcp.mdx @@ -20,13 +20,13 @@ We recommend migrating HCP Consul Dedicated deployments to self-managed server c ## Migration workflows -The process to migrate a Dedicated cluster to a self-managed environment consists of the following steps, which change depending on whether your cluster runs on virtual machines (VMs) or Kubernetes. +The process to migrate a Dedicated cluster to a self-managed environment consists of steps that depend on whether your cluster runs on virtual machines (VMs) or Kubernetes. ### VMs To migrate on VMs, complete the following steps: -1. [Take a snapshot of the HCP Consul Dedicated cluster](#take-a-snapshot-of-the-hcp-consul-dedicated-cluster). +1. [Retrieve a snapshot of your cluster](#retrieve-a-snapshot-of-your-cluster). 1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster). 1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment). 1. [Update the client configuration file to point to the new server](#update-the-client-configuration-file-to-point-to-the-new-server). @@ -37,7 +37,7 @@ To migrate on VMs, complete the following steps: To migrate on Kubernetes, complete the following steps: -1. 
[Take a snapshot of the HCP Consul Dedicated cluster](#take-a-snapshot-of-the-hcp-consul-dedicated-cluster-1). +1. [Retrieve a snapshot of the HCP Consul Dedicated cluster](#retrieve-a-snapshot-of-the-hcp-consul-dedicated-cluster-1). 1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster-1). 1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment). 1. [Update the CoreDNS configuration](#update-the-coredns-configuration). @@ -62,12 +62,10 @@ Regardless of whether you use VMs or Kubernetes, we also recommend using [Consul The migration instructions on this page make the following assumptions about your existing infrastructure: -- You already deployed an HCP Consul Dedicated server cluster and a self-managed server cluster with matching configurations. These configurations should include the following settings: +- Your previous HCP Consul Dedicated server cluster and current self-managed server cluster have matching configurations. These configurations should include the following settings: - Both clusters have 3 nodes. - ACLs, TLS, and gossip encryption are enabled. -- You have command line access to both the Dedicated cluster and your self-managed cluster. -- You [generated an admin token for the Dedicated cluster](/hcp/docs/consul/dedicated/access#generate-admin-token) and exported it to the `CONSUL_HTTP_TOKEN` environment variable. Alternatively, add the `-token=` flag to CLI commands. -- The clusters have an existing VPC or peering connectivity connection. +- You have command line access to your self-managed cluster. - You already identified the client nodes affected by the migration. If you are migrating clusters on Kubernetes, refer to the [version compatibility matrix](/consul/docs/k8s/compatibility#compatibility-matrix) to ensure that you are using compatible versions of `consul` and `consul-k8s`. 
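 
 To confirm the versions in your environment, run the version subcommands. This is a sketch; the versions shown are illustrative only.
 
 ```shell-session
 $ consul version
 Consul v1.21.0+ent
 $ consul-k8s version
 consul-k8s v1.7.0
 ```
 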
@@ -76,9 +74,9 @@ In addition, you must migrate to an Enterprise cluster, which requires an Enterp
 
 ## Migrate to self-managed on VMs
 
-To migrate to a self-managed Consul Enterprise cluster on VMs, [connect to the Dedicated cluster's current leader node](/hcp/docs/consul/dedicated/access) and then complete the following steps.
+Complete the following steps to migrate to a self-managed Consul Enterprise cluster on VMs.
 
-### Take a snapshot of the HCP Consul Dedicated cluster
+### Retrieve a snapshot of your cluster
 
 A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.
 
@@ -94,9 +92,9 @@ $ scp /home/backup/hcp-cluster.snapshot @:/home/backup
 
 ### Use the snapshot to restore the cluster in your self-managed environment
 
-After you transfer the snapshot file to the self-managed node, restore the cluster’s state from the snapshot in your self-managed environment.
+After you transfer the snapshot file to the self-managed node, you can restore the cluster’s state from the snapshot in your self-managed environment.
 
-Export the `CONSUL_HTTP_TOKEN` environment variable in your self-managed environment and then run the following command.
+Make sure the `CONSUL_HTTP_TOKEN` environment variable is set to the value of an ACL token in your self-managed environment. Then run the following command.
 
@@ -164,17 +162,15 @@
 $ consul members
 ```
 
-Run `consul members` on the Dedicated cluster as well. Ensure that all clients appear as `inactive` or `left`.
-
 ### Disconnect supporting resources and decommission the HCP Consul Dedicated cluster
 
 After you confirm that your client agents successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources.
If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product. ## Migrate to self-managed on Kubernetes -To migrate to a self-managed Consul Enterprise cluster on Kubernetes, [connect to the Dedicated cluster's current leader node](/hcp/docs/consul/dedicated/access) and then complete the following steps. +Complete the following steps to migrate to a self-managed Consul Enterprise cluster on Kubernetes. -### Take a snapshot of the HCP Consul Dedicated cluster +### Retrieve a snapshot of the HCP Consul Dedicated cluster A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment. @@ -197,7 +193,7 @@ $ kubectl exec -c consul-server-0 -- consul snapshot restore /home/backup/hcp-cl Restored snapshot ``` -For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore). +For more information on the snapshot command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore). ### Update the CoreDNS configuration @@ -242,9 +238,9 @@ If there are issues when you attempt to resolve the hostname, check if the names ### Update the `values.yaml` file -Update the Helm configuration or `values.yaml` file for your self-managed cluster. You should update the following fields: +Update the Helm configuration or `values.yaml` file for your self-managed cluster. You should perform the following actions: -- Update the server host value. Use the host name you added when you updated the CoreDNS configuration. +- Update the `externalServers.host` value. Use the host name you added when you updated the CoreDNS configuration. - Create a Kubernetes secret in the `consul` namespace with a new CA file created by adding the contents of all of the following CA files. Add the CA file contents of the new self managed server at the end. 
- [https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem) - [https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem) @@ -252,15 +248,15 @@ Update the Helm configuration or `values.yaml` file for your self-managed cluste - [https://letsencrypt.org/certs/2024/e6-cross.pem](https://letsencrypt.org/certs/2024/e6-cross.pem) - [https://letsencrypt.org/certs/2024/r10.pem](https://letsencrypt.org/certs/2024/r10.pem) - [https://letsencrypt.org/certs/2024/r11.pem](https://letsencrypt.org/certs/2024/r11.pem) -- Update the `tlsServerName` field to the appropriate value. It is usually the hostname of the +- Update the `externalServers.tlsServerName` field to the appropriate value. It is usually the hostname of the managed cluster. If the value is not known, TLS verification fails when you apply this configuration and the error log lists possible values. -- Set `useSystemRoots` to `false` to use the new CA certs. +- Set `externalServers.useSystemRoots` to `false` to use the new CA certs. For more information about configuring these fields, refer to the [Consul on Kubernetes Helm chart reference](/consul/docs/k8s/helm). ### Upgrade the cluster -After you update the `values.yaml` file, run the following command to update the self-managed Kubernetes cluster. +After you update the `values.yaml` file, run the `consul-k8s upgrade` command to update the self-managed Kubernetes cluster. ```shell-session $ consul-k8s upgrade -config-file=values.yaml @@ -299,20 +295,18 @@ AQUAA4ICDwAwggIKAoICAQCt6CRz9BQ385ueK1coHIe+3LffOJCMbjzmV6B493XC ### Switch the CoreDNS entry -Update the CoreDNS configuration with the self-managed server's IP address. +Update the CoreDNS configuration to use the self-managed server's IP address. 
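+
+After you update the entry, you can confirm from inside a pod that the hostname resolves to the new address. This is a sketch; it reuses the example hostname and workload pod from earlier sections and assumes the pod image includes `nslookup`.
+
+```shell-session
+$ kubectl exec product-api-8cf8c8ccc-kvkk8 -- nslookup server.hcp-managed.consul
+```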
 If the `tlsServerName` of the self-managed cluster is different from the `tlsServerName` on the Dedicated cluster, you must update the field and re-run the `consul-k8s upgrade` command. For self-managed clusters, the `tlsServerName` usually takes the form `server.<datacenter>.consul`.
 
 ### Verify that the migration was successful
 
-After you update the CoreDNS entry, check the Consul catalog to ensure that the migration was successful. You can check the Consul UI or run the following CLI command.
+After you update the CoreDNS entry, check the Consul catalog to ensure that the migration was successful. You can check the Consul UI or run the `kubectl exec` command.
 
 ```shell-session
 $ kubectl exec consul-server-0 -c consul -- consul members
 ```
 
-Run `consul members` on the Dedicated cluster as well. Ensure that all service nodes appear as `inactive` or `left`.
-
 ### Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources
 
 After you confirm that your services successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.