---
meta:
  title: Migrating ENT1 pools to POP2 in your Kubernetes cluster
  description: A step-by-step guide to transitioning from ENT1 to POP2 Instances in Scaleway's Kubernetes Kapsule clusters, ensuring minimal disruption and optimal performance.
content:
  h1: Migrating ENT1 pools to POP2 in your Kubernetes cluster
  paragraph: A step-by-step guide to transitioning from ENT1 to POP2 Instances in Scaleway's Kubernetes Kapsule clusters, ensuring minimal disruption and optimal performance.
tags: kubernetes kapsule pop2 transition
dates:
  validation: 2025-01-24
  posted: 2025-01-24
categories:
  - containers
---

Scaleway is deprecating [production-optimized **ENT1** Instances](/instances/reference-content/production-optimized/).
This guide provides a step-by-step process to migrate from **ENT1** Instances to **POP2** Instances within your Scaleway Kubernetes Kapsule clusters.

<Macro id="requirements" />

- A Scaleway account logged into the [Scaleway console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing actions in the intended Organization
- [Created](/kubernetes/how-to/create-cluster) a Kubernetes Kapsule or Kosmos cluster
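
The steps below use `kubectl` alongside the console. If `kubectl` is not yet configured for this cluster, one way to set it up is through the Scaleway CLI; the command below is a sketch that assumes the CLI is installed and authenticated, and the cluster ID is a placeholder:

```
# Merge the cluster's kubeconfig into your local ~/.kube/config
# (replace the ID with your own cluster ID; see `scw k8s kubeconfig install --help`)
scw k8s kubeconfig install 11111111-1111-1111-1111-111111111111
```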

## Identifying your ENT1 pools

1. Log in to the [Scaleway console](https://console.scaleway.com).
2. Navigate to **Kubernetes** under the **Containers** section in the side menu of the console.
3. Select the cluster containing the ENT1 pools you intend to migrate.
4. In the **Pools** tab, identify and note the pools using **ENT1** Instances.
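
You can also identify ENT1 nodes from the command line. The command below is a sketch that relies on the standard `node.kubernetes.io/instance-type` node label; if the column comes back empty on your cluster, inspect the labels actually set on your nodes with `kubectl get nodes --show-labels`:

```
# Show each node together with its Instance type (e.g. ENT1-*, POP2-*)
kubectl get nodes -L node.kubernetes.io/instance-type
```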

## Creating equivalent POP2 pools

1. For each ENT1 pool identified:
    - Click **+ Create pool** (or **Add pool**).
    - Select a **POP2** node type from the **Node Type** dropdown menu, choosing a variant whose vCPU and memory match (or exceed) those of your ENT1 nodes.
    - Configure the pool settings (e.g., Availability Zone, size, autoscaling, autoheal) to mirror the existing ENT1 pool as closely as possible.
    - Click **Create** (or **Add pool**) to initiate the new pool.

2. Monitor the status of the new POP2 nodes until they reach the **Ready** state:
    - In the **Pools** tab of the console.
    - Alternatively, use `kubectl` with the command:
      ```
      kubectl get nodes
      ```
      Ensure all POP2 nodes display a **Ready** status.
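
If you prefer to script pool creation, the Scaleway CLI can create pools as well. The command below is only a sketch: the cluster ID, pool name, node type, and sizes are illustrative placeholders, and you should check the arguments supported by your CLI version with `scw k8s pool create --help`:

```
# Create a POP2 pool mirroring the ENT1 pool's settings (all values are examples)
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 \
  name=pop2-pool node-type=POP2-8C-32G size=3 \
  autoscaling=true min-size=3 max-size=5 autohealing=true
```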

<Message type="tip">
  It is recommended to perform these steps during a maintenance window or periods of low traffic to minimize potential disruptions.
</Message>

## Moving workloads to the new pool

1. [**Cordon**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) the ENT1 nodes to prevent them from accepting new pods:
    ```
    kubectl cordon <your-ent1-node-name>
    ```

2. Drain the ENT1 nodes to reschedule workloads onto the POP2 nodes:
    ```
    kubectl drain <your-ent1-node-name> --ignore-daemonsets --delete-emptydir-data
    ```
    <Message type="note">
      The flags `--ignore-daemonsets` and `--delete-emptydir-data` may be necessary depending on your environment. Refer to the official [Kubernetes documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain) for detailed information on these options.
    </Message>

Draining evicts the pods from the ENT1 nodes so that their controllers (Deployments, StatefulSets, and so on) reschedule them onto the POP2 nodes. Confirm that the rescheduled pods are running before proceeding to delete the ENT1 pool.
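
If the ENT1 pool contains several nodes, you can cordon and drain them in a single pass. The loop below is a sketch: it assumes your nodes carry a pool label such as `k8s.scaleway.com/pool-name` (verify the exact label with `kubectl get nodes --show-labels`) and that the old pool is named `ent1-pool`:

```
# Cordon, then drain, every node belonging to the old ENT1 pool
for node in $(kubectl get nodes -l k8s.scaleway.com/pool-name=ent1-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```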

## Deleting the ENT1 pool

1. Return to your cluster’s **Pools** tab and wait a few minutes to ensure all workloads have been rescheduled onto POP2 nodes.
2. Click the **three-dot menu** next to the ENT1 pool.
3. Select **Delete pool**.
4. Confirm the deletion.
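
The pool can also be removed with the Scaleway CLI. The commands below are a sketch: the IDs are placeholders, and the exact syntax may vary between CLI versions, so confirm it with `scw k8s pool delete --help`:

```
# Find the ENT1 pool's ID, then delete the pool (IDs are placeholders)
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111
scw k8s pool delete 22222222-2222-2222-2222-222222222222
```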

## Verifying the migration

1. Run the following command to ensure no ENT1-based nodes remain:
    ```
    kubectl get nodes
    ```
    <Message type="note">
      Only **POP2** nodes should be listed.
    </Message>

2. Test your applications to confirm they are functioning correctly on the new POP2 nodes.
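
As an additional check, you can make sure the drain did not leave any pods unscheduled; an empty result from the command below means every evicted pod found a place on the POP2 nodes:

```
# List pods that are still Pending in any namespace
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```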

### Migration highlights

- **Minimal disruption:** Kubernetes manages pod eviction and rescheduling automatically. However, the level of disruption may vary based on your specific workloads and setup. It is recommended to maintain multiple replicas of your services, set up [Pod Disruption Budgets (PDBs)](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) to minimize downtime (see the example after this list), and scale up workloads prior to the migration.
- **Flexible scaling:** You can configure the same autoscaling and autoheal policies on your POP2 pools as were set on your ENT1 pools.
- **Equivalent or better performance:** In most scenarios, POP2 Instances match or surpass the performance of ENT1 Instances, with additional CPU and memory-optimized variants available.
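
As an illustration of the PDB recommendation above, the sketch below keeps at least two replicas of a hypothetical `my-app` Deployment available while nodes are drained; adapt the selector and the threshold to your own workloads:

```
# Apply a minimal PodDisruptionBudget for a hypothetical "my-app" workload
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
EOF
```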

<Message type="tip">
  If you require assistance during the transition, please [contact our Support team](https://console.scaleway.com/support/tickets).
</Message>