doc: migrate existing resource from a cluster to Karmada
Signed-off-by: chaosi-zju <chaosi@zju.edu.cn>
chaosi-zju committed Aug 15, 2023
1 parent 96cc20f commit 6404a4c
Showing 5 changed files with 394 additions and 0 deletions.
97 changes: 97 additions & 0 deletions docs/administrator/migration/migrate-in-batch.md
@@ -0,0 +1,97 @@
---
title: Migrate In Batch
---

## Scenario

Assume the user has a single Kubernetes cluster which already has many native resources installed.

The user wants to install Karmada for multi-cluster management and hopes to migrate the resources that already exist from the original cluster to Karmada.
The pods that already exist must not be affected during the migration, which means the relevant containers must not be restarted.

So, how do you migrate the existing resources?

![](../../resources/administrator/migrate-in-batch-1.jpg)

## Recommended migration strategy

If you only want to migrate individual resources, you can simply refer to [promote-legacy-workload](./promote-legacy-workload) and migrate them one by one.

If you want to migrate a batch of resources, you are advised to first take over all resources at resource granularity through a few `PropagationPolicy` resources;
then, if you have further propagation demands at application granularity, you can apply higher-priority `PropagationPolicy` resources to preempt them.

So, how do you take over all resources at resource granularity? You can do it as follows.

![](../../resources/administrator/migrate-in-batch-2.jpg)

### Step one

Since the existing resources will be taken over by Karmada, there is no longer any need to apply the related YAML config to the member cluster.
That means you can stop the corresponding operation or pipeline.

### Step two

Apply all the YAML config of the resources to the Karmada control plane, as the [ResourceTemplate](https://karmada.io/docs/core-concepts/concepts#resource-template) of Karmada.
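For example, assuming your existing manifests are collected in one file (the file name `legacy-resources.yaml` here is only an illustration) and the `karmada-apiserver` context points to the Karmada control plane, this step is a plain `kubectl apply`:

```shell
# Apply the existing manifests to the Karmada control plane; they become
# ResourceTemplates and are not scheduled to any member cluster yet.
kubectl --context karmada-apiserver apply -f legacy-resources.yaml
```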

### Step three

Edit a [PropagationPolicy](https://karmada.io/docs/core-concepts/concepts#propagation-policy) and apply it to the Karmada control plane. You should pay attention to two fields:

* `spec.conflictResolution: Overwrite`: **the value must be [Overwrite](https://github.com/karmada-io/karmada/blob/master/docs/proposals/migration/design-of-seamless-cluster-migration-scheme.md#proposal).**
* `spec.resourceSelectors`: defines which resources are selected for migration.

Here we provide two examples:

#### Example 1: migrate all Deployments

If you want to migrate all Deployments from the `member1` cluster to Karmada, apply:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: deployments-pp
spec:
  conflictResolution: Overwrite
  placement:
    clusterAffinity:
      clusterNames:
        - member1
  priority: 0
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  schedulerName: default-scheduler
```

#### Example 2: migrate all Services

If you want to migrate all Services from the `member1` cluster to Karmada, apply:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: services-pp
spec:
  conflictResolution: Overwrite
  placement:
    clusterAffinity:
      clusterNames:
        - member1
  priority: 0
  resourceSelectors:
    - apiVersion: v1
      kind: Service
  schedulerName: default-scheduler
```

### Step four

The remaining migration operations will be finished by Karmada automatically.
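If you want to double-check the result, a minimal sketch (assuming the `karmada-apiserver` and `member1` contexts used in the tutorial referenced below) is:

```shell
# Deployments on the Karmada control plane should report ready replicas,
# and the ResourceBindings' aggregated status should show the workloads as applied.
kubectl --context karmada-apiserver get deploy
kubectl --context karmada-apiserver get rb

# Pods in the member cluster keep their original AGE, i.e. they were not restarted.
kubectl --context member1 get pod
```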

## PropagationPolicy Preemption and Demo

Besides, if you have further propagation demands at application granularity, you can apply a higher-priority `PropagationPolicy`
to preempt the ones you applied in the migration described above. For a detailed demo, refer to the tutorial [Resource Migration](../../tutorials/resource-migration.md).
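As an illustration, a minimal sketch of such a preempting policy (modeled on the `nginx-pp.yaml` example from that tutorial; the policy name and the selected resources are only placeholders) could look like this:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-app-pp               # illustrative name
spec:
  conflictResolution: Overwrite
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
  priority: 10                     # higher than the resource-granularity policy above (priority 0)
  preemption: Always               # allow this policy to preempt the lower-priority one
  resourceSelectors:               # select one application's resources by name
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx-deploy
    - apiVersion: v1
      kind: Service
      name: nginx-svc
  schedulerName: default-scheduler
```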

Two binary image files (migrate-in-batch-1.jpg, migrate-in-batch-2.jpg) are also added but not displayed here.
295 changes: 295 additions & 0 deletions docs/tutorials/resource-migration.md
@@ -0,0 +1,295 @@
---
title: Resource Migration
---

## Objectives

Assuming you have a single Kubernetes cluster which already has many native resources installed, and
you want to migrate those existing resources to Karmada and then achieve multi-cluster management,
this section will guide you through:

- Migrating all the existing resources from the original cluster to Karmada at resource granularity.
- Cloning all resources related to an application from the original cluster to another member cluster.
- Expanding an application to another member cluster while keeping the total number of replicas unchanged.

## Prerequisites

### Karmada with multiple clusters has been installed

Before this guide starts, we need at least three Kubernetes clusters: one for the Karmada control plane and the other two as member (business) clusters.

For convenience, we use the [hack/local-up-karmada.sh](https://karmada.io/docs/installation/#install-karmada-for-development-environment) script to quickly prepare the above clusters.

```shell
➜ ✗ git clone https://github.com/karmada-io/karmada
➜ ✗ cd karmada
➜ ✗ hack/local-up-karmada.sh
➜ ✗ export KUBECONFIG=~/.kube/karmada.config:~/.kube/members.config
```

You will see the Karmada control plane installed along with multiple member clusters.

### Enable PropagationPolicyPreemption in karmada-controller-manager

You can execute `kubectl --context karmada-host edit deploy karmada-controller-manager -n karmada-system` to check whether the
`--feature-gates=PropagationPolicyPreemption=true` flag exists in the `spec.template.spec.containers[0].command` field.

If not, you shall add that feature-gate parameter to the field, or you can just execute `kubectl replace` as below:

```shell
➜ ✗ kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | sed '/- --failover-eviction-timeout=30s/{n;s/- --v=4/- --feature-gates=PropagationPolicyPreemption=true\n &/g}' | kubectl --context karmada-host replace -f -
```
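To confirm that the flag is now present (an optional quick check), you can inspect the Deployment spec:

```shell
# The output should contain --feature-gates=PropagationPolicyPreemption=true
kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | grep feature-gates
```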

### Preset resource in a member cluster

To simulate resources that already exist in the member cluster, we apply two Deployments and two Services to the `member1` cluster using [/tmp/resources.yaml](#resourcesyaml):

```shell
➜ ✗ kubectl --context member1 apply -f /tmp/resources.yaml
deployment.apps/nginx-deploy created
service/nginx-svc created
deployment.apps/hello-deploy created
service/hello-svc created
```

Thus, we can use `member1` as the cluster with existing resources and `member2` as a bare cluster.
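If you like, you can verify this starting point (the expected results are assumptions about a fresh environment):

```shell
# member1 already runs the two applications applied above
kubectl --context member1 get deploy,svc
# member2 should have no workloads in the default namespace yet
kubectl --context member2 get deploy
```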

## Tutorials

### Tutorial 1: migrate all the resources to Karmada

1) Apply [/tmp/resources.yaml](#resourcesyaml) to the Karmada control plane too.

```shell
➜ ✗ kubectl --context karmada-apiserver apply -f /tmp/resources.yaml
deployment.apps/nginx-deploy created
service/nginx-svc created
deployment.apps/hello-deploy created
service/hello-svc created
```

2) Apply [/tmp/migrate-pp.yaml](#migrate-ppyaml) to the Karmada control plane. You should pay attention to two fields:

* `spec.conflictResolution: Overwrite`: the value must be [Overwrite](https://github.com/karmada-io/karmada/blob/master/docs/proposals/migration/design-of-seamless-cluster-migration-scheme.md#proposal).
* `spec.resourceSelectors`: defines which resources are selected for migration; you can define your own custom [ResourceSelector](https://karmada.io/docs/userguide/scheduling/override-policy/#resource-selector).

```shell
➜ ✗ kubectl --context karmada-apiserver apply -f /tmp/migrate-pp.yaml
propagationpolicy.policy.karmada.io/migrate-pp created
```

Now you have finished the migration. Isn't it easy? You can verify it as follows:

```shell
➜ ✗ kubectl --context karmada-apiserver get deploy
➜ ✗ kubectl --context karmada-apiserver get rb
```

You can see that the Deployments in Karmada are all ready and that the `aggregatedStatus` of the `ResourceBinding` is applied.
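To inspect one binding in more detail (a sketch: Karmada names a `ResourceBinding` after the resource name plus its lowercased kind, so the binding for `nginx-deploy` is assumed to be `nginx-deploy-deployment`):

```shell
# Print the per-cluster status aggregated by Karmada for this workload
kubectl --context karmada-apiserver get rb nginx-deploy-deployment -o jsonpath='{.status.aggregatedStatus}'
```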

### Tutorial 2: clone application from member1 to member2 cluster

Apply [/tmp/nginx-pp.yaml](#nginx-ppyaml) to Karmada control plane.

```shell
➜ ✗ kubectl --context karmada-apiserver apply -f /tmp/nginx-pp.yaml
propagationpolicy.policy.karmada.io/nginx-pp created
```

Then you will find that all resources related to the `nginx` application have been copied to the `member2` cluster:

```shell
➜ ✗ kubectl --context member2 get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deploy 2/2 2 2 5m24s nginx nginx:latest app=nginx
➜ ✗ kubectl --context member2 get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-svc NodePort 10.13.161.255 <none> 80:30000/TCP 54s app=nginx
...
```

### Tutorial 3: expand the application to member2 while keeping the total number of replicas unchanged

Apply [/tmp/hello-pp.yaml](#hello-ppyaml) to Karmada control plane.

```shell
➜ ✗ kubectl --context karmada-apiserver apply -f /tmp/hello-pp.yaml
propagationpolicy.policy.karmada.io/hello-pp created
```

Then you will find that the resources related to the `hello` application have been expanded to the `member2` cluster:

```shell
➜ ✗ kubectl --context member1 get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
hello-deploy 1/1 1 1 5m51s nginx nginx:latest app=hello
...
➜ ✗ kubectl --context member2 get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
hello-deploy 1/1 1 1 2m28s nginx nginx:latest app=hello
...
➜ ✗ kubectl --context member2 get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hello-svc NodePort 10.13.210.81 <none> 8080:30080/TCP 2m51s app=hello
...
```

You can see that the total number of replicas of `hello-deploy` across all clusters sums to 2, as defined in the original Deployment.

Besides, from the `AGE` of the corresponding `Pods`, you will find that the existing Pod did not restart.
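A quick way to check this (the `app=hello` label comes from the Deployment template in [resources.yaml](#resourcesyaml)):

```shell
# The pod that was already running in member1 keeps its original AGE,
# while the pod newly scheduled to member2 is much younger.
kubectl --context member1 get pod -l app=hello
kubectl --context member2 get pod -l app=hello
```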

## Appendix

### resources.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 2
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
      targetPort: 8080
```

### migrate-pp.yaml

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: migrate-pp
spec:
  conflictResolution: Overwrite
  placement:
    clusterAffinity:
      clusterNames:
        - member1
  priority: 0
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
    - apiVersion: v1
      kind: Service
  schedulerName: default-scheduler
```

### nginx-pp.yaml

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-pp
spec:
  conflictResolution: Overwrite
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2          ## focus on this line
  priority: 10
  preemption: Always       ## focus on this line
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx-deploy
    - apiVersion: v1
      kind: Service
      name: nginx-svc
  schedulerName: default-scheduler
```

### hello-pp.yaml

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: hello-pp
spec:
  conflictResolution: Overwrite
  placement:
    replicaScheduling:                 ## focus on these lines
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
                - member2
            weight: 1
    clusterAffinity:
      clusterNames:
        - member1
        - member2
  priority: 10
  preemption: Always                   ## focus on this line
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: hello-deploy
    - apiVersion: v1
      kind: Service
      name: hello-svc
  schedulerName: default-scheduler
```