
Application Portability

On occasion it may be required to move an application off of a cluster, or to ensure that no traffic is routed to that cluster. To do this we will modify the replica values within the deployment overlays that were used when we created the pacman application in the previous lab.
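For reference, each cluster overlay contains a small patch file that overrides the replica count for that cluster. Below is a minimal sketch of what overlays/cluster1/pacman-deployment.yaml may look like; the exact contents in your repository may differ slightly:

# Print the cluster1 overlay patch (sketch of expected content below)
cat overlays/cluster1/pacman-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman
spec:
  replicas: 1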

Before we begin, we will validate that every cluster is running the pacman application.

# The for loop will check for the Deployment objects in each cluster
for cluster in cluster1 cluster2 cluster3; do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman; done

*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           6m21s
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           6m21s
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           6m21s

We would first like to ensure that cluster1 does not run any replicas at all. To do this we will modify the overlay file for the pacman deployment, using the directory from the previous labs to make the modifications.

# Change to the directory used in the previous lab
cd ~/federation-dev/labs/lab-7-assets
# Set the replica count to 0 in the cluster1 overlay
sed -i 's/replicas: 1/replicas: 0/g' overlays/cluster1/pacman-deployment.yaml
# Stage and commit your changes
git commit -am 'Cluster1 replicas scaled to 0'
# Push your commits to the git repository
git push origin master
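Before relying on Argo CD to pick up the change, you can confirm the new value directly in the overlay file. A quick check, assuming the file layout from the previous lab:

# Confirm the replica count is now 0 in the cluster1 overlay
grep 'replicas:' overlays/cluster1/pacman-deployment.yaml

  replicas: 0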

Since automatic sync is enabled for our application, the Argo CD controller will update the deployment on its own; we can also trigger the sync manually and check the application status:

# Use the `argocd` binary to sync the application
argocd app sync cluster1-pacman
# The helper command `wait-for-argo-app` will wait until the Application is healthy in Argo CD
wait-for-argo-app cluster1-pacman
# Get the status of the application
argocd app get cluster1-pacman
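If the wait-for-argo-app helper is not available in your environment, the argocd CLI itself can block until the application reports healthy; a rough equivalent:

# Wait up to 5 minutes for the application to be synced and healthy
argocd app wait cluster1-pacman --sync --health --timeout 300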

Now we should see no replicas running on cluster1:

# The for loop will output the `deployment` objects in the three clusters
for cluster in cluster1 cluster2 cluster3; do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman; done

TIP NOTE: You can now go ahead and play Pacman; requests to the application are load balanced across the two remaining clusters.
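One way to verify this from the command line is to send a few requests through the load balancer and confirm they all succeed even with cluster1 scaled down. A sketch, assuming the HAProxy URL for the pacman application is stored in the PACMAN_URL variable (the variable name is hypothetical; use the route from the previous lab):

# Every request should return HTTP 200 even though cluster1 runs no pods
for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" $PACMAN_URL; done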

We can also scale down cluster2, leaving cluster3 as the only cluster running the game.

# Set the replica count to 0 in the cluster2 overlay
sed -i 's/replicas: 1/replicas: 0/g' overlays/cluster2/pacman-deployment.yaml
# Stage and commit your changes
git commit -am 'Cluster2 replicas scaled to 0'
# Push your commits to the git repository
git push origin master

Again, the automatic sync will take care of updating the deployment.

# Use the `argocd` binary to sync the application
argocd app sync cluster2-pacman
# The helper command `wait-for-argo-app` will wait until the Application is healthy in Argo CD
wait-for-argo-app cluster2-pacman
# Get the status of the application
argocd app get cluster2-pacman
# The for loop will output the `deployment` objects in the three clusters
for cluster in cluster1 cluster2 cluster3; do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman; done

*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           7m51s
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   0/0     0            0           7m32s
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           7m31s

TIP NOTE: You can now go ahead and play Pacman; all application requests are now sent to the one remaining cluster.

To restore the application on all clusters, set the replica count back to 1 for both cluster1 and cluster2.

# Set the replica count back to 1 in the cluster1 overlay
sed -i 's/replicas: 0/replicas: 1/g' overlays/cluster1/pacman-deployment.yaml
# Set the replica count back to 1 in the cluster2 overlay
sed -i 's/replicas: 0/replicas: 1/g' overlays/cluster2/pacman-deployment.yaml
# Stage and commit your changes
git commit -am 'Scale back cluster1 and cluster2 to 1'
# Push your commits to the git repository
git push origin master

In a few seconds the deployments will be synced and pacman pods will be running in all three clusters.

# Use the `argocd` binary to sync the application
argocd app sync cluster1-pacman
# The helper command `wait-for-argo-app` will wait until the Application is healthy in Argo CD
wait-for-argo-app cluster1-pacman
# Get the status of the application
argocd app get cluster1-pacman
# Use the `argocd` binary to sync the application
argocd app sync cluster2-pacman
# The helper command `wait-for-argo-app` will wait until the Application is healthy in Argo CD
wait-for-argo-app cluster2-pacman
# Get the status of the application
argocd app get cluster2-pacman
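The rollout may take a few moments. Rather than re-running the verification command by hand, a simple loop (a sketch using the same contexts as above) can watch the deployments until all replicas are back; press Ctrl+C to stop it:

# Re-check the deployments in all three clusters every five seconds
while true; do
  for cluster in cluster1 cluster2 cluster3; do
    echo "*** $cluster ***"
    oc get deployment --context $cluster -n pacman
  done
  sleep 5
done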

Verify the deployments have the required replicas again in all three clusters.

# The for loop will output the `deployment` objects in the three clusters
for cluster in cluster1 cluster2 cluster3; do echo "*** $cluster ***"; oc get deployment --context $cluster -n pacman; done

*** cluster1 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           24s
*** cluster2 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           9m24s
*** cluster3 ***
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pacman   1/1     1            1           25s

The most important thing to note while changing which clusters run the pacman application is that the scores persist regardless of which cluster is running the application, and HAProxy always ensures the application remains available.

Next Lab: Lab 9 - Canary Deployments
Previous Lab: Lab 7 - Deploying Pacman
Home