OpenShift Examples - Autoscaling

OpenShift can manually or automatically scale application pods up and down based on container metrics such as cpu and memory consumption. This git repo contains an intentionally simple example of automatically scaling (autoscaling) a webapp frontend with OpenShift.
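For a quick contrast with the autoscaling this demo sets up, manual scaling is a single command. A minimal sketch (dc/webapp is the deployment config name used by this example's template):

```sh
# Manually pin the web frontend to 3 replicas (no autoscaler involved)
oc scale dc/webapp --replicas=3

# Confirm the replica count
oc get pods
```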

Here's what it looks like:

Screenshot

ℹ️ This example works in OpenShift Container Platform version 3.7+. It could work with other versions but has not been tested.

How to run this?

First off, you need access to an OpenShift cluster. Don't have an OpenShift cluster? That's OK, download the CDK for free here: https://developers.redhat.com/products/cdk/overview/. Second, you need to have metrics enabled on your cluster.

There is a template for creating all the components of this example. Use the oc CLI tool:

oc new-project autoscaledemo

oc new-app -f https://raw.githubusercontent.com/dudash/openshiftexamples-autoscaling/master/autoscale_instant_template.yaml
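Once the template is instantiated, you can watch the build and deployment come up and grab the frontend's URL. A quick check (the route name comes from the template, so just list them all):

```sh
# Watch the build and deployment pods come up (Ctrl+C to stop watching)
oc get pods -w

# List the routes to find the web frontend's hostname
oc get routes
```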

If you don't like the CLI, another option is to create a project and import the template via the web console:

Create a new project, select Import YAML/JSON, upload the raw autoscale_instant_template.yaml file from this repo, and make sure autoscaledemo is set as the Namespace parameter.

Now to showcase the autoscaling, let's simulate a large user load on the frontend webapp using Apache Benchmark. If you have ab installed, just run it against the frontend URL. Or you can use OpenShift to pull a container image containing ab and run it as a self-terminating pod like this:

oc run web-load --rm --attach --image=jordi/ab -- -n 50000 -c 10 http://URL_GOES_HERE/
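While ab is hammering the frontend (swap URL_GOES_HERE for the route hostname from `oc get routes`), you can watch the horizontal pod autoscaler react and extra pods get created. A minimal sketch:

```sh
# Watch current CPU utilization vs. the autoscaler's target
oc get hpa -w

# In another terminal, watch replicas get added as the load rises
oc get pods -w
```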

Why autoscale?

Two of the biggest reasons to leverage this capability in OpenShift:

  1. Provide better uptime for your apps when user demand spikes
  2. Reduce unnecessary compute: scale down during periods of inactivity and back up when needed

How does this work and how can I configure autoscaling?

To define autoscaling for an app, we first define how much CPU and memory an instance of the app should consume (both a minimum request and a maximum limit). This becomes the guideline OpenShift uses to decide when to scale the number of pods up or down. These details are placed into what OpenShift calls a "deployment config" (you can read more about that here).
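To see the values this example uses, you can inspect the deployment config and any autoscaler in the project (assuming the object names created by this example's template; if no autoscaler exists yet, the `oc autoscale` command below creates one):

```sh
# Show the CPU/memory requests and limits on the web frontend's deployment config
oc describe dc/webapp

# List any horizontal pod autoscalers (min/max replicas and target CPU utilization)
oc get hpa
```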

If you want to tweak a few things in this example, the following are the most common asks:

Change the min/max number of replicas of the web frontend and the target CPU utilization (as a percentage of the requested CPU):

oc autoscale dc/webapp --min 1 --max 5 --cpu-percent=60
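This creates a HorizontalPodAutoscaler object (named after the deployment config by default); you can confirm the new min/max and CPU target took effect. A quick check, assuming the autoscaler is named webapp:

```sh
# Show the autoscaler's settings and current utilization
oc get hpa webapp

# Show recent scaling events and conditions
oc describe hpa webapp
```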

Change the request and limit values for the web frontend deployment:

oc set resources dc/webapp --requests=cpu=200m,memory=256Mi --limits=cpu=400m,memory=512Mi
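If the deployment config has the usual config-change trigger, changing requests/limits kicks off a new rollout; you can follow it and confirm the values landed. A quick check:

```sh
# Follow the redeployment triggered by the resource change
oc rollout status dc/webapp

# Confirm the updated requests and limits
oc describe dc/webapp
```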

You can read more about the details of compute resource requests and limits here. You can read about how Quality of Service (QoS) can be leveraged here. And read about how to further configure autoscaling here.

About the code / software architecture

The parts in action here are:

  • Simple web front end to collect user input and push to a database or backend API layer
  • Backend service, a MongoDB database
  • Instant app template YAML file (to create/configure everything easily)
  • Key platform components that enable this example
    • container building (source to image)
    • container CPU / memory monitoring
    • container replication and service layer
    • software defined networking and routing layer

License

Under the terms of the MIT license.
