
Customer Success Data Stream + ELK + Locust Workshop

About

Package of template files, examples, and illustrations for the Customer Success Workshop Exercise.

Important

Starting today, Akamai teams creating new internal workloads - whether for production or test purposes - must do so ONLY on New Core Compute Regions and not on Legacy Compute Regions.

New Compute Regions:

Location Code
Amsterdam, NL nl-ams
Chennai, IN in-maa
Chicago, IL us-ord
Jakarta, ID id-cgk
Los Angeles, CA us-lax
Madrid (coming soon) -
Miami, FL us-mia
Milan, IT it-mil
Osaka, JP jp-osa
Paris, FR fr-par
Sao Paulo, BR br-gru
Seattle, WA us-sea
Stockholm, SE se-sto
Washington, DC us-iad

Contents

Template Files

  • Sample Terraform files for deploying an LKE cluster on Linode.
  • Sample kubernetes deployment files for starting an application on an LKE cluster.

Exercise Diagram

image

Step by Step Instructions

Overview

The scenario approximates deploying an application for failover, or another situation where it is necessary to serve from an alternate origin, together with the tooling to provide testing and observability - all powered by Akamai Connected Cloud services.

The workshop scenario builds the following components and steps-

  1. A Secure Shell Linode (provisioned via the Linode Cloud Manager GUI) to serve as the command console for the environment setup.

  2. Installing developer tools on the Secure Shell (git, terraform, and kubectl) for use in environment setup.

  3. A Linode Kubernetes Engine (LKE) Cluster for Locust provisioned via terraform.

  4. Deploying Locust (locust.io) in the Linode LKE cluster.

  5. Building a static site on Linode Object Storage.

  6. Building an ELK stack on Linode.

  7. Ion and AAP for delivery and security of the sample site.

  8. DS2 feed for sample site sent to the ELK Stack.

  9. Running a load test via locust, and viewing the results in Kibana from the DS2 data.

WORKSHOP PRE-WORK: Ion and AAP for static site - initial setup

Complete before the first live workshop session

Follow the instructions here: https://docs.google.com/document/d/1ipqWLLPjv5LX_cuPnwZ2AjBw9EMeNH5Anq_1f9cZgAI

DURING THE WORKSHOP:

Build a Secure Shell Linode

image

The first step is to create a Linode using the "Secure Your Server" Marketplace image. This will give us a hardened, consistent environment to run our subsequent commands from.

  1. Create a Linode account
  2. Log in to Linode Cloud Manager
  3. Select "Create Linode"
  4. Select "Marketplace"
  5. Click the "Secure Your Server" Marketplace image.
  6. Scroll down and complete the following steps:
  • Limited sudo user
  • Sudo password
  • SSH key
  • No Advanced options are required
  7. Select the Debian 11 image type for "Select an Image".
  8. Select a Region.
  9. Select the Shared CPU 1GB "Nanode" plan.
  10. Enter a root password.
  11. Click Create Linode.
  12. Once your Linode is running, log in to its shell (either using the web-based LISH console from Linode Cloud Manager, or via your SSH client of choice).

Install and Run git

The next step is to install git and pull this repository to the Secure Shell Linode. The repository includes terraform and kubernetes configuration files that we'll need for subsequent steps.

  1. Install git via the SSH or LISH shell-
sudo apt-get install git
  2. Pull down this repository to the Linode machine-
git init && git pull https://github.com/akamai/customersuccess-compute-workshop
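
If you want a quick sanity check that the pull worked, list the working directory. The file names noted in the comment are the ones referenced later in this workshop; exact repository contents may vary.

ls
# Expect the terraform files (including terraform.tfvars) plus the files used
# later: loadbalancer.yaml, scripts-cm.yaml, master-deployment.yaml,
# service.yaml, worker-deployment.yaml, and index.html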

Install Terraform

Next step is to install Terraform. Run the below commands from the Linode shell-

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

(Note: without the trailing > /dev/null, tee echoes the binary keyring to the terminal, which looks like garbage but is harmless - the key is still written. If your terminal display ends up garbled, run the reset command to restore it.)

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt-get install terraform
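
To confirm the install succeeded, print the Terraform version-

terraform -version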

Provision LKE Cluster using Terraform

image

Next, we build an LKE cluster using the terraform files included in this repository, which were pulled onto the Secure Shell Linode by the earlier git command.

  1. From the Linode Cloud Manager, create an API token and copy its value (NOTE- the token should have full read-write access to all Linode components in order to work properly with terraform).
  • Click on your user name at the top right of the screen
  • Select API Tokens
  • Click Create a Personal Access Token
  • Be sure to copy and save the token value
  2. From the Linode shell, set the TF_VAR_token env variable to the API token value. This will allow terraform to use the Linode API for infrastructure provisioning.

Note

In any of the following commands where you see <value>, do not include the < > brackets - only include the value to be used in your case!

export TF_VAR_token=<api token value>

For Example: export TF_VAR_token=3459fg879833d4jh35gh43455345

  3. Initialize the Linode terraform provider-
terraform init
  4. Next, we'll use the supplied terraform files to provision the LKE cluster. First, run the "terraform plan" command to view the plan prior to deployment-
terraform plan \
 -var-file="terraform.tfvars"
  5. Run "terraform apply" to deploy the plan to Linode and build your LKE cluster-
terraform apply \
-var-file="terraform.tfvars"

Once deployment is complete, you should see 1 LKE cluster within the "Kubernetes" section of your Linode Cloud Manager account.
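
A quick way to confirm the apply finished from the shell before moving on (the kubeconfig.yaml file name matches what the supplied terraform files generate, as used in the next section)-

terraform state list   # should include the LKE cluster resource
ls -l kubeconfig.yaml  # the kubeconfig generated for the new cluster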

Note

Sometimes it is necessary to upgrade Kubernetes on your LKE cluster. The easiest method for doing this is via the Cloud Manager UI.

  1. Navigate to the Kubernetes page in the Cloud Manager to see a list of all LKE clusters on your account.
  2. Locate the cluster you wish to upgrade and click the corresponding Upgrade button in the Version column. This button only appears if there is an available upgrade for that cluster. image
  3. A confirmation popup should appear notifying you of the current and target Kubernetes version. Click the Upgrade Version button to continue with the upgrade. image
  4. The next step is to upgrade all worker nodes in the cluster so that they use the newer Kubernetes version. A second popup should automatically appear requesting that you start the recycle process. Each worker node is recycled on a rolling basis so that only a single node is down at any time. Only click the Recycle All Nodes button if you do not care about performance impact to your application. image

Deploy Locust.io to LKE

lke-locust-cluster

Locust.io is a powerful, open-source, distributed load testing package. Combined with a Kubernetes platform such as LKE and a multi-region compute network such as Akamai/Linode, it can be a very effective way to build a low-cost, low-effort, scaled, distributed testing network for load and performance testing across almost any client protocol.

NOTE- The script below builds a very small locust.io testing network (single region, 2 workers). Please keep to that scale during the exercise, as it's likely that 100s of colleagues might be doing the same, which could create more load than desired on the Object Storage origin. More importantly, this is a good time to test and ensure that your Akamai configuration is set to aggressively cache /index.html :-).

The first step is to use kubectl to deploy the Locust service to the LKE cluster.

  1. Install kubectl via the below commands from the Linode shell-
sudo apt-get update && sudo apt-get install -y ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubectl
  2. Set the generated kubeconfig.yaml file as the KUBECONFIG env variable- this will tell kubectl to use the configuration for the new cluster.
export KUBECONFIG=kubeconfig.yaml
  3. Deploy the locust application to the LKE cluster-
kubectl create -f loadbalancer.yaml -f scripts-cm.yaml -f master-deployment.yaml -f service.yaml -f worker-deployment.yaml
  4. Validate that the service is running, and obtain its external IP address.
kubectl get services -A

This command output should show a locust-service deployment, with an external (Internet-routable, non-RFC1918) IP address. Make note of this external IP address as it represents the ingress point to the locust UI.
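
If you'd like to capture that external IP in a shell variable for later steps, a minimal sketch (assuming the service is named locust-service, per the output above)-

# Adjust the service name and namespace (-n) to match the NAMESPACE and NAME
# columns shown in the kubectl get services output
export LOCUST_IP=$(kubectl get service locust-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Locust UI: http://${LOCUST_IP}:8089/"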

Deploying a Simple HTML Origin via Linode Object Storage

image

These next steps are a quick walkthrough on hosting static content via Linode Object Storage. This is a common use case for customers, and a good low-cost alternative for situations where NetStorage is more capability than is needed.

Note

It is highly recommended to implement authentication between Object Storage and Akamai when implementing for customers. Methods like certificate pinning are good, token authentication is even better, and Linode Object Storage offers s3 commands and settings for other, more advanced security measures.

  1. Login to the Linode Cloud Manager- Navigate to "Object Storage" from the left hand menu. Click on "Access Keys" at the top of the page. Select "Create Access Keys."

image

  2. Give the access key a label, select "Create Access Key," and copy the access key and secret key when they are shown. Keep the copied values safe until the next step is complete, as they can't be shown again (simply delete the key and start over with a new one if the values are lost).

  3. Log in to your Linode VM shell, and install the s3cmd command.

sudo apt-get install s3cmd
  4. Configure s3cmd with the s3cmd --configure command. Use these values when prompted-

Note

s3cmd has some default configuration values which reference s3.amazonaws.com. Linode object storage is s3 compatible, which is what allows us to use this utility. Please ensure you are using your linode bucket information, and that you do not accept these amazonaws defaults.

  • Access Key and Secret Key - use the keys that you copied from step 2 above.
  • Default Region - keep this at "US," even if using a different object storage region.
  • S3 endpoint - enter the region ID in which you want to manage Object Storage. A list of region IDs can be found here - https://www.linode.com/docs/products/storage/object-storage/guides/urls/#cluster-url-s3-endpoint. For example, for Chicago object storage, the value would be us-ord-1.linodeobjects.com.
  • DNS-style bucket+hostname:port - enter a value in the convention of %(bucket)s.<S3 endpoint>. Yes, the parentheses should be left as-is, but as noted above, do not include the < >.

For example, the value for Chicago would be %(bucket)s.us-ord-1.linodeobjects.com.

  • Encryption password, Path to GPG Program, HTTPS, and Proxy can all be left as default.

When prompted, Select "N" (No) for "Test Access", and "Y" (yes) to "Save Settings."

The s3cmd utility is now configured, and we can provision an Object Storage bucket.
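
Before creating anything, you can confirm the configuration works with a simple listing (an empty result is normal if the account has no buckets yet)-

s3cmd ls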

  5. Create an Object Storage bucket via the s3cmd mb s3://<bucket name> command. Enter a unique value for the bucket name, as it must be unique across the entire Linode region.

  6. Upload the index.html file from the repository via the s3cmd put index.html s3://<bucket name> -P command. If successful, the command will return the URL for the index.html file via the Object Storage bucket. Note that the file is accessible via HTTPS as well. This can be used as the Origin value for an Akamai content delivery property.
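
Putting steps 5 and 6 together, a minimal sketch using a hypothetical bucket name (substitute your own unique name)-

s3cmd mb s3://cs-workshop-example-bucket
s3cmd put index.html s3://cs-workshop-example-bucket -P
# The put output includes the public URL, which becomes the origin hostname
# for the Akamai delivery property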

Building and Installing the ELK Stack

image

Follow Okamoto-San's tutorial on deploying an ELK stack on Linode - https://collaborate.akamai.com/confluence/pages/viewpage.action?spaceKey=~hokamoto&title=Visualizing+DataStream+2+logs+with+Elasticsearch+and+Kibana+running+on+Linode.

Provisioning Ion Delivery and AAP Static Site, Enabling DS2

In the Pre-work, you have already done most of the Ion and AAP setup for the static site. A few items remain: the placeholder origin setting needs to be updated, caching of HTML should be added, the DS2 stream needs to be provisioned, and a behavior needs to be added to the delivery property so DS2 begins sending log data to the ELK stack.

  1. First, provision the DS2 stream. Follow the "Configure DataStream 2" steps in Okamoto-San's tutorial: https://collaborate.akamai.com/confluence/pages/viewpage.action?spaceKey=~hokamoto&title=Visualizing+DataStream+2+logs+with+Elasticsearch+and+Kibana+running+on+Linode#VisualizingDataStream2logswithElasticsearchandKibanarunningonLinode-ConfigureDataStream2 - select the same Contract and Group as your delivery property.

  2. Create and edit a new version of your delivery property- update the placeholder origin, add caching for HTML, and add the DataStream behavior pointing at your new stream (a quick spot check is shown after this list).

  3. Log into Kibana and confirm you see some DS2 data.
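
Once the new property version is active, a quick spot check from the Secure Shell Linode that the Akamaized site is serving the Object Storage index page (replace the hostname with your own Akamaized hostname; which cache-related response headers appear depends on your configuration)-

curl -sI https://<your akamaized hostname>/index.html
# Look for a 200 response and cache-related headers consistent with your
# property's caching rules for HTML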

Running a Load Test via locust.io

The included configmap deployment file (scripts-cm.yaml) controls the Python load test script that Locust executes. We will need to update this script with our test website URL.

  1. Open the scripts-cm.yaml file via a shell text editor -
vi scripts-cm.yaml
  2. Within the scripts-cm.yaml file, replace the "example.com" host header with the Akamaized hostname created for the sample website (a scripted alternative is shown after the note below).
  3. Load the new configmap into the cluster- this will load the updated script into Locust-
kubectl apply -f scripts-cm.yaml

NOTE- applying a new configmap will require a restart of locust to reload the new config. To do this, first scale down the replicas of the locust-master service via the command kubectl scale deployment locust-master --replicas=0 followed by kubectl scale deployment locust-master --replicas=1.
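
As a scripted alternative to editing the file by hand, a minimal sketch that performs the replacement, reloads the configmap, and restarts the Locust master (assuming the placeholder host in scripts-cm.yaml is the literal string example.com; the hostname shown is hypothetical)-

sed -i 's/example\.com/www.your-akamaized-site.example/g' scripts-cm.yaml
kubectl apply -f scripts-cm.yaml
kubectl scale deployment locust-master --replicas=0
kubectl scale deployment locust-master --replicas=1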

  4. Navigate to the Locust UI- this is found at http://<service IP>:8089/, where <service IP> is the external IP of the LoadBalancer recorded earlier from kubectl get svc -A. From the main screen, enter one (1) user, one (1) spawn rate, and the DNS name of the target website, and click "Start Swarming."

image

NOTE- Please keep the user count and spawn rate at one (1) for purposes of this workshop; this will keep the load test traffic volume hitting the edge region within acceptable limits.
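
If the UI does not load, a quick reachability check from the Secure Shell Linode (substituting the LoadBalancer external IP recorded earlier)-

curl -sI http://<service IP>:8089/
# An HTTP 200 response indicates the Locust master web UI is reachable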

  5. Once the test is running, you can navigate to the different tabs within the Locust UI (http://<service IP>:8089/) to see statistics for the test, and export the dataset if needed.

image

  6. Click "Stop" when you are finished; otherwise the test will run indefinitely.

Review Load Test in ELK

Log into Kibana and review the DS2 data for the traffic generated during the sample load test.
