# Cluster of 2 x Pulse vTMs for deploying into an existing VPC
Please treat this template as unofficial and unsupported inspiration for your own production-ready version. GitHub issues opened against it will be addressed as best effort.
## What does this template do
Given the ID of an existing VPC, its CIDR block, and the Subnet IDs of two of its public subnets, this template will deploy a pair of Pulse vTMs using an Auto Scaling Group. It is highly recommended (but not required) that the two subnets belong to different Availability Zones (AZs), to ensure vTM cluster redundancy.
vTMs will be automatically clustered together and ready to accept configuration through either the REST API (e.g., using the Terraform Provider for vTM) or the Web UI.
The size of the Auto Scaling Group can be adjusted after the deploy, or by specifying a different value for the `vTMQty` parameter during the deploy; either will adjust the number of vTMs in the cluster accordingly.
Note: if you're using Traffic IP Groups (TIP Groups), you will need to adjust your TIP Group configuration every time cluster membership changes. If you are using the Terraform Provider for vTM, you can use the `vtm_traffic_manager_list` data source to retrieve the current list of vTMs in a cluster and update the TIP Group configuration accordingly.
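If you script that reconciliation yourself instead, it boils down to something like the following sketch (the function name and hostnames are illustrative, not part of the template):

```python
def updated_tip_machines(current_machines, cluster_members):
    """Return the new 'machines' list for a Traffic IP Group:
    drop machines that left the cluster, append newly joined members."""
    kept = [m for m in current_machines if m in cluster_members]
    new = [m for m in cluster_members if m not in current_machines]
    return kept + new

# One member left ("vtm-b") and one joined ("vtm-d"):
machines = updated_tip_machines(
    ["vtm-a", "vtm-b", "vtm-c"],
    ["vtm-a", "vtm-c", "vtm-d"],
)
# machines == ["vtm-a", "vtm-c", "vtm-d"]
```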
## AWS Marketplace subscription
To deploy this template successfully, the AWS account used must have an existing subscription to the Pulse Secure Virtual Traffic Manager Developer & BYOL Edition. Open the link and look for the "Continue to Subscribe" button at the top right. You do not need to launch a vTM from the Marketplace; you only need to accept the Terms and Conditions.
Please see the AWS Marketplace FAQ for more detail.
This template assumes that you have an existing VPC with at least two public subnets. vTM instances require access to the Internet to download additional components, e.g., the automatic cluster management scripts, and the `jq` and AWS CLI tools that these scripts use.
You will also need to supply the CIDR block used by the VPC you've selected (unfortunately, CloudFormation won't let you simply query it from the VPC ID). This value is used in the vTMs' Security Group rules that allow the vTM clustering components to talk to each other.
## Permissions to create an IAM Role
The AWS account used to deploy this template must have permission to create IAM Roles and Policies. These are needed for the IAM Role attached to the vTM instances, which allows them to manage their Traffic IPs, implement built-in pool node autoscaling, and perform automatic vTM cluster management without storing AWS credentials and secrets.
## Registration with Pulse Services Director
If you are using Pulse Services Director (SD) to supply vTMs with licenses, SD must be able to reach your vTMs on their primary private IP address.
## Parameters

| Parameter | Description |
|-----------|-------------|
| `VPC` | ID of the VPC to deploy into |
| `VPCCIDR` | CIDR block associated with the VPC above |
| `PublicSubnet1` | Public subnet from the VPC above for the first vTM instance |
| `PublicSubnet2` | Public subnet from the VPC above for the second vTM instance |
If the vTM Auto Scaling Group is adjusted later to deploy more than 2 vTM instances, any additional instances will be placed across the same two public subnets, in a round-robin fashion.
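The round-robin placement amounts to cycling through the subnet list; a minimal sketch (the placement itself is handled by the Auto Scaling Group, not by this code):

```python
def subnet_for_instance(instance_index, subnets):
    """Pick a subnet for the Nth instance by cycling through the list."""
    return subnets[instance_index % len(subnets)]

subnets = ["PublicSubnet1", "PublicSubnet2"]
# Instances 0 and 2 land in PublicSubnet1; instances 1 and 3 in PublicSubnet2.
placement = [subnet_for_instance(i, subnets) for i in range(4)]
```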
### vTM Deployment Configuration
| Parameter | Description |
|-----------|-------------|
| `vTMVers` | De-dotted version of vTM to deploy |
| `InstanceType` | AWS EC2 instance type to use for vTM instances |
| `vTMQty` | Number of vTM instances to deploy into a cluster |
| `WaitFor` | Number of vTM instances to wait for in the deploy-time WaitHandler |
| `KeyName` | Name of the SSH Key Pair to use for vTM instances; used for SSH access to the vTMs |
| `AdminPass` | Password for the vTM `admin` user |
| `vTMUserData` | A string of configuration key/value pairs passed to the vTM instances through EC2 user data |
| `EnvSGs` | Comma-delimited (no spaces) list of AWS Security Group IDs (SG IDs) that will be attached to the vTM instances in addition to their own Security Group |
| `RemoteAccessCIDR` | CIDR notation of an IPv4 subnet or a host that will have access to the vTM cluster's SSH and administrative interfaces |

Notes:

- Please make sure that the instance type you select is available in the AWS region you're deploying into; for example, m4.* instances are not available in newer regions.
- The WaitHandler only runs during the initial stack creation. This means you can't update the `WaitFor` parameter later to make an updated stack wait again.
- SD Cloud Registration user data usually contains a set of such configuration keys; see "Integration with Pulse Services Director" below.
- `EnvSGs` is typically required to allow the vTMs to access backend servers, where access to those servers is controlled by their own Security Group (SG) with entries that refer to the same SG. For example, network access to a group of backend EC2 instances could be controlled by an SG whose ingress rules reference that same SG; attaching it to the vTM instances then grants them access.
- In addition to the SSH and administrative access granted to `RemoteAccessCIDR`, the HTTP and HTTPS service ports are covered by separate Security Group rules.
At present, the template produces a single output, `vTMManagementIPs`, which contains the EC2 instance IDs of the vTM instances along with their public IP addresses. An example output with two (default) vTMs:
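The instance IDs and addresses below are made up for illustration; the exact formatting of the output may differ:

```
i-0123456789abcdef0: 203.0.113.11
i-0fedcba9876543210: 203.0.113.12
```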
Note: as mentioned in the `WaitFor` parameter description above, this output is only accurate and useful during the initial deploy of the template. If you run a stack update (e.g., to change the number of vTMs in your cluster), or if the Auto Scaling Group that manages the vTMs makes changes (starts / terminates instances), this output will get out of sync, and there is currently no way to refresh it.
## vTM EC2 instance and Cluster management
This template manages vTM EC2 instances through an AWS Auto Scaling Group (ASG). This ASG is configured with its `MinSize`, `MaxSize`, and `DesiredCapacity` all set to the value of the `vTMQty` parameter, which defaults to 2.
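In CloudFormation terms, the ASG sizing looks roughly like the following sketch (logical resource names such as `vTMGroup` and `vTMLaunchConfig` are illustrative, not necessarily those used by the template; `vTMQty`, `PublicSubnet1`, and `PublicSubnet2` are the template parameters described above):

```yaml
vTMGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # All three sizes track the vTMQty parameter, so the ASG holds
    # exactly vTMQty instances unless adjusted by hand.
    MinSize: !Ref vTMQty
    MaxSize: !Ref vTMQty
    DesiredCapacity: !Ref vTMQty
    VPCZoneIdentifier:
      - !Ref PublicSubnet1
      - !Ref PublicSubnet2
    LaunchConfigurationName: !Ref vTMLaunchConfig
```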
At present, the ASG is not configured to receive signals from CloudWatch that could change its size based on the vTM EC2 instances' resource utilisation. If the ASG configuration is adjusted by hand, the ASG/vTM setup will accommodate the change by expanding or shrinking the vTM cluster accordingly. It is also capable of recovering from a complete loss of all vTM instances in the cluster.
Note: the above assumes that there is a separate, outside system capable of: (a) detecting vTM cluster membership changes (e.g., new vTMs joining the cluster, or individual vTMs leaving it) and updating, at the very least, the `machines` parameter of any Traffic IP Groups; and (b) detecting whether the vTM configuration was lost entirely (e.g., with the loss of the complete cluster) and re-applying the configuration.
A very simple implementation of such a system can be found in the UpdateClusterConfig.sh script (not a part of this template), which can run as a cron job on a separate EC2 instance to perform these two functions.
Note 2: when a vTM cluster is scaled down, a script that runs from a cron job on all vTMs will clean up the vTM nodes that have been terminated. This script relies on an internal mechanism that determines a "cluster leader", which is the vTM where the config clean-up will be performed. This internal mechanism, in turn, depends on vTM configuration - for it to work, there must be at least one Traffic IP Group, or a Service Discovery pool present. If neither of these exist in your vTM cluster's config, cluster clean-up on scale-down may not work.
## Integration with Pulse Services Director
If `vTMUserData` contains a set of keys that instruct vTM instances to attempt self-registration with Pulse Services Director (SD), the following factors need to be considered:
- The automatic clustering process used in this template will initially bring up each vTM instance as a single member of its own stand-alone vTM cluster. It will then attempt to register the vTM with the SD. If registration is successful, SD will record a new vTM and a new vTM Cluster.
- Once a vTM is up, it will search for other vTMs with the same AWS Tag `ClusterID` as itself, plus Tag `ClusterState` set to `Active`. If it finds such an instance, it will attempt to join that instance's cluster, abandoning its own cluster if the join is successful. This means that the vTM's original Cluster on the SD will become empty. In its present implementation, SD will not automatically reap these empty clusters.
- If a vTM instance goes away, for example, due to Auto Scaling Group action, SD will keep the vTM instance in its inventory. In its present implementation, SD will not automatically reap these defunct vTM instances.
## vTM deployment process and automated cluster management scripts
Deploy-time configuration of the vTM instances is described in the corresponding `LaunchConfiguration` part of the CloudFormation template. To implement this configuration, the template makes use of `AWS::CloudFormation::Init`. More specifically, `cfn-init` is used to:

- Download and install `jq` and the AWS CLI tool;
- Download the cluster management scripts and set `housekeeper.sh` up as a cron job to run every 2 minutes;
- Enable Developer Mode on the vTM.
The `housekeeper.sh` script runs on each vTM node in the cluster from cron, every 2 minutes. It performs the following functions:

- If it finds a copy of `autocluster.sh` in `/tmp`, it runs it, and then deletes it.
- Checks how many secondary private IP addresses the vTM has, and adds or removes them to make sure there are enough to back the configured Traffic IPs.
- Compares the list of vTMs in the cluster with the list of currently running vTM EC2 instances. If it finds a cluster member that doesn't have a matching running vTM EC2 instance, it removes that orphaned cluster member. This function is only performed on the vTM cluster leader.
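The orphan clean-up step boils down to a set comparison; a minimal sketch over mocked data (the real script uses the AWS CLI and the vTM REST API, and the node names here are made up):

```python
def find_orphaned_members(cluster_members, running_instances):
    """Return cluster members that have no matching running EC2 instance.

    cluster_members:   vTM node names as known to the cluster.
    running_instances: names of vTM EC2 instances currently running in the ASG.
    """
    return sorted(set(cluster_members) - set(running_instances))

members = ["vtm-a", "vtm-b", "vtm-c"]
running = ["vtm-a", "vtm-c"]
orphans = find_orphaned_members(members, running)
# orphans == ["vtm-b"] -- "vtm-b" was terminated and should be removed
# from the cluster configuration by the cluster leader.
```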
The `autocluster.sh` script is run once on each vTM instance in the cluster. This run is performed from the `housekeeper.sh` cron job, typically the very first time it runs after the vTM has been deployed.
The role of this script is to make a vTM instance either form a new cluster that other vTMs will join, or join a cluster that was created earlier.
To do this, the script uses a few AWS EC2 tags, specifically:
- `ClusterID`: used to identify EC2 instances that belong to the same vTM cluster.
- `ClusterState`: used to identify vTMs in a particular state, e.g., `Active`, meaning "member of an active cluster", and `Joining`, meaning the vTM is attempting to join an existing cluster.
- `ElectionState`: used to identify vTM instances that are currently forming a new cluster.

After this script finishes its run, all vTM instances that are members of the same cluster will have the same value of the `ClusterID` tag, and their `ClusterState` tag set to `Active`.
Briefly, the automatic clustering logic, from the point of view of a vTM that runs the script, is as follows:

- Search for EC2 instances with the same `ClusterID` tag as mine and `ClusterState` set to `Active`.
- If one is found, attempt to join that instance's cluster. If more than one is found, select a random one for the join operation. Once the join succeeds, set own `ClusterState` tag to `Active`, and exit.
- If none are found, set own `ClusterState` tag to `Active`, and exit.
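The steps above can be sketched as pure logic over mocked tag data (illustrative only; the real script queries EC2 tags via the AWS CLI and performs the join through the vTM REST API):

```python
import random

def autocluster_decision(my_cluster_id, instances):
    """Decide whether to join an existing cluster or form a new one.

    instances: list of dicts with 'id' and 'tags' (an EC2 tag mapping).
    Returns ("join", instance_id) or ("form_new_cluster", None).
    """
    candidates = [
        i for i in instances
        if i["tags"].get("ClusterID") == my_cluster_id
        and i["tags"].get("ClusterState") == "Active"
    ]
    if candidates:
        # Any Active member of my cluster will do; pick one at random.
        return ("join", random.choice(candidates)["id"])
    # No Active cluster members found: become the first Active member.
    return ("form_new_cluster", None)

peers = [
    {"id": "i-aaa", "tags": {"ClusterID": "Edge-LBs-001-vTM-Cluster",
                             "ClusterState": "Active"}},
    {"id": "i-bbb", "tags": {"ClusterID": "Other-Cluster",
                             "ClusterState": "Active"}},
]
action, target = autocluster_decision("Edge-LBs-001-vTM-Cluster", peers)
# action == "join", target == "i-aaa"
```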
## Integration with external vTM configuration automation
When vTM cluster configuration is managed by an external automation, it is important to know how to find a vTM that is "safe" to apply configuration to, i.e., one that will act in the expected manner (retain the applied config and replicate it to the other cluster members).
One way to do this with vTMs deployed by this template is to look for vTM EC2 instances with the `ClusterID` tag set to that of the chosen cluster, and the `ClusterState` tag set to `Active`. If multiple results are returned, you can choose a random one to connect to.
Note: this template sets the `ClusterID` tag on vTM EC2 instances to the name of the CloudFormation Stack plus the string "`-vTM-Cluster`". For example, if you called your CloudFormation stack "`Edge-LBs-001`", this template will set the `ClusterID` tag on the vTMs to "`Edge-LBs-001-vTM-Cluster`".
The `vTM-amis.sh` script in the `Tools` directory is used to build a list of vTM AMIs. By default, the template uses the AMIs of the Developer Edition of vTM. Use this tool to build a list of AMIs for any other listed SKU of the Pulse vTM, if necessary.