Google Cloud Platform deployment with Terraform and Salt

This subdirectory contains the cloud-specific part of this repository for use with Google Cloud Platform (GCP). Looking for another provider? See Getting started

Quickstart

This is a very short quickstart guide. For detailed information see Using SUSE Automation to Deploy an SAP HANA Cluster on GCP - Getting Started🔗.

For detailed information and deployment options have a look at terraform.tfvars.example.

  1. Rename terraform.tfvars:

    mv terraform.tfvars.example terraform.tfvars
    

    Now, the created file must be configured to define the deployment.

    Note: Find help for the IP address configuration below in Customization.

  2. Generate private and public keys for the cluster nodes without specifying the passphrase:

    Alternatively, you can set the pre_deployment variable to automatically create the cluster ssh keys.

    mkdir -p ../salt/sshkeys
    ssh-keygen -f ../salt/sshkeys/cluster.id_rsa -q -P ""
    

    The key files need to have the same names as defined in terraform.tfvars.

  3. Adapt the Salt pillars manually or set the pre_deployment variable to automatically copy the example pillar files (see the terraform.tfvars example after this quickstart).

  4. Configure Terraform access to GCP

    • First, a GCP account with an active billing account is required.

    • Install the Google Cloud SDK (gcloud) following Google's documentation🔗.

    • Create a new personal key for the service account of your Google Cloud project🔗.

      See also GCP with terraform tutorial🔗

    • Log in with gcloud init.

      Note: You must run this command to use the gcloud SDK and to apply this Terraform configuration:

      export GOOGLE_APPLICATION_CREDENTIALS=/path/to/<PROJECT-ID>-xxxxxxxxx.json
      
  5. Deploy

    The deployment can now be started with:

    terraform init
    terraform workspace new myexecution # optional
    terraform workspace select myexecution # optional
    terraform plan
    terraform apply
    

    To get rid of the deployment, destroy the created infrastructure with:

    terraform destroy
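
As a reference for steps 1 to 3, a terraform.tfvars fragment for a first test deployment could look like the sketch below. It only uses variables mentioned in this document; all other settings (project, region, machine types, images, etc.) still have to be taken from terraform.tfvars.example, and the values shown here are illustrative assumptions.

# Illustrative terraform.tfvars fragment -- values are assumptions, adjust them to your project
vnet_address_range = "10.0.0.0/24"   # address range the node IPs are taken from
pre_deployment     = true            # auto-create the cluster ssh keys and copy the example pillar files
bastion_enabled    = true            # keep the bastion as the single public entry point (default)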
    

Bastion

By default, the bastion machine is enabled in GCP (it can be disabled). It is the only machine in the deployment with a public IP address. Connect using ssh and the selected admin user with:

ssh -i $(terraform output -raw ssh_bastion_private_key) -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $(terraform output -raw ssh_user)@$(terraform output -raw bastion_public_ip)

To log in to the HANA machines and other instances, use:

SSH_USER=$(terraform output -raw ssh_user)
BASTION=$(terraform output -raw bastion_public_ip)
SSH_BASTION_PRIVATE_KEY=$(terraform output -raw ssh_bastion_private_key)
SSH_PRIVATE_KEY=$(terraform output -raw ssh_private_key)
SSH_OPTIONS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
IP=$(terraform output -json hana_ip | jq '.[0]') # change to match the host you want to connect to
ssh -o ProxyCommand="ssh -W %h:%p ${SSH_USER}@${BASTION} -i ${SSH_BASTION_PRIVATE_KEY} ${SSH_OPTIONS}" -i ${SSH_PRIVATE_KEY} ${SSH_OPTIONS} ${SSH_USER}@${IP}

# OR in one single command

ssh -o ProxyCommand="ssh -W %h:%p $(terraform output -raw ssh_user)@$(terraform output -raw bastion_public_ip) -i $(terraform output -raw ssh_bastion_private_key)  -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" -i $(terraform output -raw ssh_private_key) -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $(terraform output -raw ssh_user)@$(terraform output -json hana_ip | jq '.[0]')
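
If you connect often, an equivalent ~/.ssh/config entry using ProxyJump avoids retyping the ProxyCommand line. This is only a sketch: the host aliases are made up, and the placeholders must be replaced with the corresponding terraform output values.

# ~/.ssh/config -- illustrative aliases, fill in the values from `terraform output`
Host sap-bastion
    HostName <bastion_public_ip>
    User <ssh_user>
    IdentityFile <path to ssh_bastion_private_key>
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no

Host hana01
    HostName <first entry of hana_ip>
    User <ssh_user>
    IdentityFile <path to ssh_private_key>
    ProxyJump sap-bastion
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no

With this in place, ssh hana01 connects through the bastion.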

To disable the bastion use:

bastion_enabled = false

Destroy the created infrastructure with:

terraform destroy

High-level description

This Terraform configuration deploys SAP HANA in a High-Availability Cluster on SUSE Linux Enterprise Server for SAP Applications in the Google Cloud Platform.

The infrastructure deployed includes:

  • Virtual network
  • Subnets within the virtual network
  • Firewall group with rules for access to the instances created in the subnet. Only SSH, HTTP, HTTPS and the HAWK service are allowed as external network traffic; within the subnet, all traffic is allowed.
  • Public IP access for the virtual machines (if enabled)
  • Compute instance groups
  • Load balancers
  • Virtual machines
  • Block devices
  • Shared Filestore filesystems (if enabled)

By default, this configuration will create 3 instances in GCP: one for support services (mainly iSCSI as most other services - DHCP, NTP, etc - are provided by Google) and 2 cluster nodes, but this can be changed to deploy more cluster nodes as needed.

Customization

The deployment can be adjusted through Terraform variables. These variables can be configured using a terraform.tfvars file; an example is available in terraform.tfvars.example. To find all the available variables, check the variables.tf file.

QA deployment

The project provides the option to run the deployment in a Test or QA mode. This mode only enables packages coming from the SLE channels, so no other packages will be used. Set offline_mode = true in terraform.tfvars to enable it.
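
In terraform.tfvars:

offline_mode = true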

Pillar files configuration

Besides the terraform.tfvars file usage to configure the deployment, a more advanced configuration is available through pillar files customization. Find more information here.

Delete secrets and sensitive information after deployment

To delete e.g. /etc/salt/grains and other sensitive information from the hosts after a successful deployment, you can set cleanup_secrets = true in terraform.tfvars. This is disabled by default.
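
In terraform.tfvars:

cleanup_secrets = true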

Use already existing network resources

Already existing network resources (VPC, subnet, firewall rules, etc.) can be used by configuring the terraform.tfvars file and adjusting some variables. An example of how to use them is available in terraform.tfvars.example.
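
A minimal sketch of what this could look like is shown below. The variable names vpc_name and subnet_name are assumptions used only for illustration; the authoritative names are the ones documented in terraform.tfvars.example and variables.tf.

# Hypothetical example -- variable names are assumptions, see terraform.tfvars.example for the real ones
vpc_name    = "my-existing-vpc"      # reuse this VPC instead of creating a new one
subnet_name = "my-existing-subnet"   # reuse this subnet instead of creating a new one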

Autogenerated network addresses

The network addresses of the nodes can be assigned automatically, so that this configuration can be skipped. For that, remove or comment out all the variables related to IP addresses (more information in variables.tf). With this approach, all the addresses are derived from the provided virtual network address range (vnet_address_range).

Note: If you are specifying the IP addresses manually, make sure they are valid IP addresses and not currently in use by existing instances. If the account is shared, it is recommended to use unique addresses for each deployment to avoid address collisions.

Example based on 10.0.0.0/24 VPC address range. The virtual addresses must be outside of the VPC address range.

| Service | Variable | Addresses | Comments |
| --- | --- | --- | --- |
| Bastion | - | 10.0.2.5 | |
| iSCSI server | iscsi_srv_ip | 10.0.0.4 | |
| Monitoring | monitoring_srv_ip | 10.0.0.5 | |
| HANA IPs | hana_ips | 10.0.0.10, 10.0.0.11 | |
| HANA cluster vIP | hana_cluster_vip | 10.0.1.12 | Only used if HA is enabled in HANA |
| HANA cluster vIP secondary | hana_cluster_vip_secondary | 10.0.1.13 | Only used if the Active/Active setup is used |
| DRBD IPs | drbd_ips | 10.0.0.20, 10.0.0.21 | |
| DRBD cluster vIP | drbd_cluster_vip | 10.0.1.22 | |
| S/4HANA or NetWeaver IPs | netweaver_ips | 10.0.0.30, 10.0.0.31, 10.0.0.32, 10.0.0.33 | Addresses for the ASCS, ERS, PAS and AAS. The sequence will continue if there are more AAS machines |
| S/4HANA or NetWeaver virtual IPs | netweaver_virtual_ips | 10.0.1.34, 10.0.1.35, 10.0.1.36, 10.0.1.37 | The first virtual address will be the next in the sequence of the regular S/4HANA or NetWeaver addresses |
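
Expressed in terraform.tfvars, the addresses from the table above would be set roughly as in the following sketch (whether each variable is a string or a list is an assumption here; check variables.tf for the exact types):

# Illustrative sketch of the example addresses above
iscsi_srv_ip          = "10.0.0.4"
monitoring_srv_ip     = "10.0.0.5"
hana_ips              = ["10.0.0.10", "10.0.0.11"]
hana_cluster_vip      = "10.0.1.12"
drbd_ips              = ["10.0.0.20", "10.0.0.21"]
drbd_cluster_vip      = "10.0.1.22"
netweaver_ips         = ["10.0.0.30", "10.0.0.31", "10.0.0.32", "10.0.0.33"]
netweaver_virtual_ips = ["10.0.1.34", "10.0.1.35", "10.0.1.36", "10.0.1.37"]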

HANA configuration

HANA data disks configuration

The whole disk configuration is made by configuring a variable named hana_data_disks_configuration. It encapsulates hard disk selection, logical volumes and data destinations in a compact form. This section describes all parameters line by line.

variable "hana_data_disks_configuration" {
  disks_type       = "pd-ssd,pd-ssd,pd-ssd,pd-ssd,pd-ssd,pd-ssd,pd-ssd"
  disks_size       = "128,128,128,128,128,128,128"
  # The next variables are used during the provisioning
  luns             = "0,1#2,3#4#5#6"
  names            = "data#log#shared#usrsap#backup"
  lv_sizes         = "100#100#100#100#100"
  paths            = "/hana/data#/hana/log#/hana/shared#/usr/sap#/hana/backup"
}

During deployment, the HANA VM expects a standard set of directories for its data storage: /hana/data, /hana/log, /hana/shared, /usr/sap and /hana/backup.

A HANA VM typically uses 5 to 10 disks, depending on the usage scenario. These are combined into several logical volumes. Finally, the standard mount points are assigned to these logical volumes.

The first two parameters, disks_type and disks_size, are used to provision the resources in Terraform. Each disk corresponds to one entry; every further disk is added by appending another comma-separated entry to each parameter.

In detail: disks_type selects the disk type, which determines bandwidth and redundancy options. You can find the possible selections and costs at GCP – Disks. The parameter disks_size sets the size of each disk in GB.

The disks are counted from left to right, beginning with 0. This number is called the LUN. A Logical Unit Number (LUN) is a SCSI concept for addressing physical drives through a logical abstraction. If you have 5 disks, you count 0,1,2,3,4.
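
For example, a hypothetical two-disk layout (disk types and sizes chosen purely for illustration) would be written like this, where the first entry is LUN 0 and the second is LUN 1:

disks_type = "pd-ssd,pd-standard"   # LUN 0 is a pd-ssd disk, LUN 1 is a pd-standard disk
disks_size = "512,1024"             # LUN 0 has 512 GB, LUN 1 has 1024 GB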

After describing the physical disks, the logical volumes can be specified using the parameters luns, names, lv_sizes and paths. The comma combines several values into one entry, and the # sign separates volume groups. If you think of the # sign as a column separator, the example above looks like this table:

| Parameter | VG1 | VG2 | VG3 | VG4 | VG5 |
| --- | --- | --- | --- | --- | --- |
| luns | 0,1 | 2,3 | 4 | 5 | 6 |
| names | data | log | shared | usrsap | backup |
| lv_sizes | 100 | 100 | 100 | 100 | 100 |
| paths | /hana/data | /hana/log | /hana/shared | /usr/sap | /hana/backup |

As you can see, there are 5 volume groups specified. Each volume group has its own name, which is set with the parameter names. The parameter luns assigns one LUN or a combination of several LUNs to a volume group. In the example above, data uses the disks with LUNs 0 and 1, while backup only uses the disk with LUN 6. A LUN can only be assigned to one volume group.

Let's use the example of the volume group data again to show how a HANA VM is affected. As mentioned, the data volume group uses two physical disks. They are used as physical volumes (e.g. /dev/sdc and /dev/sdd, corresponding to LUNs 0 and 1). Both physical volumes belong to the same volume group, named vg_hana_data. A logical volume named lv_hana_data_0 allocates 100% of this volume group; the logical volume name is generated from the volume group name. The logical volume is mounted at the mount point /hana/data.

It is also possible to deploy several logical volumes to one volume group. For example:

| Parameter | VG1 |
| --- | --- |
| luns | 0,1 |
| names | datalog |
| lv_sizes | 75,25 |
| paths | /hana/data,/hana/log |

If both disks have a size of 512GB, a first logical volume named lv_hana_datalog_0 with a size of 768GB (75%) and a second logical volume named lv_hana_datalog_1 with a size of 256GB (25%) are created. Both logical volumes are in the volume group vg_hana_datalog. The first is mounted at /hana/data and the second at /hana/log.
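
Expressed in the configuration format shown above, this layout would look roughly like the following reduced sketch. It only covers the data/log volume group (a complete configuration also needs /hana/shared, /usr/sap and /hana/backup), and the disk type is an assumption for illustration:

hana_data_disks_configuration = {
  disks_type = "pd-ssd,pd-ssd"          # two disks, LUNs 0 and 1
  disks_size = "512,512"                # 512 GB each, 1024 GB in total
  luns       = "0,1"                    # both LUNs form one volume group
  names      = "datalog"                # volume group vg_hana_datalog
  lv_sizes   = "75,25"                  # 75% for data, 25% for log
  paths      = "/hana/data,/hana/log"   # mount points of the two logical volumes
}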

Advanced Customization

Terraform Parallelism

When deploying many scale-out nodes, e.g. 8 or 10, you must pass the -parallelism=n🔗 parameter to terraform apply operations.

It "limit[s] the number of concurrent operation as Terraform walks the graph."

The default value of 10 is not sufficient, because otherwise not all HANA cluster nodes get provisioned at the same time. A value of e.g. 30 should not hurt for most use cases.
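
For example, a larger scale-out deployment could be applied with:

terraform apply -parallelism=30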

How to upload SAP install sources

Details on which installation sources are needed can be found in the SAP software documentation.

A Google Cloud Storage bucket must be created to hold the files of the HANA installer.

The bucket may be created and populated with these commands:

This is an example where 51053381 is the targeted HANA version to upload.

gsutil mb gs://sap_instmasters/
gsutil cp -r 51053381/ gs://sap_instmasters/

Bucket names have more restrictions than object names and must be globally unique, because every bucket resides in a single Cloud Storage namespace. Also, bucket names can be used with a CNAME redirect, which means they need to conform to DNS naming conventions. For more information, see the bucket naming guidelines🔗.

How to upload a custom image

A bucket for the images must be created to hold the custom SLES images to use.

gsutil mb gs://sles-images

Upload the image you want to use with:

gsutil cp OS-Image-File-for-SLES4SAP-for-GCP.tar.gz gs://sles-images/OS-Image-File-for-SLES4SAP-for-GCP.tar.gz

Create a bootable image

gcloud compute images create OS-Image-File-for-SLES4SAP-for-GCP --source-uri gs://sles-images/OS-Image-File-for-SLES4SAP-for-GCP.tar.gz

Troubleshooting

In case you have some issue, take a look at this troubleshooting guide.