PowerVS Aspera Server

Overview

This module allows a user to easily create a Power Virtual Server running an Aspera server. Aspera is a data transport and streaming technology that provides a high-speed data transfer service. The automation connects the Power Virtual Server to either an existing NFS share or an attached block storage volume, where uploaded data will be stored.

This Terraform module deploys the following infrastructure:

  • PowerVS Server
  • PowerVS Public Network**
  • IBM API Key**
  • PowerVS Image (Stock Import)*
  • PowerVS Block Storage^

* A CentOS Stream 8 image is imported from the stock images if it doesn't exist in the Workspace.
** The public network and API key are only needed for setup. They are deleted once complete.
^ A block storage device is created when using attached storage.
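
For reference, a minimal sketch of how this module might be called from your own Terraform configuration. The module source address and all values shown are illustrative assumptions, not verified coordinates:

```hcl
# Minimal sketch: invoking this module with its required inputs.
# Source address and values are assumptions for illustration only.
module "aspera_server" {
  source = "github.com/IBM/power-aspera-server"

  ibmcloud_api_key         = var.ibmcloud_api_key   # IAM API key for the target account
  name                     = "aspera"               # basename for created resources
  power_workspace_name     = "my-powervs-workspace" # existing PowerVS Workspace
  power_workspace_location = "dal10"                # location of that Workspace
  power_subnet_name        = "my-private-subnet"    # existing private subnet
  ssh_key_name             = "my-ssh-key"           # key pair in the Workspace
  cos_bucket_name          = "my-aspera-bucket"     # COS bucket with installer and license

  # Exactly one data destination is required: an existing NFS share...
  nfs_mount_string = "192.168.0.10:/export"
  # ...or attached block storage (set export_volume_size > 0 instead).
  # export_volume_size = 500
}
```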

Compatibility

This module includes bash shell scripts that are run during the Terraform apply. These are written to work with most Linux distributions. You may also use this module with IBM Schematics.

Deployment Model

(Diagram: deployment model)

The Aspera Server (green box) is created by this automation. The other components show an example of how it might be accessed by onsite infrastructure. A VPN connection as shown in the diagram can be created by the PowerVS VPN Gateway IaC.

Attached Storage (Export Volume)

You may also choose to create the Aspera server with attached block storage instead of connecting to a remote NFS share. Use the variables that begin with export_volume_ to define this behavior. The only variable required to enable this is export_volume_size (greater than 0), which tells the automation how much space (in GB) to allocate for Aspera's storage. The automation will create a volume, then partition and format it. The volume will be exported via NFS, so you can mount it remotely from another host in the PowerVS network. The diagram below shows an example of how this could be used. Note that you cannot use attached storage and also mount a remote NFS share. When enabled, the output variable aspera_nfs_mount will be the local network endpoint for this volume.

(Diagram: Aspera server with block storage)
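
As a concrete illustration, a minimal terraform.tfvars fragment that enables attached storage; the values shown are assumptions, and the other required variables are omitted:

```hcl
# Illustrative terraform.tfvars fragment enabling attached storage.
export_volume_size      = 500       # GB; any value > 0 enables attached storage
export_volume_name      = "aspera"  # the created volume persists after destroy
export_volume_type      = "tier3"   # ssd, standard, tier0, tier1, tier3, or tier5k
export_volume_directory = "/aspera" # absolute directory exported via NFS

# nfs_mount_string must be left unset ("") when using attached storage.
```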

The block storage volume created is named using the export_volume_name. When the automation is destroyed, this volume will persist. This is to protect the data from being deleted after the Aspera server is no longer needed. If you wish to delete this volume, visit the PowerVS storage portal for your Workspace. If this automation is re-applied and the volume exists, it will be reused.

You may also have advanced use cases for the data volume. For example, you may create the volume ahead of deployment and specify that existing volume's name as export_volume_name. In this case, you will need to supply either an empty volume or a volume with a single ext4 partition labeled aspera-data. Any other configuration will fail to mount the volume. Alternatively, you may write data to one volume and then create or attach a new one by changing the export_volume_name variable and re-applying the automation. This can be useful if you wish to populate multiple volumes and then attach them to workload servers later, as sketched below.
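
For the volume-rotation case, a sketch of the variable change between applies; the volume names are hypothetical:

```hcl
# First apply: populate the initial volume.
export_volume_name = "aspera-data-1"
export_volume_size = 500

# Second apply (edit this value and re-apply): the first volume is
# left intact and persists, and a new volume is created and attached.
# export_volume_name = "aspera-data-2"
```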

Static IP Address

If you are planning to reuse the Aspera server with different configurations, you may wish to give it a static IP address. To do so, specify the variable powervs_ip_address; it must be an available IP address in the subnet you define with the variable power_subnet_name. This is useful when populating many block storage devices, as it avoids changing the client configuration with each one. Otherwise, a random IP from the network is assigned with each apply.
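
A minimal sketch of the two variables involved; both values are assumptions:

```hcl
# Pin the Aspera server to a fixed address on the existing subnet.
power_subnet_name  = "my-private-subnet" # subnet must contain the address below
powervs_ip_address = "192.168.50.10"     # must be available in that subnet
```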

Setup Requirements

Prerequisites

Upgrading your IBM Cloud Account

To order and use IBM Cloud services, billing information is required for your account. See Upgrading Your Account.

Install Terraform

If you wish to run Terraform locally, see Install Terraform.

IBM Cloud API Key

You must supply an IBM Cloud API key so that Terraform can connect to the IBM Cloud Terraform provider. See Create API Key.

Aspera Binary Download

You can download the Aspera binaries needed for the server and client software from the IBM Aspera Download page.

Install Files

Before running this automation you must upload the Aspera binary and your license file to a Cloud Object Store bucket. See Getting started with Cloud Object Storage or Create Object Storage.

  • Aspera Binary: the Linux RPM binary; you will need the ppc64le version. Its file name starts with ibm-aspera-hsts and ends with linux-ppc64le-release.rpm.

  • Aspera License: This will be an encoded file with the extension aspera-license.

PowerVS Workspace and Subnet

Before deploying, you will need to have created a landing zone for the Aspera server. This could be a network in an existing PowerVS Workspace or a new one you create. The volumes Aspera will write to must be in the Workspace in which you ultimately wish to use them. These can be volumes that Aspera creates or NFS shares from other machines on the same network.

NFS Export

Optionally, the Aspera server created by this automation can write data to an existing NFS share in the same network (subnet). If you choose this option, you must create that NFS export for the data destination before deploying this automation. Supply the NFS mount string in the format <IP>:<shared directory> via this automation's variable nfs_mount_string. This option is not compatible with creating attached storage for the Aspera server.
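
For example, a terraform.tfvars fragment pointing Aspera at an existing share; the address and path are assumptions:

```hcl
# Use an existing NFS export as the data destination.
nfs_mount_string = "192.168.50.20:/export/aspera"

# export_volume_size must remain 0 when mounting a remote share.
```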

Connectivity

The Aspera server created by this automation will be connected to the private network you specify with the variable power_subnet_name. This will allow it to connect with PowerVS machines on that network. However, you must create a link between that subnet and your local network to upload files to the Aspera server. One method of doing this is using the PowerVS VPN Gateway automation.

Deployment

Terraform CLI

You may choose to use the Terraform command line to deploy this module. You can download Terraform here: Install Terraform. Once installed, run terraform init and then terraform apply to create the Aspera server. When you run apply, Terraform will prompt you for the required variables.

If you need to specify any of the optional variables, you can do so by exporting them as environment variables with the prefix TF_VAR_, by using a .tfvars file, or by passing them to the terraform command with -var. For more information see Assigning Values to Root Module Variables.
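
As an example of the .tfvars approach, a small file overriding a few optional values; the variable names come from the Inputs table below, and the values are assumptions:

```hcl
# optional.tfvars -- pass with: terraform apply -var-file=optional.tfvars
powervs_instance_cores  = 4       # default is 2
powervs_instance_memory = 16      # GiB; default is 8
powervs_system_type     = "s1022" # default is "s922"
```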

IBM Schematics

Schematics is an IBM Cloud service that delivers Infrastructure as Code (IaC) tooling as a service. You can use the capabilities of Schematics to consistently deploy and manage your cloud infrastructure environments. From a single pane of glass, you can run end-to-end automation to build one or more stacks of cloud resources, manage their lifecycle, manage changes in their configurations, deploy your app workloads, and perform day-2 operations.

To create an Aspera Server with Schematics, first create a workspace. Specify this repository for the repository URL and set the Terraform version to 1.5 or greater. Click Next, and then give the workspace a name and any other details you'd like. You may choose to use any Resource Group or Location.

(Screenshot: Specify Template Workspace Details)

Once your Workspace is created, use the Variables section below the Details section on the Settings page to configure Aspera. You will need to edit and specify every variable whose description does not start with "Optional variable". If needed, also specify any optional variables.

(Screenshot: Variables)

After setting the variables, you may use the "Apply plan" button at the top of the page to deploy.

(Screenshot: Apply plan)

Wait for Deployment

Once the automation is applied, you will need to wait for the PowerVS server to boot and for the install scripts to complete. This can take 10-20 minutes. One way to verify that the install is complete is to look at the Aspera PowerVS server's metadata in the Cloud Portal. During the install, two network adapters will be present. After the install is complete, only one network adapter (internal) will remain attached to the PowerVS server.

Post Deployment

Aspera Connection

This automation has two output variables that are shown once it completes. aspera_endpoint is the private network IP address of the Aspera server. You must have completed the connectivity step in order to reach it. Use your Aspera client to create a new connection to this endpoint. For credentials, use the username root and supply the private key matching the ssh_key_name key pair you specified earlier.

NFS Export (Attached Storage)

If you've chosen to create the Aspera server with attached storage, you can access this by mounting the output variable aspera_nfs_mount from another server in the same network. This will only be accessible while the Aspera server is running.

Working With Aspera

For more information on migration to PowerVS using Aspera technologies, visit the IBM Cloud documentation: Migration strategies for IBM Power Systems Virtual Servers.

Variable Behavior

There are a number of variables defined in variables.tf that this Terraform module uses to deploy and configure your infrastructure. See Inputs for a full list of variables with their descriptions, defaults, and conditions.

Support

If you have problems or questions when using the underlying IBM Cloud infrastructure, you can get help by searching for information or by asking questions through one of the forums. You can also create a case in the IBM Cloud console.

For information about opening an IBM support ticket, see Contacting support.

To report bugs or make feature requests regarding this Terraform module, please create an issue in this repository.

References

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.5.0 |
| external | 2.3.2 |
| ibm | 1.62.0 |
| local | 2.4.0 |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| ibm_iam_api_key.temp | resource |
| ibm_pi_instance.aspera | resource |
| local_file.create_resource | resource |
| external_external.create_public_network | data source |
| external_external.create_volume | data source |
| external_external.import_catalog_image | data source |
| ibm_iam_auth_token.current | data source |
| ibm_pi_catalog_images.all | data source |
| ibm_pi_key.aspera | data source |
| ibm_pi_network.private | data source |
| ibm_resource_instance.power_workspace | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| aspera_base_image_name | Debug variable to specify the base OS for the Aspera server. This Aspera server automation has been tested with CentOS 8.3 on PowerVS. Use this variable if you wish to try another version. | string | "CentOS-Stream-8" | no |
| cos_bucket_name | COS bucket that contains the Aspera installer and license file. | string | n/a | yes |
| cos_region | Optional variable to specify the region the COS bucket resides in. Available regions are: jp-osa, jp-tok, eu-de, eu-gb, ca-tor, us-south, us-east, and br-sao. Please see Regions for an updated list. If not specified, the region corresponding to the power_workspace_location will be used. | string | "" | no |
| data_location_file_path | Debug variable to indicate where the file with PER location data is stored. This variable is used for testing and should not normally be altered. | string | "./data/locations.yaml" | no |
| export_volume_directory | Optional variable for the directory used for the export volume. Must be absolute. | string | "/aspera" | no |
| export_volume_name | Optional variable for the name of the volume created to export. | string | "aspera" | no |
| export_volume_size | Either nfs_mount_string or export_volume_size MUST be specified. Size of disk in GB for the volume created to export. When greater than 0, Aspera will use this as the destination instead of nfs_mount_string. When equal to 0, the volume is not created and Aspera will use nfs_mount_string as its destination. The export volume will be exported as an NFS share. | number | 0 | no |
| export_volume_type | Optional variable for the type of disk for the volume created to export. Supported values are ssd, standard, tier0, tier1, tier3, and tier5k. | string | "tier3" | no |
| ibmcloud_api_key | The IBM Cloud platform API key needed to deploy IAM-enabled resources. | string | n/a | yes |
| name | The name used for the Aspera server. Other resources created will use this as their basename, suffixed by a random identifier. | string | n/a | yes |
| nfs_mount_string | Either nfs_mount_string or export_volume_size MUST be specified. Specifies the NFS mount string in the format <IP>:<shared directory>, used as the Aspera destination. | string | "" | no |
| power_subnet_name | Existing power network subnet name the Aspera server will be attached to. | string | n/a | yes |
| power_workspace_location | The location used to create the power workspace. Available locations are: dal10, dal12, us-south, us-east, wdc06, wdc07, sao01, sao04, tor01, mon01, eu-de-1, eu-de-2, lon04, lon06, syd04, syd05, tok04, and osa21. Please see PowerVS Locations for an updated list. | string | n/a | yes |
| power_workspace_name | Existing power workspace name where the Aspera server will be created. | string | n/a | yes |
| powervs_instance_cores | Optional variable to set CPU cores for the PowerVS instance. | number | 2 | no |
| powervs_instance_memory | Optional variable to set memory in GiB for the PowerVS instance. | number | 8 | no |
| powervs_ip_address | Optional variable to statically set the private network IP address for the Aspera server. The default behavior is to randomly assign an IP from the power_subnet_name network. | string | "" | no |
| powervs_processor_type | Optional variable to set the CPU processor type. Available options: shared, capped, or dedicated. | string | "shared" | no |
| powervs_system_type | Optional variable to set the Power system type. Available options: s922, e880, e980, s1022, and e1080. | string | "s922" | no |
| ssh_key_name | SSH key for the Aspera server. This key must exist in the PowerVS Workspace. It is used for root SSH access as well as the Aspera connection. | string | n/a | yes |

Outputs

| Name | Description |
|------|-------------|
| aspera_endpoint | Aspera server endpoint for data transfer. |
| aspera_nfs_mount | Mount point for exported volume. |
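
If you call this automation as a child module, these outputs can be surfaced from your root configuration. A sketch, assuming the hypothetical module label "aspera_server" from the usage example earlier in this README:

```hcl
# Re-export the child module's outputs from the root module.
output "aspera_endpoint" {
  description = "Private network IP address of the Aspera server"
  value       = module.aspera_server.aspera_endpoint
}

output "aspera_nfs_mount" {
  description = "NFS endpoint for the exported volume (attached storage only)"
  value       = module.aspera_server.aspera_nfs_mount
}
```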