- Supported products
- Prerequisites
- Authentication
- IAM permissions required to deploy EXAScaler Cloud
- Enable Google Cloud API Services
- Configure Terraform
- List of available variables
- Common options
- Service account
- Waiter to check progress and result for deployment
- Security options
- Network options
- Subnetwork options
- Boot disk options
- Boot image options
- Management server options
- Management target options
- Monitoring target options
- Metadata server options
- Metadata target options
- Object Storage server options
- Object Storage target options
- Compute client options
- Compute client target options
- Deploy an EXAScaler Cloud environment
- Access the EXAScaler Cloud environment
- Add storage capacity in an existing EXAScaler Cloud environment
- Upgrade an existing EXAScaler Cloud environment
- Run benchmarks
- Install new EXAScaler Cloud clients
- Client-side encryption
- Collect inventory and support bundle
- Destroy the EXAScaler Cloud environment
The steps below show how to create an EXAScaler Cloud environment on Google Cloud Platform using Terraform.
Product | Version | Base OS | Image family |
---|---|---|---|
EXAScaler Cloud | 5.2.6 | Red Hat Enterprise Linux 7.9 | exascaler-cloud-5-2-redhat |
EXAScaler Cloud | 5.2.6 | CentOS Linux 7.9 | exascaler-cloud-5-2-centos |
EXAScaler Cloud | 6.0.1 | Red Hat Enterprise Linux 7.9 | exascaler-cloud-6-0-redhat |
EXAScaler Cloud | 6.0.1 | CentOS Linux 7.9 | exascaler-cloud-6-0-centos |
EXAScaler Cloud | 6.1.0 | Red Hat Enterprise Linux 7.9 | exascaler-cloud-6-1-redhat |
EXAScaler Cloud | 6.1.0 | CentOS Linux 7.9 | exascaler-cloud-6-1-centos |
EXAScaler Cloud | 6.2.0 | Red Hat Enterprise Linux 8.7 | exascaler-cloud-6-2-rhel-8 |
EXAScaler Cloud | 6.2.0 | CIS Red Hat Enterprise Linux 8.7 Benchmark v2.0.0 Level 1 | exascaler-cloud-6-2-cis-rhel8-l1 |
EXAScaler Cloud | 6.2.0 | CIS Red Hat Enterprise Linux 8.7 Benchmark v2.0.0 Level 2 | exascaler-cloud-6-2-cis-rhel8-l2 |
EXAScaler Cloud | 6.2.0 | CIS Red Hat Enterprise Linux 8.7 STIG Benchmark v1.0.0 | exascaler-cloud-6-2-cis-rhel8-stig |
EXAScaler Cloud | 6.2.0 | Rocky Linux 8.7 | exascaler-cloud-6-2-rocky-linux-8 |
EXAScaler Cloud | 6.2.0 | Rocky Linux 8.7 optimized for GCP | exascaler-cloud-6-2-rocky-linux-8-optimized-gcp |
EXAScaler Cloud | 6.2.0 | CIS Rocky Linux 8.7 Benchmark v1.0.0 Level 1 | exascaler-cloud-6-2-cis-rocky8-l1 |
EXAScaler Cloud | 6.3.0 | Red Hat Enterprise Linux 8.8 | exascaler-cloud-6-3-rhel-8 |
EXAScaler Cloud | 6.3.0 | CIS Red Hat Enterprise Linux 8.8 Benchmark v2.0.0 Level 1 | exascaler-cloud-6-3-cis-rhel8-l1 |
EXAScaler Cloud | 6.3.0 | CIS Red Hat Enterprise Linux 8.8 Benchmark v2.0.0 Level 2 | exascaler-cloud-6-3-cis-rhel8-l2 |
EXAScaler Cloud | 6.3.0 | CIS Red Hat Enterprise Linux 8.8 STIG Benchmark v1.0.0 | exascaler-cloud-6-3-cis-rhel8-stig |
EXAScaler Cloud | 6.3.0 | Rocky Linux 8.8 | exascaler-cloud-6-3-rocky-linux-8 |
EXAScaler Cloud | 6.3.0 | Rocky Linux 8.8 optimized for GCP | exascaler-cloud-6-3-rocky-linux-8-optimized-gcp |
EXAScaler Cloud | 6.3.0 | CIS Rocky Linux 8.8 Benchmark v1.0.0 Level 1 | exascaler-cloud-6-3-cis-rocky8-l1 |
EXAScaler Cloud deployment provides support for installing and configuring third-party clients. EXAScaler Cloud client software comprises a set of kernel modules which must be compatible with the running kernel, as well as userspace tools for interacting with the filesystem.
Vendor | Product | Version | Arch | Kernel Version for binary package | Kernel Version for DKMS package |
---|---|---|---|---|---|
Red Hat | RHEL | 7.6 | x86_64 | 3.10.0-957.99.1.el7.x86_64 | 3.10.0 |
Red Hat | RHEL | 7.7 | x86_64 | 3.10.0-1062.77.1.el7.x86_64 | 3.10.0 |
Red Hat | RHEL | 7.8 | x86_64 | 3.10.0-1127.19.1.el7.x86_64 | 3.10.0 |
Red Hat | RHEL | 7.9 | x86_64 | 3.10.0-1160.108.1.el7.x86_64 | 3.10.0 |
Red Hat | RHEL | 8.0 | x86_64 | 4.18.0-80.31.1.el8_0.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.1 | x86_64 | 4.18.0-147.94.1.el8_1.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.2 | x86_64 | 4.18.0-193.120.1.el8_2.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.3 | x86_64 | 4.18.0-240.22.1.el8_3.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.4 | x86_64 | 4.18.0-305.120.1.el8_4.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.5 | x86_64 | 4.18.0-348.23.1.el8_5.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.6 | x86_64 | 4.18.0-372.87.1.el8_6.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.7 | aarch64 | 4.18.0-425.19.2.el8_7.aarch64 | 4.18.0 |
Red Hat | RHEL | 8.7 | x86_64 | 4.18.0-425.19.2.el8_7.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.8 | aarch64 | 4.18.0-477.43.1.el8_8.aarch64 | 4.18.0 |
Red Hat | RHEL | 8.8 | x86_64 | 4.18.0-477.43.1.el8_8.x86_64 | 4.18.0 |
Red Hat | RHEL | 8.9 | aarch64 | 4.18.0-513.11.1.el8_9.aarch64 | 4.18.0 |
Red Hat | RHEL | 8.9 | x86_64 | 4.18.0-513.11.1.el8_9.x86_64 | 4.18.0 |
Red Hat | RHEL | 9.0 | aarch64 | 5.14.0-70.85.1.el9_0.aarch64 | 5.14.0 |
Red Hat | RHEL | 9.0 | x86_64 | 5.14.0-70.85.1.el9_0.x86_64 | 5.14.0 |
Red Hat | RHEL | 9.1 | aarch64 | 5.14.0-162.23.1.el9_1.aarch64 | 5.14.0 |
Red Hat | RHEL | 9.1 | x86_64 | 5.14.0-162.23.1.el9_1.x86_64 | 5.14.0 |
Red Hat | RHEL | 9.2 | aarch64 | 5.14.0-284.48.1.el9_2.aarch64 | 5.14.0 |
Red Hat | RHEL | 9.2 | x86_64 | 5.14.0-284.48.1.el9_2.x86_64 | 5.14.0 |
Red Hat | RHEL | 9.3 | aarch64 | 5.14.0-362.18.1.el9_3.aarch64 | 5.14.0 |
Red Hat | RHEL | 9.3 | x86_64 | 5.14.0-362.18.1.el9_3.x86_64 | 5.14.0 |
Canonical | Ubuntu | 16.04 LTS | amd64 | — | 4.4 - 4.15 |
Canonical | Ubuntu | 18.04 LTS | amd64 | — | 4.15 - 5.4 |
Canonical | Ubuntu | 20.04 LTS | amd64 | — | 5.4 - 5.15 |
Canonical | Ubuntu | 20.04 LTS | arm64 | — | 5.4 - 5.15 |
Canonical | Ubuntu | 22.04 LTS | amd64 | — | 5.15 - 6.2 |
Canonical | Ubuntu | 22.04 LTS | arm64 | — | 5.15 - 6.2 |
Notes:
- Client packages for aarch64 and arm64 architectures are available only for EXAScaler Cloud 6.3
- Client packages for Canonical Ubuntu 16.04 LTS are not available for EXAScaler Cloud 6.3
- You need a Google account
- Your system needs the Google Cloud SDK and Terraform installed
Before deploying the Terraform code for Google Cloud Platform, you need to authenticate using the Google Cloud SDK.
If you are running Terraform on your workstation, you can authenticate using User Application Default Credentials:
gcloud auth application-default login
Output:
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?response_type=code
Credentials saved to file: [/Users/user/.config/gcloud/application_default_credentials.json]
Terraform will then automatically use the saved User Application Default Credentials to call Google Cloud APIs.
If you are running Terraform on Google Cloud, you can configure that instance or cluster to use a Google Service Account. This will allow Terraform to authenticate to Google Cloud without having to store a separate credential file. Learn more.
If you are running Terraform outside of Google Cloud, you can generate an external credential configuration file or a service account key file and set the GOOGLE_APPLICATION_CREDENTIALS
environment variable to the path of the JSON file. Terraform will use that file for authentication. In general, Terraform supports the full range of authentication options documented for Google Cloud.
The basic Editor role is required to deploy an EXAScaler Cloud environment on Google Cloud Platform. If you want to use minimum permissions, create a custom role and assign only the required permissions.
Any action that Terraform performs requires the corresponding API to be enabled. Terraform requires the following Google Cloud API services:
gcloud services enable cloudbilling.googleapis.com
gcloud services enable apigateway.googleapis.com
gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable runtimeconfig.googleapis.com
gcloud services enable deploymentmanager.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
For a list of available services, visit the API Library page or run gcloud services list --available. Learn more.
Download the Terraform scripts and extract the tarball:
curl -sL https://github.com/DDNStorage/exascaler-cloud-terraform/archive/refs/tags/scripts/2.2.0.tar.gz | tar xz
Change the Terraform variables according to your requirements:
cd exascaler-cloud-terraform-scripts-2.2.0/gcp
vi terraform.tfvars
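If you prefer generating the file from a script, a minimal terraform.tfvars can be written from the shell. The project ID and zone below are hypothetical placeholders, not values from a real deployment; replace them with your own:

```shell
# Write a minimal terraform.tfvars (all values are illustrative placeholders;
# replace the project ID and zone with your own).
cat > terraform.tfvars <<'EOF'
prefix  = null
labels  = {}
fsname  = "exacloud"
project = "my-project-id"
zone    = "us-central1-f"
EOF

# Confirm the file was written with the expected filesystem name.
grep fsname terraform.tfvars
```

The remaining options described below keep their defaults unless they are set explicitly in this file.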
Variable | Type | Default | Description |
---|---|---|---|
prefix | string | null | EXAScaler Cloud custom deployment prefix. Set this option to add a custom prefix to all created objects. |
labels | map | {} | EXAScaler Cloud custom deployment labels. Set this option to add custom labels to all created objects. |
fsname | string | exacloud | EXAScaler Cloud filesystem name. |
project | string | project-id | Project ID to manage resources. Learn more. |
zone | string | us-central1-f | Zone name to manage resources. Learn more. |
A service account is a special account that can be used by services and applications running on Google Compute Engine instances to interact with other Google Cloud Platform APIs. Learn more. EXAScaler Cloud deployments use service account credentials to authorize themselves to a set of APIs and perform actions within the permissions granted to the service account and virtual machine instances. All projects are created with the Compute Engine default service account and this account is assigned the editor role. Google recommends that each instance that needs to call a Google API should run as a service account with the minimum required permissions. Three options are available for EXAScaler Cloud deployment:
- Use the Compute Engine default service account
- Use an existing custom service account (consider the list of required permissions)
- Create a new custom service account and assign it the minimum required privileges
Variable | Type | Default | Description |
---|---|---|---|
service_account.new | bool | false | Create a new custom service account and assign it the minimum required privileges, or use an existing service account: true or false . |
service_account.email | string | null | Existing service account email address, used only if service_account.new is false . Set email = null to use the default Compute Engine service account. Learn more. |
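As an illustrative sketch of the third option (creating a new custom service account), the service_account block in terraform.tfvars could look like this; the object layout follows the variable names above and should be checked against the shipped terraform.tfvars:

```hcl
# Create a new custom service account with the minimum required privileges.
service_account = {
  new   = true
  email = null
}
```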
Variable | Type | Default | Description |
---|---|---|---|
waiter | string | deploymentmanager | Waiter to check progress and result for deployment. To use Google Deployment Manager, set waiter = "deploymentmanager" . To use the generic Google Cloud SDK command line, set waiter = "sdk" . If you don't want to wait until the deployment is complete, set waiter = null . Learn more. |
Variable | Type | Default | Description |
---|---|---|---|
security.admin | string | stack | Optional user name for remote SSH access. Set admin = null to disable creation of the admin user. Learn more. |
security.public_key | string | ~/.ssh/id_rsa.pub | Path to the SSH public key on the local host. Set public_key = null to disable creation of the admin user. Learn more. |
security.block_project_keys | bool | true | Block project-wide public SSH keys to restrict deployment access to users with a deployment-level public SSH key. Learn more. |
security.enable_os_login | bool | false | true or false : enable or disable OS Login. Note that enabling this option disables the other security options: security.admin , security.public_key and security.block_project_keys . Learn more. |
security.enable_local | bool | true | true or false : enable or disable firewall rules to allow local traffic (TCP/988 and TCP/80). |
security.enable_ssh | bool | true | true or false : enable or disable remote SSH access. Learn more. |
security.enable_http | bool | true | true or false : enable or disable the remote HTTP console. Learn more. |
security.ssh_source_ranges | list(string) | [0.0.0.0/0] | Source IP ranges for remote SSH access in CIDR notation. Learn more. |
security.http_source_ranges | list(string) | [0.0.0.0/0] | Source IP ranges for remote HTTP access in CIDR notation. Learn more. |
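For example, remote SSH and HTTP access can be restricted to a single trusted network while keeping the other defaults; the CIDR range below is a documentation placeholder, not a real network:

```hcl
security = {
  admin              = "stack"
  public_key         = "~/.ssh/id_rsa.pub"
  block_project_keys = true
  enable_os_login    = false
  enable_local       = true
  enable_ssh         = true
  enable_http        = true
  ssh_source_ranges  = ["203.0.113.0/24"]  # placeholder office network
  http_source_ranges = ["203.0.113.0/24"]  # placeholder office network
}
```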
Variable | Type | Default | Description |
---|---|---|---|
network.routing | string | REGIONAL | Network-wide routing mode: REGIONAL or GLOBAL . Learn more. |
network.tier | string | STANDARD | Networking tier for network interfaces: STANDARD or PREMIUM . Learn more. |
network.id | string | projects/project-id/global/networks/network-name | Existing network id , used only if network.new is false . Learn more. |
network.auto | bool | false | Create subnets in each region automatically: true or false . Learn more. |
network.mtu | integer | 1500 | Maximum transmission unit in bytes: 1460 - 1500. Learn more. |
network.new | bool | true | Create a new network, or use an existing network: true or false . |
network.nat | bool | true | Allow instances without external IP addresses to communicate with the outside world: true or false . Learn more. |
Variable | Type | Default | Description |
---|---|---|---|
subnetwork.address | string | 10.0.0.0/16 | IP address range in CIDR notation of internal addresses for a new or existing subnetwork. |
subnetwork.private | bool | true | When enabled, VMs in this subnetwork without external IP addresses can access Google APIs and services using Private Google Access: true or false . Learn more. |
subnetwork.id | string | projects/project-id/regions/region-name/subnetworks/subnetwork-name | Existing subnetwork id , used only if subnetwork.new is false . |
subnetwork.new | bool | true | Create a new subnetwork, or use an existing subnetwork: true or false . |
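A sketch of reusing an existing network and subnetwork (the project, region, network, and subnetwork names are placeholders to be replaced with your own resource IDs):

```hcl
network = {
  routing = "REGIONAL"
  tier    = "STANDARD"
  id      = "projects/my-project-id/global/networks/my-network"
  auto    = false
  mtu     = 1500
  new     = false
  nat     = true
}

subnetwork = {
  address = "10.0.0.0/16"
  private = true
  id      = "projects/my-project-id/regions/us-central1/subnetworks/my-subnetwork"
  new     = false
}
```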
Note: to provide access to the Google Cloud API, one of the following conditions must be met:
- the subnetwork must be configured with Private Google Access enabled
- all VM instances must have external IP addresses
- the NAT option must be enabled
Variable | Type | Default | Description |
---|---|---|---|
boot.disk_type | string | pd-standard | Boot disk type: |
boot.script_url | string | null | User-defined startup script stored in Cloud Storage. Learn more. |
Variable | Type | Default | Description |
---|---|---|---|
image.project | string | ddn-public | Source project name. Learn more. |
image.family | string | exascaler-cloud-6-3-rocky-linux-8 | Source image family to create the virtual machine. EXAScaler Cloud 5.2 images: |
Variable | Type | Default | Description |
---|---|---|---|
mgs.node_type | string | n2-standard-2 | Type of management server. Learn more. |
mgs.node_cpu | string | Intel Cascade Lake | CPU platform. Learn more. |
mgs.nic_type | string | GVNIC | Type of network interface: GVNIC or VIRTIO_NET . Learn more. |
mgs.public_ip | bool | true | Assign an external IP address: true or false . |
mgs.node_count | integer | 1 | Number of management servers: 1 . |
Variable | Type | Default | Description |
---|---|---|---|
mgt.disk_bus | string | SCSI | Type of management target interface: SCSI or NVME (NVME can be used for scratch disks only). Learn more. |
mgt.disk_type | string | pd-standard | Type of management target: |
mgt.disk_iops | integer | null | Provisioned IOPS, only for use with disks of type pd-extreme , hyperdisk-balanced or hyperdisk-extreme . |
mgt.disk_mbps | integer | null | Provisioned throughput in MB per second, only for use with disks of type hyperdisk-balanced or hyperdisk-throughput . |
mgt.disk_size | integer | 128 | Size of management target in GB (ignored for scratch disks: local SSD size is 375GB). Learn more. |
mgt.disk_count | integer | 1 | Number of management targets: 1-128 . Learn more. |
mgt.disk_raid | bool | false | Create striped management target: true or false . |
Variable | Type | Default | Description |
---|---|---|---|
mnt.disk_bus | string | SCSI | Type of monitoring target interface: SCSI or NVME (NVME can be used for scratch disks only). Learn more. |
mnt.disk_type | string | pd-standard | Type of monitoring target: |
mnt.disk_iops | integer | null | Provisioned IOPS, only for use with disks of type pd-extreme , hyperdisk-balanced or hyperdisk-extreme . |
mnt.disk_mbps | integer | null | Provisioned throughput in MB per second, only for use with disks of type hyperdisk-balanced or hyperdisk-throughput . |
mnt.disk_size | integer | 128 | Size of monitoring target in GB (ignored for scratch disks: local SSD size is 375GB). Learn more. |
mnt.disk_count | integer | 1 | Number of monitoring targets: 1-128 . Learn more. |
mnt.disk_raid | bool | false | Create striped monitoring target: true or false . |
Variable | Type | Default | Description |
---|---|---|---|
mds.node_type | string | n2-standard-2 | Type of metadata server. Learn more. |
mds.node_cpu | string | Intel Cascade Lake | CPU platform. Learn more. |
mds.nic_type | string | GVNIC | Type of network interface: GVNIC or VIRTIO_NET . Learn more. |
mds.public_ip | bool | true | Assign an external IP address: true or false . |
mds.node_count | integer | 1 | Number of metadata servers: 1-32 . |
Variable | Type | Default | Description |
---|---|---|---|
mdt.disk_bus | string | SCSI | Type of metadata target interface: SCSI or NVME (NVME can be used for scratch disks only). Learn more. |
mdt.disk_type | string | pd-ssd | Type of metadata target: |
mdt.disk_iops | integer | null | Provisioned IOPS, only for use with disks of type pd-extreme , hyperdisk-balanced or hyperdisk-extreme . |
mdt.disk_mbps | integer | null | Provisioned throughput in MB per second, only for use with disks of type hyperdisk-balanced or hyperdisk-throughput . |
mdt.disk_size | integer | 256 | Size of metadata target in GB (ignored for scratch disks: local SSD size is 375GB). Learn more. |
mdt.disk_count | integer | 1 | Number of metadata targets: 1-128 . Learn more. |
mdt.disk_raid | bool | false | Create striped metadata target: true or false . |
Variable | Type | Default | Description |
---|---|---|---|
oss.node_type | string | n2-standard-2 | Type of object storage server. Learn more. |
oss.node_cpu | string | Intel Cascade Lake | CPU platform. Learn more. |
oss.nic_type | string | GVNIC | Type of network interface: GVNIC or VIRTIO_NET . Learn more. |
oss.public_ip | bool | true | Assign an external IP address: true or false . |
oss.node_count | integer | 1 | Number of object storage servers: 1-2000 . |
Variable | Type | Default | Description |
---|---|---|---|
ost.disk_bus | string | SCSI | Type of object storage target interface: SCSI or NVME (NVME can be used for scratch disks only). Learn more. |
ost.disk_type | string | pd-standard | Type of object storage target: |
ost.disk_iops | integer | null | Provisioned IOPS, only for use with disks of type pd-extreme , hyperdisk-balanced or hyperdisk-extreme . |
ost.disk_mbps | integer | null | Provisioned throughput in MB per second, only for use with disks of type hyperdisk-balanced or hyperdisk-throughput . |
ost.disk_size | integer | 512 | Size of object storage target in GB (ignored for scratch disks: local SSD size is 375GB). Learn more. |
ost.disk_count | integer | 1 | Number of object storage targets: 1-128 . Learn more. |
ost.disk_raid | bool | false | Create striped object storage target: true or false . |
Variable | Type | Default | Description |
---|---|---|---|
cls.node_type | string | n2-standard-2 | Type of compute client. Learn more. |
cls.node_cpu | string | Intel Cascade Lake | CPU platform. Learn more. |
cls.nic_type | string | GVNIC | Type of network interface: GVNIC or VIRTIO_NET . Learn more. |
cls.public_ip | bool | true | Assign an external IP address: true or false . |
cls.node_count | integer | 1 | Number of compute clients: 0 or more. |
Variable | Type | Default | Description |
---|---|---|---|
clt.disk_bus | string | SCSI | Type of compute target interface: SCSI or NVME (NVME can be used for scratch disks only). Learn more. |
clt.disk_type | string | pd-standard | Type of compute target: |
clt.disk_iops | integer | null | Provisioned IOPS, only for use with disks of type pd-extreme , hyperdisk-balanced or hyperdisk-extreme . |
clt.disk_mbps | integer | null | Provisioned throughput in MB per second, only for use with disks of type hyperdisk-balanced or hyperdisk-throughput . |
clt.disk_size | integer | 256 | Size of compute target in GB (ignored for scratch disks: local SSD size is 375GB). Learn more. |
clt.disk_count | integer | 0 | Number of compute targets: 0-128 . Learn more. |
Initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times:
terraform init
Output:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google-beta...
- Finding latest version of hashicorp/null...
- Finding latest version of hashicorp/random...
- Finding latest version of hashicorp/template...
- Installing hashicorp/google-beta v4.1.0...
- Installed hashicorp/google-beta v4.1.0 (signed by HashiCorp)
- Installing hashicorp/null v3.1.0...
- Installed hashicorp/null v3.1.0 (signed by HashiCorp)
- Installing hashicorp/random v3.1.0...
- Installed hashicorp/random v3.1.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Validate configuration options:
terraform validate
Output:
$ terraform validate
Success! The configuration is valid.
Create an execution plan with a preview of the changes that Terraform will make to the environment:
terraform plan
Apply the changes required to reach the desired state of the configuration:
terraform apply
Output:
$ terraform apply
...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
...
Apply complete! Resources: 112 added, 0 changed, 0 destroyed.
Outputs:
http_console = "http://35.208.94.252"
mount_command = "mount -t lustre 10.0.0.22@tcp:/exacloud /mnt/exacloud"
private_addresses = {
"exascaler-cloud-2db9-cls0" = "10.0.0.19"
"exascaler-cloud-2db9-cls1" = "10.0.0.23"
"exascaler-cloud-2db9-cls2" = "10.0.0.21"
"exascaler-cloud-2db9-cls3" = "10.0.0.9"
"exascaler-cloud-2db9-cls4" = "10.0.0.25"
"exascaler-cloud-2db9-cls5" = "10.0.0.18"
"exascaler-cloud-2db9-cls6" = "10.0.0.20"
"exascaler-cloud-2db9-cls7" = "10.0.0.2"
"exascaler-cloud-2db9-mds0" = "10.0.0.24"
"exascaler-cloud-2db9-mgs0" = "10.0.0.22"
"exascaler-cloud-2db9-oss0" = "10.0.0.7"
"exascaler-cloud-2db9-oss1" = "10.0.0.3"
"exascaler-cloud-2db9-oss10" = "10.0.0.14"
"exascaler-cloud-2db9-oss11" = "10.0.0.4"
"exascaler-cloud-2db9-oss12" = "10.0.0.16"
"exascaler-cloud-2db9-oss13" = "10.0.0.11"
"exascaler-cloud-2db9-oss14" = "10.0.0.13"
"exascaler-cloud-2db9-oss15" = "10.0.0.27"
"exascaler-cloud-2db9-oss2" = "10.0.0.8"
"exascaler-cloud-2db9-oss3" = "10.0.0.17"
"exascaler-cloud-2db9-oss4" = "10.0.0.15"
"exascaler-cloud-2db9-oss5" = "10.0.0.26"
"exascaler-cloud-2db9-oss6" = "10.0.0.5"
"exascaler-cloud-2db9-oss7" = "10.0.0.12"
"exascaler-cloud-2db9-oss8" = "10.0.0.10"
"exascaler-cloud-2db9-oss9" = "10.0.0.6"
}
ssh_console = {
"exascaler-cloud-2db9-mgs0" = "ssh -A stack@35.208.94.252"
}
Now you can access the EXAScaler Cloud environment:
eval $(ssh-agent)
ssh-add
Output:
$ eval $(ssh-agent)
Agent pid 18111
$ ssh-add
Identity added: /home/user/.ssh/id_rsa
$ ssh -A stack@35.208.94.252
[stack@exascaler-cloud-2db9-mgs0 ~]$ df -h -t lustre
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 124G 2.3M 123G 1% /mnt/targets/MGS
[stack@exascaler-cloud-2db9-mgs0 ~]$ loci hosts
10.0.0.19 exascaler-cloud-2db9-cls0
10.0.0.23 exascaler-cloud-2db9-cls1
10.0.0.21 exascaler-cloud-2db9-cls2
10.0.0.9 exascaler-cloud-2db9-cls3
10.0.0.25 exascaler-cloud-2db9-cls4
10.0.0.18 exascaler-cloud-2db9-cls5
10.0.0.20 exascaler-cloud-2db9-cls6
10.0.0.2 exascaler-cloud-2db9-cls7
10.0.0.24 exascaler-cloud-2db9-mds0
10.0.0.22 exascaler-cloud-2db9-mgs0
10.0.0.7 exascaler-cloud-2db9-oss0
10.0.0.3 exascaler-cloud-2db9-oss1
10.0.0.14 exascaler-cloud-2db9-oss10
10.0.0.4 exascaler-cloud-2db9-oss11
10.0.0.16 exascaler-cloud-2db9-oss12
10.0.0.11 exascaler-cloud-2db9-oss13
10.0.0.13 exascaler-cloud-2db9-oss14
10.0.0.27 exascaler-cloud-2db9-oss15
10.0.0.8 exascaler-cloud-2db9-oss2
10.0.0.17 exascaler-cloud-2db9-oss3
10.0.0.15 exascaler-cloud-2db9-oss4
10.0.0.26 exascaler-cloud-2db9-oss5
10.0.0.5 exascaler-cloud-2db9-oss6
10.0.0.12 exascaler-cloud-2db9-oss7
10.0.0.10 exascaler-cloud-2db9-oss8
10.0.0.6 exascaler-cloud-2db9-oss9
[stack@exascaler-cloud-2db9-mgs0 ~]$ ssh exascaler-cloud-2db9-cls0
[stack@exascaler-cloud-2db9-cls0 ~]$ lfs df
UUID 1K-blocks Used Available Use% Mounted on
exacloud-MDT0000_UUID 315302464 6020 309927736 1% /mnt/exacloud[MDT:0]
exacloud-OST0000_UUID 3712813504 1260 3675214900 1% /mnt/exacloud[OST:0]
exacloud-OST0001_UUID 3712813504 1264 3675214896 1% /mnt/exacloud[OST:1]
exacloud-OST0002_UUID 3712813504 1264 3675214896 1% /mnt/exacloud[OST:2]
exacloud-OST0003_UUID 3712813504 1268 3675214892 1% /mnt/exacloud[OST:3]
exacloud-OST0004_UUID 3712813504 1264 3675214896 1% /mnt/exacloud[OST:4]
exacloud-OST0005_UUID 3712813504 1256 3675214904 1% /mnt/exacloud[OST:5]
exacloud-OST0006_UUID 3712813504 1256 3675214904 1% /mnt/exacloud[OST:6]
exacloud-OST0007_UUID 3712813504 1260 3675214900 1% /mnt/exacloud[OST:7]
exacloud-OST0008_UUID 3712813504 1260 3675214900 1% /mnt/exacloud[OST:8]
exacloud-OST0009_UUID 3712813504 1260 3675214900 1% /mnt/exacloud[OST:9]
exacloud-OST000a_UUID 3712813504 1260 3675214900 1% /mnt/exacloud[OST:10]
exacloud-OST000b_UUID 3712813504 1268 3675214892 1% /mnt/exacloud[OST:11]
exacloud-OST000c_UUID 3712813504 1264 3675214896 1% /mnt/exacloud[OST:12]
exacloud-OST000d_UUID 3712813504 1268 3675214892 1% /mnt/exacloud[OST:13]
exacloud-OST000e_UUID 3712813504 1264 3675214896 1% /mnt/exacloud[OST:14]
exacloud-OST000f_UUID 3712813504 1256 3675214904 1% /mnt/exacloud[OST:15]
filesystem_summary: 59405016064 20192 58803438368 1% /mnt/exacloud
Storage capacity can be added by increasing the number of object storage servers. To add storage capacity to an existing EXAScaler Cloud environment, modify the terraform.tfvars
file and increase the number of object storage servers (the value of the oss.node_count
variable) as required:
$ diff -u terraform.tfvars.orig terraform.tfvars
--- terraform.tfvars.orig 2021-12-01 20:11:30.000000000 +0300
+++ terraform.tfvars 2021-12-01 20:11:43.000000000 +0300
@@ -202,7 +202,7 @@
node_cpu = "Intel Cascade Lake"
nic_type = "GVNIC"
public_ip = false
- node_count = 16
+ node_count = 24
}
# Object Storage target properties
Then run the terraform apply
command to increase the storage capacity. The available storage capacity (in GB) can be calculated by multiplying the three configuration parameters:
capacity = oss.node_count * ost.disk_count * ost.disk_size
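As a quick sanity check of this formula from the shell, using the 16 object storage servers shown in the original diff above together with the default ost values (one target per server, 512 GB per target):

```shell
# capacity (GB) = oss.node_count * ost.disk_count * ost.disk_size
oss_node_count=16   # number of object storage servers
ost_disk_count=1    # object storage targets per server
ost_disk_size=512   # size of each target in GB
echo $((oss_node_count * ost_disk_count * ost_disk_size))   # prints 8192
```

Increasing oss.node_count to 24 in the same configuration would raise the capacity to 12288 GB.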
A software upgrade of an existing EXAScaler Cloud environment is possible by recreating the running VM instances using a new version of the OS image. This requires a few manual steps.
Create a backup copy of the existing Terraform directory (*.tf
, terraform.tfvars
and terraform.tfstate
files):
cd /path/to/exascaler-cloud-terraform-scripts-x.y.z/gcp
tar pcfz backup.tgz *.tf terraform.tfvars terraform.tfstate
Update Terraform scripts using the latest available EXAScaler Cloud Terraform scripts:
cd /path/to
curl -sL https://github.com/DDNStorage/exascaler-cloud-terraform/archive/refs/tags/scripts/2.2.0.tar.gz | tar xz
cd exascaler-cloud-terraform-scripts-2.2.0/gcp
Copy the terraform.tfstate
file from the existing Terraform directory:
cp -iv /path/to/exascaler-cloud-terraform-scripts-x.y.z/gcp/terraform.tfstate .
Review and update the terraform.tfvars
file using the configuration options of the existing environment:
diff -u /path/to/exascaler-cloud-terraform-scripts-x.y.z/gcp/terraform.tfvars terraform.tfvars
vi terraform.tfvars
Review the execution plan to make sure all changes are expected:
terraform plan
Apply the changes required to upgrade the existing EXAScaler Cloud environment by recreating all instances using the latest version of the EXAScaler Cloud image:
terraform apply
Steps to run the IOR benchmark on the EXAScaler Cloud deployment:
- Run ssh-agent
- Add ssh private key
- Open an SSH session to the EXAScaler Cloud management server
- Run the IOR benchmark using the exascaler-cloud-ior command
eval $(ssh-agent)
ssh-add
ssh -A stack@35.208.94.252
exascaler-cloud-ior
Output:
$ eval $(ssh-agent)
Agent pid 97037
$ ssh-add
Identity added: /home/user/.ssh/id_rsa
$ ssh -A stack@35.208.94.252
[stack@exascaler-cloud-2db9-mgs0 ~]$ exascaler-cloud-ior
IOR-3.3.0: MPI Coordinated Test of Parallel I/O
Began : Wed Dec 1 17:29:55 2021
Command line : /usr/bin/ior -C -F -e -r -w -a POSIX -b 16777216 -t 1048576 -s 539 -o /mnt/exacloud/ceb67656ef7da04e/ceb67656ef7da04e
Machine : Linux exascaler-cloud-2db9-cls0
TestID : 0
StartTime : Wed Dec 1 17:29:55 2021
Path : /mnt/exacloud/ceb67656ef7da04e
FS : 55.3 TiB Used FS: 0.0% Inodes: 204.8 Mi Used Inodes: 0.0%
Options:
api : POSIX
apiVersion :
test filename : /mnt/exacloud/ceb67656ef7da04e/ceb67656ef7da04e
access : file-per-process
type : independent
segments : 539
ordering in a file : sequential
ordering inter file : constant task offset
task offset : 1
nodes : 8
tasks : 64
clients per node : 8
repetitions : 1
xfersize : 1 MiB
blocksize : 16 MiB
aggregate filesize : 539 GiB
Results:
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 6335 6335 5.44 16384 1024.00 0.016925 87.13 3.56 87.13 0
read 7438 7438 4.59 16384 1024.00 0.081018 74.20 20.08 74.20 0
remove - - - - - - - - 1.30 0
Max Write: 6334.77 MiB/sec (6642.49 MB/sec)
Max Read: 7438.33 MiB/sec (7799.66 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 6334.77 6334.77 6334.77 0.00 6334.77 6334.77 6334.77 0.00 87.12805 NA NA 0 64 8 1 1 1 1 0 0 539 16777216 1048576 551936.0 POSIX 0
read 7438.33 7438.33 7438.33 0.00 7438.33 7438.33 7438.33 0.00 74.20155 NA NA 0 64 8 1 1 1 1 0 0 539 16777216 1048576 551936.0 POSIX 0
Finished : Wed Dec 1 17:32:38 2021
Steps to run the mdtest benchmark on the EXAScaler Cloud deployment:
- Run ssh-agent
- Add ssh private key
- Open an SSH session to the EXAScaler Cloud management server
- Run the mdtest benchmark using the exascaler-cloud-mdtest command
eval $(ssh-agent)
ssh-add
ssh -A stack@35.208.94.252
exascaler-cloud-mdtest
Output:
$ eval $(ssh-agent)
Agent pid 97079
$ ssh-add
Identity added: /home/user/.ssh/id_rsa
$ ssh -A stack@35.208.94.252
[stack@exascaler-cloud-2db9-mgs0 ~]$ exascaler-cloud-mdtest
-- started at 12/01/2021 17:34:01 --
mdtest-3.3.0 was launched with 64 total task(s) on 8 node(s)
Command line used: /usr/bin/mdtest '-n' '2048' '-i' '3' '-d' '/mnt/exacloud/b10eab2f2e7ccbd3'
Path: /mnt/exacloud
FS: 55.3 TiB Used FS: 0.0% Inodes: 204.8 Mi Used Inodes: 0.0%
Nodemap: 1111111100000000000000000000000000000000000000000000000000000000
64 tasks, 131072 files/directories
SUMMARY rate: (of 3 iterations)
Operation Max Min Mean Std Dev
--------- --- --- ---- -------
Directory creation : 26871.818 16015.007 22434.966 4648.316
Directory stat : 36404.916 33826.426 34857.190 1113.203
Directory removal : 28108.093 24667.200 26016.682 1498.696
File creation : 22421.537 13454.293 19186.060 4063.924
File stat : 47499.280 46180.116 46829.212 536.436
File read : 28638.415 28146.821 28323.202 222.232
File removal : 18023.544 17765.493 17866.900 111.632
Tree creation : 2113.490 1218.897 1728.448 375.678
Tree removal : 276.874 155.710 229.802 53.027
-- finished at 12/01/2021 17:35:52 --
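The interactive steps above can also be scripted from a workstation. The sketch below is an assumption, not part of the toolkit: it writes a small helper script (to a temporary path) that starts a throwaway SSH agent and runs the benchmark on the management server, using the example address 35.208.94.252 from the transcript above; here the script is only syntax-checked.

```shell
# Sketch: run the mdtest benchmark non-interactively.
# Assumes the management server address (35.208.94.252, as in the example
# above) and that the matching private key is available to ssh-add.
cat > /tmp/run-mdtest.sh <<'EOF'
#!/bin/sh
set -e
MGS=${1:-stack@35.208.94.252}
eval "$(ssh-agent)"     # start a throwaway agent
ssh-add                 # load the default private key
ssh -A "$MGS" exascaler-cloud-mdtest
EOF
chmod +x /tmp/run-mdtest.sh
bash -n /tmp/run-mdtest.sh && echo 'syntax OK'
```

The same pattern applies to the IO500 and IOR wrappers, substituting the corresponding command name.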
Steps to run the IO500 benchmark on the EXAScaler Cloud deployment:
- Run ssh-agent
- Add the SSH private key
- Open an SSH session to the EXAScaler Cloud management server
- Open an SSH session to any EXAScaler Cloud compute host
- Run the IO500 benchmark using the exascaler-cloud-io500 command
eval $(ssh-agent)
ssh-add
ssh -A stack@35.208.94.252
loci hosts -c
ssh -A exascaler-cloud-2db9-cls0
exascaler-cloud-io500
Output:
$ eval $(ssh-agent)
Agent pid 97092
$ ssh-add
Identity added: /home/user/.ssh/id_rsa
$ ssh -A stack@35.208.94.252
[stack@exascaler-cloud-2db9-mgs0 ~]$ loci hosts
10.0.0.19 exascaler-cloud-2db9-cls0
10.0.0.23 exascaler-cloud-2db9-cls1
10.0.0.21 exascaler-cloud-2db9-cls2
10.0.0.9 exascaler-cloud-2db9-cls3
10.0.0.25 exascaler-cloud-2db9-cls4
10.0.0.18 exascaler-cloud-2db9-cls5
10.0.0.20 exascaler-cloud-2db9-cls6
10.0.0.2 exascaler-cloud-2db9-cls7
10.0.0.24 exascaler-cloud-2db9-mds0
10.0.0.22 exascaler-cloud-2db9-mgs0
10.0.0.7 exascaler-cloud-2db9-oss0
10.0.0.3 exascaler-cloud-2db9-oss1
10.0.0.14 exascaler-cloud-2db9-oss10
10.0.0.4 exascaler-cloud-2db9-oss11
10.0.0.16 exascaler-cloud-2db9-oss12
10.0.0.11 exascaler-cloud-2db9-oss13
10.0.0.13 exascaler-cloud-2db9-oss14
10.0.0.27 exascaler-cloud-2db9-oss15
10.0.0.8 exascaler-cloud-2db9-oss2
10.0.0.17 exascaler-cloud-2db9-oss3
10.0.0.15 exascaler-cloud-2db9-oss4
10.0.0.26 exascaler-cloud-2db9-oss5
10.0.0.5 exascaler-cloud-2db9-oss6
10.0.0.12 exascaler-cloud-2db9-oss7
10.0.0.10 exascaler-cloud-2db9-oss8
10.0.0.6 exascaler-cloud-2db9-oss9
$ ssh -A exascaler-cloud-2db9-cls0
[stack@exascaler-cloud-2db9-cls0 ~]$ exascaler-cloud-io500
Build IO500 package
Start IO500 benchmark with options:
Data directory: /mnt/exacloud/e143a2b031294f51/workload
Hosts list: 10.0.0.19,10.0.0.23,10.0.0.21,10.0.0.9,10.0.0.25,10.0.0.18,10.0.0.20,10.0.0.2
Processes per host: 8
Files per process: 129281
Number of tasks: 64
Number of segments: 459375
Block size: 56371445760
Transfer size: 1048576
IO500 version io500-sc20_v3
[RESULT] ior-easy-write 6.173493 GiB/s : time 329.228 seconds
[RESULT] mdtest-easy-write 21.002955 kIOPS : time 401.247 seconds
[RESULT] ior-hard-write 0.328757 GiB/s : time 544.235 seconds
[RESULT] mdtest-hard-write 9.818279 kIOPS : time 369.694 seconds
[RESULT] find 255.641154 kIOPS : time 46.224 seconds
[RESULT] ior-easy-read 6.647157 GiB/s : time 305.766 seconds
[RESULT] mdtest-easy-stat 63.395256 kIOPS : time 130.730 seconds
[RESULT] ior-hard-read 0.902677 GiB/s : time 198.271 seconds
[RESULT] mdtest-hard-stat 27.472745 kIOPS : time 128.768 seconds
[RESULT] mdtest-easy-delete 11.505903 kIOPS : time 719.628 seconds
[RESULT] mdtest-hard-read 10.385374 kIOPS : time 340.405 seconds
[RESULT] mdtest-hard-delete 7.021386 kIOPS : time 503.496 seconds
[SCORE] Bandwidth 1.868072 GiB/s : IOPS 22.952705 kiops : TOTAL 6.548076
The result files are stored in the directory: ./results/2021.12.01-18.23.49
Warning: please create a 'system-information.txt' description by
copying the information from https://vi4io.org/io500-info-creator/
‘./io500.sh’ -> ‘./results/2021.12.01-18.23.49/io500.sh’
‘config.ini’ -> ‘./results/2021.12.01-18.23.49/config.ini’
Created result tarball ./results/io500-exascaler-cloud-2db9-cls0-2021.12.01-18.23.49.tgz
/mnt/exacloud/e143a2b031294f51/sources/results
2021.12.01-18.23.49 io500-exascaler-cloud-2db9-cls0-2021.12.01-18.23.49.tgz
New EXAScaler Cloud client instances must be created in the same location and connected to the same virtual network and subnet. To allow network connections from new clients to the EXAScaler Cloud servers, assign a network tag to the client instances whose name matches the deployment prefix (for example, exascaler-cloud-2db9). The process of installing and configuring new clients can be performed automatically; all required information is contained in the Terraform output. To configure the EXAScaler Cloud filesystem on a new client instance, create a configuration file /etc/exascaler-cloud-client.cfg using the actual IP address of the management server:
{
"MountConfig": {
"ClientDevice": "10.0.0.10@tcp:/exacloud",
"Mountpoint": "/mnt/exacloud",
"PackageSource": "http://10.0.0.10/client-packages"
}
}
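A malformed configuration file is a common source of setup failures, so it can be worth checking the JSON syntax before running the client setup. The sketch below is an assumption, not part of the toolkit: it writes the file to a temporary path (on a real client it belongs at /etc/exascaler-cloud-client.cfg) and validates it with Python's standard json.tool; 10.0.0.10 stands in for the actual management server IP.

```shell
# Sketch: write the client configuration and verify it is valid JSON before
# running the setup. A temporary path is used here; on a real client the
# file belongs at /etc/exascaler-cloud-client.cfg.
CFG=/tmp/exascaler-cloud-client.cfg
cat > "$CFG" <<'EOF'
{
    "MountConfig": {
        "ClientDevice": "10.0.0.10@tcp:/exacloud",
        "Mountpoint": "/mnt/exacloud",
        "PackageSource": "http://10.0.0.10/client-packages"
    }
}
EOF
python3 -m json.tool "$CFG" > /dev/null && echo 'config is valid JSON'
```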
To install and set up the EXAScaler Cloud filesystem on a new client, run the following commands on the client with root privileges:
curl -fsSL http://10.0.0.10/exascaler-cloud-client-$(arch) -o /usr/sbin/exascaler-cloud-client
chmod +x /usr/sbin/exascaler-cloud-client
/usr/sbin/exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.cfg
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04 LTS
Release: 22.04
Codename: jammy
# /usr/sbin/exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.cfg
Discovering platform ... Done.
Configuring firewall rules for Lustre ... Done.
Configuring Lustre client package source ... Done.
Installing Lustre client packages and building DKMS modules ... Done.
Mounting 10.0.0.10@tcp0:/exacloud at /mnt/exacloud ... Done.
# mount -t lustre
10.0.0.10@tcp:/exacloud on /mnt/exacloud type lustre (rw,flock,user_xattr,lazystatfs,encrypt)
# cat /etc/redhat-release
AlmaLinux release 8.6 (Sky Tiger)
# /usr/sbin/exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.cfg
Discovering platform ... Done.
Configuring firewall rules for Lustre ... Done.
Configuring Lustre client package source ... Done.
Installing Lustre client packages ... Done.
Mounting 10.0.0.10@tcp0:/exacloud at /mnt/exacloud ... Done.
# mount -t lustre
10.0.0.10@tcp:/exacloud on /mnt/exacloud type lustre (rw,seclabel,flock,user_xattr,lazystatfs,encrypt)
Client-side encryption provides each user with a special directory in which to safely store sensitive files. The goals are to protect data in transit between clients and servers, and to protect data at rest.
This feature is implemented directly at the Lustre client level. Lustre client-side encryption relies on the kernel's fscrypt framework, a library that filesystems can hook into to support transparent encryption of files and directories. As a consequence, the key points described below are taken from the fscrypt documentation.
The client-side encryption feature is available natively on Lustre clients running Linux distributions with fscrypt support, including RHEL/CentOS 8.1 and later and Ubuntu 18.04 and later.
Client-side encryption supports encryption of both file contents and file and directory names. Name encryption is governed by the enable_filename_encryption parameter, which defaults to 0. When this parameter is 0, new empty directories configured as encrypted use content encryption only, not name encryption, and this mode is inherited by all subdirectories and files. When the parameter is set to 1, new empty directories configured as encrypted use the full encryption capabilities, encrypting file content as well as file and directory names, and this mode is likewise inherited by all subdirectories and files. To set the enable_filename_encryption parameter globally for all clients, run the following on the management server:
lctl set_param -P llite.*.enable_filename_encryption=1
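To confirm the change took effect, the parameter can be read back with lctl get_param. Since lctl is only present on Lustre nodes, the sketch below (an assumption, not part of the toolkit) writes a small helper script intended for the management server; here it is only written to a temporary path and syntax-checked.

```shell
# Sketch: a helper to set and then verify enable_filename_encryption
# cluster-wide. Assumes it runs on the management server where lctl is
# available; written to a temporary path and only syntax-checked here.
cat > /tmp/filename-encryption.sh <<'EOF'
#!/bin/sh
set -e
VALUE=${1:-1}    # 0 = content encryption only, 1 = names encrypted too
lctl set_param -P llite.*.enable_filename_encryption="$VALUE"
lctl get_param llite.*.enable_filename_encryption    # read back the value
EOF
chmod +x /tmp/filename-encryption.sh
bash -n /tmp/filename-encryption.sh && echo 'syntax OK'
```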
The fscrypt package is included in the EXAScaler Cloud client toolkit and can be installed using the exascaler-cloud-client command.
Steps to install the Lustre client and fscrypt packages:
cat > /etc/exascaler-cloud-client.cfg <<EOF
{
"MountConfig": {
"ClientDevice": "10.0.0.10@tcp:/exacloud",
"Mountpoint": "/mnt/exacloud",
"PackageSource": "http://10.0.0.10/client-packages"
}
}
EOF
curl -fsSL http://10.0.0.10/exascaler-cloud-client-$(arch) -o /usr/sbin/exascaler-cloud-client
chmod +x /usr/sbin/exascaler-cloud-client
/usr/sbin/exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.cfg
Output:
# /usr/sbin/exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.cfg
Discovering platform ... Done.
Configuring firewall rules for Lustre ... Done.
Configuring Lustre client package source ... Done.
Installing Lustre client packages ... Done.
Mounting 10.0.0.10@tcp0:/exacloud at /mnt/exacloud ... Done.
# rpm -q fscrypt lustre-client kmod-lustre-client
fscrypt-0.3.3-1.wc2.x86_64
lustre-client-2.14.0_ddn52-1.el8.x86_64
kmod-lustre-client-2.14.0_ddn52-1.el8.x86_64
Steps to configure client-side encryption:
$ sudo fscrypt setup
Defaulting to policy_version 2 because kernel supports it.
Customizing passphrase hashing difficulty for this system...
Created global config file at "/etc/fscrypt.conf".
Allow users other than root to create fscrypt metadata on the root filesystem? (See
https://github.com/google/fscrypt#setting-up-fscrypt-on-a-filesystem) [y/N]
Metadata directories created at "/.fscrypt", writable by root only.
$ sudo fscrypt setup /mnt/exacloud
Allow users other than root to create fscrypt metadata on this filesystem? (See
https://github.com/google/fscrypt#setting-up-fscrypt-on-a-filesystem) [y/N] y
Metadata directories created at "/mnt/exacloud/.fscrypt", writable by everyone.
Steps to encrypt a directory:
$ sudo install -v -d -m 0755 -o stack -g stack /mnt/exacloud/stack
install: creating directory '/mnt/exacloud/stack'
$ fscrypt encrypt /mnt/exacloud/stack
The following protector sources are available:
1 - Your login passphrase (pam_passphrase)
2 - A custom passphrase (custom_passphrase)
3 - A raw 256-bit key (raw_key)
Enter the source number for the new protector [2 - custom_passphrase]:
Enter a name for the new protector: test
Enter custom passphrase for protector "test":
Confirm passphrase:
"/mnt/exacloud/stack" is now encrypted, unlocked, and ready for use.
$ cp -v /etc/passwd /mnt/exacloud/stack/
'/etc/passwd' -> '/mnt/exacloud/stack/passwd'
$ ls -l /mnt/exacloud/stack/
total 1
-rw-r--r--. 1 stack stack 1610 Jul 13 20:34 passwd
$ md5sum /mnt/exacloud/stack/passwd
867541523c51f8cfd4af91988e9f8794 /mnt/exacloud/stack/passwd
Lock the directory:
$ fscrypt lock /mnt/exacloud/stack
"/mnt/exacloud/stack" is now locked.
$ ls -l /mnt/exacloud/stack
total 4
-rw-r--r--. 1 stack stack 4096 Jul 13 20:34 ydpdwRP7MiXzsTkYhg0mW3DNacDlsUJdJa2e9l6AQKL
$ md5sum /mnt/exacloud/stack/ydpdwRP7MiXzsTkYhg0mW3DNacDlsUJdJa2e9l6AQKL
md5sum: /mnt/exacloud/stack/ydpdwRP7MiXzsTkYhg0mW3DNacDlsUJdJa2e9l6AQKL: Required key not available
Unlock the directory:
$ fscrypt unlock /mnt/exacloud/stack
Enter custom passphrase for protector "test":
"/mnt/exacloud/stack" is now unlocked and ready for use.
$ ls -l /mnt/exacloud/stack
total 4
-rw-r--r--. 1 stack stack 1610 Jul 13 20:34 passwd
$ md5sum /mnt/exacloud/stack/passwd
867541523c51f8cfd4af91988e9f8794 /mnt/exacloud/stack/passwd
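Scripts that consume an encrypted directory may want to check its lock state first rather than fail mid-way with "Required key not available". The sketch below is an assumption, not part of the toolkit: it writes a helper (to a temporary path) that parses the `fscrypt status` report for the directory, assuming fscrypt reports an "Unlocked: Yes" line for unlocked directories; here the script is only syntax-checked.

```shell
# Sketch: check whether an encrypted directory is currently unlocked before
# using it. Assumes fscrypt is installed (as shown above); written to a
# temporary path and only syntax-checked here.
cat > /tmp/check-unlocked.sh <<'EOF'
#!/bin/sh
DIR=${1:-/mnt/exacloud/stack}
# "fscrypt status DIR" reports "Unlocked: Yes" for an unlocked directory
if fscrypt status "$DIR" | grep -q 'Unlocked: Yes'; then
    echo "$DIR is unlocked"
else
    echo "$DIR is locked (or not encrypted)" >&2
    exit 1
fi
EOF
chmod +x /tmp/check-unlocked.sh
bash -n /tmp/check-unlocked.sh && echo 'syntax OK'
```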
Learn more about client-side encryption.
Steps to collect an inventory and support bundle on the EXAScaler Cloud deployment:
- Run ssh-agent
- Add the SSH private key
- Open an SSH session to the EXAScaler Cloud management server
- Collect an inventory using the about_this_deployment tool
- Collect a support bundle using the exascaler-cloud-collector command
eval $(ssh-agent)
ssh-add
ssh -A stack@35.208.94.252
about_this_deployment
exascaler-cloud-collector
Output:
$ eval $(ssh-agent)
Agent pid 97351
$ ssh-add
Identity added: /home/user/.ssh/id_rsa
$ ssh -A stack@35.208.94.252
[stack@exascaler-cloud-2db9-mgs0 ~]$ about_this_deployment
cloud: Google Compute Engine
zone: us-central1-f
project: exascaler-on-gcp
deployment: exascaler-cloud-2db9
filesystem: exacloud
capacityGB: 57344
profile: custom
instances:
- id: 550026747925107041
instanceName: exascaler-cloud-2db9-oss9
instanceType: n2-standard-8
cpuPlatform: Intel Cascade Lake
role: ost
interfaces:
- name: nic0
type: GVNIC
network: exascaler-cloud-2db9-network
subnet: exascaler-cloud-2db9-subnetwork
privateIpAddress: 10.0.0.6
disks:
- blockSize: 4096
status: READY
sourceImage: exascaler-cloud-v523-centos7
mode: READ_WRITE
bus: SCSI
boot: true
autoDelete: true
lun: 0
sizeGB: 20
name: exascaler-cloud-2db9-oss9-boot-disk
tier: PERSISTENT
type: pd-standard
- role: ost
blockSize: 4096
status: READY
mode: READ_WRITE
bus: SCSI
boot: false
autoDelete: false
lun: 1
sizeGB: 3584
name: exascaler-cloud-2db9-oss9-ost0-disk
tier: PERSISTENT
type: pd-standard
status: RUNNING
tags:
- exascaler-cloud-2db9
metadata:
- key: block-project-ssh-keys
value: true
...
[stack@exascaler-cloud-2db9-mgs0 ~]$ exascaler-cloud-collector
sos-collector (version 1.8)
This utility is used to collect sosreports from multiple nodes simultaneously.
It uses OpenSSH's ControlPersist feature to connect to nodes and run commands
remotely. If your system installation of OpenSSH is older than 5.6, please
upgrade.
An archive of sosreport tarballs collected from the nodes will be generated in
/var/tmp/sos-collector-OfipHI and may be provided to an appropriate support
representative.
The following is a list of nodes to collect from:
exascaler-cloud-2db9-cls0
exascaler-cloud-2db9-cls1
exascaler-cloud-2db9-cls2
exascaler-cloud-2db9-cls3
exascaler-cloud-2db9-cls4
exascaler-cloud-2db9-cls5
exascaler-cloud-2db9-cls6
exascaler-cloud-2db9-cls7
exascaler-cloud-2db9-mds0
exascaler-cloud-2db9-mgs0
exascaler-cloud-2db9-oss0
exascaler-cloud-2db9-oss1
exascaler-cloud-2db9-oss10
exascaler-cloud-2db9-oss11
exascaler-cloud-2db9-oss12
exascaler-cloud-2db9-oss13
exascaler-cloud-2db9-oss14
exascaler-cloud-2db9-oss15
exascaler-cloud-2db9-oss2
exascaler-cloud-2db9-oss3
exascaler-cloud-2db9-oss4
exascaler-cloud-2db9-oss5
exascaler-cloud-2db9-oss6
exascaler-cloud-2db9-oss7
exascaler-cloud-2db9-oss8
exascaler-cloud-2db9-oss9
Connecting to nodes...
Beginning collection of sosreports from 26 nodes, collecting a maximum of 2 concurrently
exascaler-cloud-2db9-mgs0 : Generating sosreport...
exascaler-cloud-2db9-oss7 : Generating sosreport...
exascaler-cloud-2db9-oss7 : Retrieving sosreport...
exascaler-cloud-2db9-oss7 : Successfully collected sosreport
exascaler-cloud-2db9-mgs0 : Retrieving sosreport...
exascaler-cloud-2db9-mgs0 : Successfully collected sosreport
exascaler-cloud-2db9-oss6 : Generating sosreport...
exascaler-cloud-2db9-oss5 : Generating sosreport...
exascaler-cloud-2db9-oss5 : Retrieving sosreport...
exascaler-cloud-2db9-oss6 : Retrieving sosreport...
exascaler-cloud-2db9-oss5 : Successfully collected sosreport
exascaler-cloud-2db9-oss6 : Successfully collected sosreport
exascaler-cloud-2db9-oss4 : Generating sosreport...
exascaler-cloud-2db9-oss3 : Generating sosreport...
exascaler-cloud-2db9-oss3 : Retrieving sosreport...
exascaler-cloud-2db9-oss4 : Retrieving sosreport...
exascaler-cloud-2db9-oss3 : Successfully collected sosreport
exascaler-cloud-2db9-oss4 : Successfully collected sosreport
exascaler-cloud-2db9-oss2 : Generating sosreport...
exascaler-cloud-2db9-oss1 : Generating sosreport...
exascaler-cloud-2db9-oss2 : Retrieving sosreport...
exascaler-cloud-2db9-oss1 : Retrieving sosreport...
exascaler-cloud-2db9-oss2 : Successfully collected sosreport
exascaler-cloud-2db9-oss1 : Successfully collected sosreport
exascaler-cloud-2db9-oss0 : Generating sosreport...
exascaler-cloud-2db9-oss8 : Generating sosreport...
exascaler-cloud-2db9-oss8 : Retrieving sosreport...
exascaler-cloud-2db9-oss0 : Retrieving sosreport...
exascaler-cloud-2db9-oss8 : Successfully collected sosreport
exascaler-cloud-2db9-oss0 : Successfully collected sosreport
exascaler-cloud-2db9-oss9 : Generating sosreport...
exascaler-cloud-2db9-oss15 : Generating sosreport...
exascaler-cloud-2db9-oss15 : Retrieving sosreport...
exascaler-cloud-2db9-oss15 : Successfully collected sosreport
exascaler-cloud-2db9-mds0 : Generating sosreport...
exascaler-cloud-2db9-oss9 : Retrieving sosreport...
exascaler-cloud-2db9-oss9 : Successfully collected sosreport
exascaler-cloud-2db9-oss14 : Generating sosreport...
exascaler-cloud-2db9-oss14 : Retrieving sosreport...
exascaler-cloud-2db9-oss14 : Successfully collected sosreport
exascaler-cloud-2db9-oss13 : Generating sosreport...
exascaler-cloud-2db9-mds0 : Retrieving sosreport...
exascaler-cloud-2db9-mds0 : Successfully collected sosreport
exascaler-cloud-2db9-oss12 : Generating sosreport...
exascaler-cloud-2db9-oss13 : Retrieving sosreport...
exascaler-cloud-2db9-oss13 : Successfully collected sosreport
exascaler-cloud-2db9-oss11 : Generating sosreport...
exascaler-cloud-2db9-oss12 : Retrieving sosreport...
exascaler-cloud-2db9-oss12 : Successfully collected sosreport
exascaler-cloud-2db9-cls0 : Generating sosreport...
exascaler-cloud-2db9-oss11 : Retrieving sosreport...
exascaler-cloud-2db9-oss11 : Successfully collected sosreport
exascaler-cloud-2db9-oss10 : Generating sosreport...
exascaler-cloud-2db9-cls0 : Retrieving sosreport...
exascaler-cloud-2db9-cls0 : Successfully collected sosreport
exascaler-cloud-2db9-cls1 : Generating sosreport...
exascaler-cloud-2db9-oss10 : Retrieving sosreport...
exascaler-cloud-2db9-oss10 : Successfully collected sosreport
exascaler-cloud-2db9-cls2 : Generating sosreport...
exascaler-cloud-2db9-cls1 : Retrieving sosreport...
exascaler-cloud-2db9-cls1 : Successfully collected sosreport
exascaler-cloud-2db9-cls3 : Generating sosreport...
exascaler-cloud-2db9-cls2 : Retrieving sosreport...
exascaler-cloud-2db9-cls2 : Successfully collected sosreport
exascaler-cloud-2db9-cls4 : Generating sosreport...
exascaler-cloud-2db9-cls3 : Retrieving sosreport...
exascaler-cloud-2db9-cls3 : Successfully collected sosreport
exascaler-cloud-2db9-cls5 : Generating sosreport...
exascaler-cloud-2db9-cls4 : Retrieving sosreport...
exascaler-cloud-2db9-cls4 : Successfully collected sosreport
exascaler-cloud-2db9-cls6 : Generating sosreport...
exascaler-cloud-2db9-cls5 : Retrieving sosreport...
exascaler-cloud-2db9-cls5 : Successfully collected sosreport
exascaler-cloud-2db9-cls7 : Generating sosreport...
exascaler-cloud-2db9-cls6 : Retrieving sosreport...
exascaler-cloud-2db9-cls6 : Successfully collected sosreport
exascaler-cloud-2db9-cls7 : Retrieving sosreport...
exascaler-cloud-2db9-cls7 : Successfully collected sosreport
Successfully captured 26 of 26 sosreports
Creating archive of sosreports...
The following archive has been created. Please provide it to your support team.
/var/tmp/sos-collector-2021-12-01-lyowl.tar.gz
The terraform destroy command is a convenient way to destroy all remote objects managed by a particular Terraform configuration:
terraform destroy
Output:
$ terraform destroy
...
Enter a value: yes
...
Destroy complete! Resources: 200 destroyed.
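For unattended teardown (for example, from CI), Terraform's -auto-approve flag skips the interactive "yes" confirmation shown above, so it should be used only when the consequences are certain. The sketch below is an assumption, not part of the toolkit: it writes a small wrapper (to a temporary path) that first shows a destroy plan for review; here the script is only syntax-checked.

```shell
# Sketch: non-interactive teardown, run against the directory holding the
# Terraform configuration. Written to a temporary path and only
# syntax-checked here; -auto-approve skips the confirmation prompt.
cat > /tmp/teardown.sh <<'EOF'
#!/bin/sh
set -e
cd "${1:?usage: teardown.sh <terraform-config-dir>}"
terraform plan -destroy             # review what will be removed
terraform destroy -auto-approve     # then remove it without prompting
EOF
chmod +x /tmp/teardown.sh
bash -n /tmp/teardown.sh && echo 'syntax OK'
```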