module.slurm_login.module.slurm_login_template.data.local_file.startup: Reading...
module.slurm_controller.module.slurm_controller_template.data.local_file.startup: Reading...
module.slurm_controller.module.slurm_controller_instance.data.local_file.slurmdbd_conf_tpl: Reading...
module.slurm_controller.module.slurm_controller_instance.data.local_file.cgroup_conf_tpl: Reading...
module.slurm_controller.module.slurm_controller_instance.data.local_file.slurm_conf_tpl: Reading...
module.slurm_controller.module.slurm_controller_instance.data.local_file.slurm_conf_tpl: Read complete after 0s [id=e8e7e4b694602db3b2c9420e455362df2d8bde77]
module.slurm_controller.module.slurm_controller_instance.data.local_file.cgroup_conf_tpl: Read complete after 0s [id=fe856d7ed3738c1cc82a2e7bca11f65de71f1e3b]
module.slurm_login.module.slurm_login_template.data.local_file.startup: Read complete after 0s [id=de68d872e4df054209706dbeee9bfec9dca89970]
module.slurm_controller.module.slurm_controller_instance.data.local_file.slurmdbd_conf_tpl: Read complete after 0s [id=45039ec29ef691f55da128333837adcb28bd19a0]
module.slurm_controller.module.slurm_controller_template.data.local_file.startup: Read complete after 0s [id=de68d872e4df054209706dbeee9bfec9dca89970]
module.hpc_network.data.google_compute_network.vpc: Reading...
module.slurm_login.data.google_compute_default_service_account.default: Reading...
module.slurm_controller.data.google_compute_default_service_account.default: Reading...
module.partition_0-group.data.google_compute_default_service_account.default: Reading...
module.partition_0.module.slurm_partition.data.google_compute_subnetwork.partition_subnetwork: Reading...
module.partition_0.data.google_compute_zones.available: Reading...
module.hpc_network.data.google_compute_subnetwork.primary_subnetwork: Reading...
module.partition_0.module.slurm_partition.data.google_compute_subnetwork.partition_subnetwork: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/regions/us-central1/subnetworks/default]
module.hpc_network.data.google_compute_network.vpc: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/global/networks/default]
module.hpc_network.data.google_compute_subnetwork.primary_subnetwork: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/regions/us-central1/subnetworks/default]
module.partition_0.data.google_compute_zones.available: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/regions/us-central1]
module.partition_0-group.data.google_compute_default_service_account.default: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/serviceAccounts/5206832155-compute@developer.gserviceaccount.com]
module.slurm_controller.data.google_compute_default_service_account.default: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/serviceAccounts/5206832155-compute@developer.gserviceaccount.com]
module.partition_0.module.slurm_partition.module.slurm_compute_template["ghpc"].data.local_file.startup: Reading...
module.slurm_login.data.google_compute_default_service_account.default: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/serviceAccounts/5206832155-compute@developer.gserviceaccount.com]
module.partition_0.module.slurm_partition.module.slurm_compute_template["ghpc"].data.local_file.startup: Read complete after 0s [id=de68d872e4df054209706dbeee9bfec9dca89970]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.instanceAdmin.v1"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/compute.instanceAdmin.v1"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.networkAdmin"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/compute.networkAdmin"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.securityAdmin"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/compute.securityAdmin"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/iam.serviceAccountAdmin"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/iam.serviceAccountAdmin"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/iam.serviceAccountUser"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/iam.serviceAccountUser"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/logging.logWriter"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/logging.logWriter"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/monitoring.metricWriter"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/monitoring.metricWriter"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/pubsub.publisher"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/pubsub.publisher"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/pubsub.subscriber"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/pubsub.subscriber"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/resourcemanager.projectIamAdmin"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/resourcemanager.projectIamAdmin"
    }

  # module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/storage.objectAdmin"] will be created
  + resource "google_project_iam_member" "project-roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "ucr-ursa-major-hpc-club"
      + role    = "roles/storage.objectAdmin"
    }

  # module.hpc_service_account.module.service_accounts.google_service_account.service_accounts["alpha-9ee243ed-sa"] will be created
  + resource "google_service_account" "service_accounts" {
      + account_id = "alpha-9ee243ed-sa"
      + disabled   = false
      + email      = (known after apply)
      + id         = (known after apply)
      + member     = (known after apply)
      + name       = (known after apply)
      + project    = "ucr-ursa-major-hpc-club"
      + unique_id  = (known after apply)
    }

  # module.partition_0.module.slurm_partition.data.google_compute_instance_template.group_template["ghpc"] will be read during apply
  # (config refers to values not yet known)
 <= data "google_compute_instance_template" "group_template" {
      + advanced_machine_features    = (known after apply)
      + can_ip_forward               = (known after apply)
      + confidential_instance_config = (known after apply)
      + description                  = (known after apply)
      + disk                         = (known after apply)
      + guest_accelerator            = (known after apply)
      + id                           = (known after apply)
      + instance_description         = (known after apply)
      + labels                       = (known after apply)
      + machine_type                 = (known after apply)
      + metadata                     = (known after apply)
      + metadata_fingerprint         = (known after apply)
      + metadata_startup_script      = (known after apply)
      + min_cpu_platform             = (known after apply)
      + name                         = (known after apply)
      + name_prefix                  = (known after apply)
      + network_interface            = (known after apply)
      + project                      = "ucr-ursa-major-hpc-club"
      + region                       = (known after apply)
      + reservation_affinity         = (known after apply)
      + resource_policies            = (known after apply)
      + scheduling                   = (known after apply)
      + self_link                    = (known after apply)
      + service_account              = (known after apply)
      + shielded_instance_config     = (known after apply)
      + tags                         = (known after apply)
      + tags_fingerprint             = (known after apply)
    }

  # module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"] will be created
  + resource "google_compute_project_metadata_item" "partition_startup_scripts" {
      + id      = (known after apply)
      + key     = "alpha9ee24-slurm-partition-batch-script-ghpc_startup_sh"
      + project = "ucr-ursa-major-hpc-club"
    }

  # module.partition_0.module.slurm_partition.null_resource.partition will be created
  + resource "null_resource" "partition" {
      + id       = (known after apply)
      + triggers = {}
    }

  # module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf will be created
  + resource "google_compute_project_metadata_item" "cgroup_conf" {
      + id      = (known after apply)
      + key     = "alpha9ee24-slurm-tpl-cgroup-conf"
"alpha9ee24-slurm-tpl-cgroup-conf" + project = "ucr-ursa-major-hpc-club" + value = <<-EOT # cgroup.conf # https://slurm.schedmd.com/cgroup.conf.html CgroupAutomount=no #CgroupMountpoint=/sys/fs/cgroup ConstrainCores=yes ConstrainRamSpace=yes ConstrainSwapSpace=no ConstrainDevices=yes EOT } # module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"] will be created + resource "google_compute_project_metadata_item" "compute_startup_scripts" { + id = (known after apply) + key = "alpha9ee24-slurm-compute-script-ghpc_startup_sh" + project = "ucr-ursa-major-hpc-club" + value = <<-EOT #!/bin/bash gsutil cp gs://test-us-central1-storage/clusters/115/bootstrap_compute.sh - | bash EOT } # module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config will be created + resource "google_compute_project_metadata_item" "config" { + id = (known after apply) + key = "alpha9ee24-slurm-config" + project = "ucr-ursa-major-hpc-club" + value = (known after apply) } # module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"] will be created + resource "google_compute_project_metadata_item" "controller_startup_scripts" { + id = (known after apply) + key = "alpha9ee24-slurm-controller-script-ghpc_startup_sh" + project = "ucr-ursa-major-hpc-club" + value = <<-EOT #!/bin/bash echo "******************************************** CALLING CONTROLLER STARTUP" gsutil cp gs://test-us-central1-storage/clusters/115/bootstrap_controller.sh - | bash EOT } # module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurm_conf will be created + resource "google_compute_project_metadata_item" "slurm_conf" { + id = (known after apply) + key = "alpha9ee24-slurm-tpl-slurm-conf" + project = "ucr-ursa-major-hpc-club" + value = <<-EOT # slurm.conf # https://slurm.schedmd.com/slurm.conf.html # https://slurm.schedmd.com/configurator.html ProctrackType=proctrack/cgroup SlurmctldPidFile=/var/run/slurm/slurmctld.pid SlurmdPidFile=/var/run/slurm/slurmd.pid TaskPlugin=task/affinity,task/cgroup # # # SCHEDULING SchedulerType=sched/backfill SelectType=select/cons_tres SelectTypeParameters=CR_Core_Memory # # # LOGGING AND ACCOUNTING JobAcctGatherFrequency=30 JobAcctGatherType=jobacct_gather/cgroup SlurmctldDebug=info SlurmdDebug=info ################################################################################ # vvvvv WARNING: DO NOT MODIFY SECTION BELOW vvvvv # ################################################################################ SlurmctldHost={control_host}({control_addr}) AuthType=auth/munge AuthInfo=cred_expire=120 AuthAltTypes=auth/jwt CredType=cred/munge MpiDefault={mpi_default} ReturnToService=2 SlurmctldPort={control_host_port} SlurmdPort=6818 SlurmdSpoolDir=/var/spool/slurmd SlurmUser=slurm StateSaveLocation={state_save} MaxNodeCount=64000 # # # TIMERS MessageTimeout=60 # # # LOGGING AND ACCOUNTING AccountingStorageType=accounting_storage/slurmdbd AccountingStorageHost={control_host} AccountingStoreFlags=job_comment ClusterName={name} SlurmctldLogFile={slurmlog}/slurmctld.log SlurmdLogFile={slurmlog}/slurmd-%n.log DebugFlags=Power # # # GENERATED CLOUD CONFIGURATIONS include cloud.conf ################################################################################ # ^^^^^ WARNING: DO NOT MODIFY SECTION ABOVE ^^^^^ # 
################################################################################ EOT } # module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf will be created + resource "google_compute_project_metadata_item" "slurmdbd_conf" { + id = (known after apply) + key = "alpha9ee24-slurm-tpl-slurmdbd-conf" + project = "ucr-ursa-major-hpc-club" + value = <<-EOT # slurmdbd.conf # https://slurm.schedmd.com/slurmdbd.conf.html DebugLevel=info PidFile=/var/run/slurm/slurmdbd.pid ################################################################################ # vvvvv WARNING: DO NOT MODIFY SECTION BELOW vvvvv # ################################################################################ AuthType=auth/munge AuthAltTypes=auth/jwt AuthAltParameters=jwt_key={state_save}/jwt_hs256.key DbdHost={control_host} LogFile={slurmlog}/slurmdbd.log SlurmUser=slurm StorageLoc={db_name} StorageType=accounting_storage/mysql StorageHost={db_host} StoragePort={db_port} StorageUser={db_user} StoragePass={db_pass} ################################################################################ # ^^^^^ WARNING: DO NOT MODIFY SECTION ABOVE ^^^^^ # ################################################################################ EOT } # module.slurm_controller.module.slurm_controller_instance.random_string.topic_suffix will be created + resource "random_string" "topic_suffix" { + id = (known after apply) + length = 8 + lower = true + min_lower = 0 + min_numeric = 0 + min_special = 0 + min_upper = 0 + number = true + numeric = true + result = (known after apply) + special = false + upper = true } # module.slurm_controller.module.slurm_controller_instance.random_uuid.cluster_id will be created + resource "random_uuid" "cluster_id" { + id = (known after apply) + result = (known after apply) } # module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"] will be created + resource "google_compute_project_metadata_item" "login_startup_scripts" { + id = (known after apply) + key = (known after apply) + project = "ucr-ursa-major-hpc-club" + value = <<-EOT #!/bin/bash echo "******************************************** CALLING LOGIN STARTUP" gsutil cp gs://test-us-central1-storage/clusters/115/bootstrap_login.sh - | bash EOT } # module.slurm_login.module.slurm_login_instance.random_string.suffix will be created + resource "random_string" "suffix" { + id = (known after apply) + length = 8 + lower = true + min_lower = 0 + min_numeric = 0 + min_special = 0 + min_upper = 0 + number = true + numeric = true + result = (known after apply) + special = false + upper = false } # module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.google_compute_instance_template.base will be read during apply # (config refers to values not yet known) <= data "google_compute_instance_template" "base" { + advanced_machine_features = (known after apply) + can_ip_forward = (known after apply) + confidential_instance_config = (known after apply) + description = (known after apply) + disk = (known after apply) + guest_accelerator = (known after apply) + id = (known after apply) + instance_description = (known after apply) + labels = (known after apply) + machine_type = (known after apply) + metadata = (known after apply) + metadata_fingerprint = (known after apply) + metadata_startup_script = (known after apply) + min_cpu_platform = (known after apply) + name = (known after apply) + name_prefix = (known after 
apply) + network_interface = (known after apply) + project = "ucr-ursa-major-hpc-club" + region = (known after apply) + reservation_affinity = (known after apply) + resource_policies = (known after apply) + scheduling = (known after apply) + self_link = (known after apply) + service_account = (known after apply) + shielded_instance_config = (known after apply) + tags = (known after apply) + tags_fingerprint = (known after apply) } # module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.google_compute_zones.available will be read during apply # (depends on a resource or a module with changes pending) <= data "google_compute_zones" "available" { + id = (known after apply) + names = (known after apply) + project = "ucr-ursa-major-hpc-club" + region = "us-central1" } # module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.local_file.startup will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "startup" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "/opt/gcluster/hpc-toolkit/community/front-end/clusters/cluster_115/alpha-9ee243ed/primary/.terraform/modules/slurm_controller.slurm_controller_instance/scripts/startup.sh" + id = (known after apply) } # module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.google_compute_instance_from_template.slurm_instance[0] will be created + resource "google_compute_instance_from_template" "slurm_instance" { + allow_stopping_for_update = true + attached_disk = (known after apply) + can_ip_forward = (known after apply) + cpu_platform = (known after apply) + current_status = (known after apply) + deletion_protection = (known after apply) + description = (known after apply) + desired_status = (known after apply) + enable_display = (known after apply) + guest_accelerator = (known after apply) + hostname = (known after apply) + id = (known after apply) + instance_id = (known after apply) + label_fingerprint = (known after apply) + labels = (known after apply) + machine_type = (known after apply) + metadata = (known after apply) + metadata_fingerprint = (known after apply) + metadata_startup_script = (known after apply) + min_cpu_platform = (known after apply) + name = "alpha9ee24-controller" + project = "ucr-ursa-major-hpc-club" + resource_policies = (known after apply) + scratch_disk = (known after apply) + self_link = (known after apply) + service_account = (known after apply) + source_instance_template = (known after apply) + tags = (known after apply) + tags_fingerprint = (known after apply) + zone = "us-central1-a" + network_interface { + access_config = (known after apply) + alias_ip_range = (known after apply) + ipv6_access_type = (known after apply) + name = (known after apply) + network = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/networks/default" + network_ip = (known after apply) + nic_type = (known after apply) + queue_count = (known after apply) + stack_type = (known after apply) + subnetwork = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/regions/us-central1/subnetworks/default" + subnetwork_project = (known after apply) } } # 
  # module.slurm_controller.module.slurm_controller_template.module.instance_template.google_compute_instance_template.tpl will be created
  + resource "google_compute_instance_template" "tpl" {
      + can_ip_forward = false
      + id             = (known after apply)
      + labels         = {
          + "created_by"          = "test-server"
          + "ghpc_blueprint"      = "alpha-9ee243ed"
          + "ghpc_deployment"     = "alpha-9ee243ed"
          + "ghpc_role"           = "scheduler"
          + "slurm_cluster_name"  = "alpha9ee24"
          + "slurm_instance_role" = "controller"
        }
      + machine_type   = "n2-standard-2"
      + metadata       = {
          + "VmDnsSetting"        = "GlobalOnly"
          + "enable-oslogin"      = "TRUE"
          + "slurm_cluster_name"  = "alpha9ee24"
          + "slurm_instance_role" = "controller"
        }
      + metadata_fingerprint = (known after apply)
      + metadata_startup_script = <<-EOT
            #!/bin/bash
            # Copyright (C) SchedMD LLC.
            #
            # Licensed under the Apache License, Version 2.0 (the "License");
            # you may not use this file except in compliance with the License.
            # You may obtain a copy of the License at
            #
            # http://www.apache.org/licenses/LICENSE-2.0
            #
            # Unless required by applicable law or agreed to in writing, software
            # distributed under the License is distributed on an "AS IS" BASIS,
            # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
            # See the License for the specific language governing permissions and
            # limitations under the License.

            set -e

            SLURM_DIR=/slurm
            FLAGFILE=$SLURM_DIR/slurm_configured_do_not_remove
            SCRIPTS_DIR=$SLURM_DIR/scripts

            METADATA_SERVER="metadata.google.internal"
            URL="http://$METADATA_SERVER/computeMetadata/v1"
            HEADER="Metadata-Flavor:Google"
            CURL="curl -sS --fail --header $HEADER"

            function fetch_scripts {
                # fetch project metadata
                if ! CLUSTER=$($CURL $URL/instance/attributes/slurm_cluster_name); then
                    echo "ERROR: cluster name not found in instance metadata. Quitting!"
                    return 1
                fi
                if ! META_DEVEL=$($CURL $URL/project/attributes/$CLUSTER-slurm-devel); then
                    echo "WARNING: $CLUSTER-slurm-devel not found in project metadata, skipping script update"
                    return
                fi
                echo devel data found in project metadata, looking to update scripts

                if STARTUP_SCRIPT=$(jq -re '."startup-script"' <<< "$META_DEVEL"); then
                    echo "INFO: updating startup.sh from project metadata"
                    printf '%s' "$STARTUP_SCRIPT" > $STARTUP_SCRIPT_FILE
                else
                    echo "WARNING: startup-script not found in project metadata, skipping update"
                fi
                if SETUP_SCRIPT=$(jq -re '."setup-script"' <<< "$META_DEVEL"); then
                    echo "INFO: updating setup.py from project metadata"
                    printf '%s' "$SETUP_SCRIPT" > $SETUP_SCRIPT_FILE
                else
                    echo "WARNING: setup-script not found in project metadata, skipping update"
                fi
                if UTIL_SCRIPT=$(jq -re '."util-script"' <<< "$META_DEVEL"); then
                    echo "INFO: updating util.py from project metadata"
                    printf '%s' "$UTIL_SCRIPT" > $UTIL_SCRIPT_FILE
                else
                    echo "WARNING: util-script not found in project metadata, skipping update"
                fi
                if RESUME_SCRIPT=$(jq -re '."slurm-resume"' <<< "$META_DEVEL"); then
                    echo "INFO: updating resume.py from project metadata"
                    printf '%s' "$RESUME_SCRIPT" > $RESUME_SCRIPT_FILE
                else
                    echo "WARNING: slurm-resume not found in project metadata, skipping update"
                fi
                if SUSPEND_SCRIPT=$(jq -re '."slurm-suspend"' <<< "$META_DEVEL"); then
                    echo "INFO: updating suspend.py from project metadata"
                    printf '%s' "$SUSPEND_SCRIPT" > $SUSPEND_SCRIPT_FILE
                else
                    echo "WARNING: slurm-suspend not found in project metadata, skipping update"
                fi
                if SLURMSYNC_SCRIPT=$(jq -re '."slurmsync"' <<< "$META_DEVEL"); then
                    echo "INFO: updating slurmsync.py from project metadata"
                    printf '%s' "$SLURMSYNC_SCRIPT" > $SLURMSYNC_SCRIPT_FILE
                else
                    echo "WARNING: slurmsync not found in project metadata, skipping update"
                fi
                if SLURMEVENTD_SCRIPT=$(jq -re '."slurmeventd"' <<< "$META_DEVEL"); then
                    echo "INFO: updating slurmeventd.py from project metadata"
                    printf '%s' "$SLURMEVENTD_SCRIPT" > $SLURMEVENTD_SCRIPT_FILE
                else
                    echo "WARNING: slurmeventd not found in project metadata, skipping update"
                fi
            }

            PING_METADATA="ping -q -w1 -c1 $METADATA_SERVER"
            echo "INFO: $PING_METADATA"
            for i in $(seq 10); do
                [ $i -gt 1 ] && sleep 5;
                $PING_METADATA > /dev/null && s=0 && break || s=$?;
                echo "ERROR: Failed to contact metadata server, will retry"
            done
            if [ $s -ne 0 ]; then
                echo "ERROR: Unable to contact metadata server, aborting"
                wall -n '*** Slurm setup failed in the startup script! see `journalctl -u google-startup-scripts` ***'
                exit 1
            else
                echo "INFO: Successfully contacted metadata server"
            fi

            GOOGLE_DNS=8.8.8.8
            PING_GOOGLE="ping -q -w1 -c1 $GOOGLE_DNS"
            echo "INFO: $PING_GOOGLE"
            for i in $(seq 5); do
                [ $i -gt 1 ] && sleep 2;
                $PING_GOOGLE > /dev/null && s=0 && break || s=$?;
                echo "failed to ping Google DNS, will retry"
            done
            if [ $s -ne 0 ]; then
                echo "WARNING: No internet access detected"
            else
                echo "INFO: Internet access detected"
            fi

            mkdir -p $SCRIPTS_DIR

            STARTUP_SCRIPT_FILE=$SCRIPTS_DIR/startup.sh
            SETUP_SCRIPT_FILE=$SCRIPTS_DIR/setup.py
            UTIL_SCRIPT_FILE=$SCRIPTS_DIR/util.py
            RESUME_SCRIPT_FILE=$SCRIPTS_DIR/resume.py
            SUSPEND_SCRIPT_FILE=$SCRIPTS_DIR/suspend.py
            SLURMSYNC_SCRIPT_FILE=$SCRIPTS_DIR/slurmsync.py
            SLURMEVENTD_SCRIPT_FILE=$SCRIPTS_DIR/slurmeventd.py

            fetch_scripts

            if [ -f $FLAGFILE ]; then
                echo "WARNING: Slurm was previously configured, quitting"
                exit 0
            fi
            touch $FLAGFILE

            echo "INFO: Running python cluster setup script"
            chmod +x $SETUP_SCRIPT_FILE
            python3 $SCRIPTS_DIR/util.py
            exec $SETUP_SCRIPT_FILE
        EOT
      + name             = (known after apply)
      + name_prefix      = "alpha9ee24-controller-default-"
      + project          = "ucr-ursa-major-hpc-club"
      + region           = "us-central1"
      + self_link        = (known after apply)
      + tags             = [
          + "alpha9ee24",
        ]
      + tags_fingerprint = (known after apply)

      + advanced_machine_features {
          + enable_nested_virtualization = false
          + threads_per_core             = 1
        }

      + confidential_instance_config {
          + enable_confidential_compute = false
        }

      + disk {
          + auto_delete  = true
          + boot         = true
          + device_name  = (known after apply)
          + disk_size_gb = 50
          + disk_type    = "pd-standard"
          + interface    = (known after apply)
          + labels       = {
              + "created_by"          = "test-server"
              + "ghpc_blueprint"      = "alpha-9ee243ed"
              + "ghpc_deployment"     = "alpha-9ee243ed"
              + "ghpc_role"           = "scheduler"
              + "slurm_cluster_name"  = "alpha9ee24"
              + "slurm_instance_role" = "controller"
            }
          + mode         = (known after apply)
          + source_image = "projects/schedmd-slurm-public/global/images/family/schedmd-v5-slurm-22-05-8-hpc-centos-7"
          + type         = "PERSISTENT"
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/networks/default"
          + stack_type         = (known after apply)
          + subnetwork         = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/regions/us-central1/subnetworks/default"
          + subnetwork_project = (known after apply)
        }

      + scheduling {
          + automatic_restart   = true
          + on_host_maintenance = "MIGRATE"
          + preemptible         = false
          + provisioning_model  = (known after apply)
        }

      + service_account {
          + email  = (known after apply)
          + scopes = [
              + "https://www.googleapis.com/auth/cloud-platform",
              + "https://www.googleapis.com/auth/devstorage.read_write",
              + "https://www.googleapis.com/auth/logging.write",
              + "https://www.googleapis.com/auth/monitoring.write",
              + "https://www.googleapis.com/auth/pubsub",
            ]
        }
    }

  # module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.google_compute_instance_template.base will be read during apply
  # (config refers to values not yet known)
 <= data "google_compute_instance_template" "base" {
      + advanced_machine_features    = (known after apply)
      + can_ip_forward               = (known after apply)
      + confidential_instance_config = (known after apply)
      + description                  = (known after apply)
      + disk                         = (known after apply)
      + guest_accelerator            = (known after apply)
      + id                           = (known after apply)
      + instance_description         = (known after apply)
      + labels                       = (known after apply)
      + machine_type                 = (known after apply)
      + metadata                     = (known after apply)
+ metadata_fingerprint = (known after apply) + metadata_startup_script = (known after apply) + min_cpu_platform = (known after apply) + name = (known after apply) + name_prefix = (known after apply) + network_interface = (known after apply) + project = "ucr-ursa-major-hpc-club" + region = (known after apply) + reservation_affinity = (known after apply) + resource_policies = (known after apply) + scheduling = (known after apply) + self_link = (known after apply) + service_account = (known after apply) + shielded_instance_config = (known after apply) + tags = (known after apply) + tags_fingerprint = (known after apply) } # module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.google_compute_zones.available will be read during apply # (depends on a resource or a module with changes pending) <= data "google_compute_zones" "available" { + id = (known after apply) + names = (known after apply) + project = "ucr-ursa-major-hpc-club" + region = "us-central1" } # module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.local_file.startup will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "startup" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "/opt/gcluster/hpc-toolkit/community/front-end/clusters/cluster_115/alpha-9ee243ed/primary/.terraform/modules/slurm_login.slurm_login_instance/scripts/startup.sh" + id = (known after apply) } # module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.google_compute_instance_from_template.slurm_instance[0] will be created + resource "google_compute_instance_from_template" "slurm_instance" { + allow_stopping_for_update = true + attached_disk = (known after apply) + can_ip_forward = (known after apply) + cpu_platform = (known after apply) + current_status = (known after apply) + deletion_protection = (known after apply) + description = (known after apply) + desired_status = (known after apply) + enable_display = (known after apply) + guest_accelerator = (known after apply) + hostname = (known after apply) + id = (known after apply) + instance_id = (known after apply) + label_fingerprint = (known after apply) + labels = (known after apply) + machine_type = (known after apply) + metadata = (known after apply) + metadata_fingerprint = (known after apply) + metadata_startup_script = (known after apply) + min_cpu_platform = (known after apply) + name = (known after apply) + project = "ucr-ursa-major-hpc-club" + resource_policies = (known after apply) + scratch_disk = (known after apply) + self_link = (known after apply) + service_account = (known after apply) + source_instance_template = (known after apply) + tags = (known after apply) + tags_fingerprint = (known after apply) + zone = "us-central1-a" + network_interface { + access_config = (known after apply) + alias_ip_range = (known after apply) + ipv6_access_type = (known after apply) + name = (known after apply) + network = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/networks/default" + network_ip = (known after apply) + nic_type = (known after apply) + queue_count = (known after apply) + stack_type = (known after apply) + subnetwork = "default" + subnetwork_project = (known after 
apply) } } # module.slurm_login.module.slurm_login_template.module.instance_template.google_compute_instance_template.tpl will be created + resource "google_compute_instance_template" "tpl" { + can_ip_forward = false + id = (known after apply) + labels = { + "created_by" = "test-server" + "ghpc_blueprint" = "alpha-9ee243ed" + "ghpc_deployment" = "alpha-9ee243ed" + "ghpc_role" = "scheduler" + "slurm_cluster_name" = "alpha9ee24" + "slurm_instance_role" = "login" } + machine_type = "n2-standard-2" + metadata = { + "VmDnsSetting" = "GlobalOnly" + "enable-oslogin" = "TRUE" + "slurm_cluster_name" = "alpha9ee24" + "slurm_instance_role" = "login" } + metadata_fingerprint = (known after apply) + metadata_startup_script = <<-EOT #!/bin/bash # Copyright (C) SchedMD LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -e SLURM_DIR=/slurm FLAGFILE=$SLURM_DIR/slurm_configured_do_not_remove SCRIPTS_DIR=$SLURM_DIR/scripts METADATA_SERVER="metadata.google.internal" URL="http://$METADATA_SERVER/computeMetadata/v1" HEADER="Metadata-Flavor:Google" CURL="curl -sS --fail --header $HEADER" function fetch_scripts { # fetch project metadata if ! CLUSTER=$($CURL $URL/instance/attributes/slurm_cluster_name); then echo "ERROR: cluster name not found in instance metadata. Quitting!" return 1 fi if ! 
META_DEVEL=$($CURL $URL/project/attributes/$CLUSTER-slurm-devel); then echo "WARNING: $CLUSTER-slurm-devel not found in project metadata, skipping script update" return fi echo devel data found in project metadata, looking to update scripts if STARTUP_SCRIPT=$(jq -re '."startup-script"' <<< "$META_DEVEL"); then echo "INFO: updating startup.sh from project metadata" printf '%s' "$STARTUP_SCRIPT" > $STARTUP_SCRIPT_FILE else echo "WARNING: startup-script not found in project metadata, skipping update" fi if SETUP_SCRIPT=$(jq -re '."setup-script"' <<< "$META_DEVEL"); then echo "INFO: updating setup.py from project metadata" printf '%s' "$SETUP_SCRIPT" > $SETUP_SCRIPT_FILE else echo "WARNING: setup-script not found in project metadata, skipping update" fi if UTIL_SCRIPT=$(jq -re '."util-script"' <<< "$META_DEVEL"); then echo "INFO: updating util.py from project metadata" printf '%s' "$UTIL_SCRIPT" > $UTIL_SCRIPT_FILE else echo "WARNING: util-script not found in project metadata, skipping update" fi if RESUME_SCRIPT=$(jq -re '."slurm-resume"' <<< "$META_DEVEL"); then echo "INFO: updating resume.py from project metadata" printf '%s' "$RESUME_SCRIPT" > $RESUME_SCRIPT_FILE else echo "WARNING: slurm-resume not found in project metadata, skipping update" fi if SUSPEND_SCRIPT=$(jq -re '."slurm-suspend"' <<< "$META_DEVEL"); then echo "INFO: updating suspend.py from project metadata" printf '%s' "$SUSPEND_SCRIPT" > $SUSPEND_SCRIPT_FILE else echo "WARNING: slurm-suspend not found in project metadata, skipping update" fi if SLURMSYNC_SCRIPT=$(jq -re '."slurmsync"' <<< "$META_DEVEL"); then echo "INFO: updating slurmsync.py from project metadata" printf '%s' "$SLURMSYNC_SCRIPT" > $SLURMSYNC_SCRIPT_FILE else echo "WARNING: slurmsync not found in project metadata, skipping update" fi if SLURMEVENTD_SCRIPT=$(jq -re '."slurmeventd"' <<< "$META_DEVEL"); then echo "INFO: updating slurmeventd.py from project metadata" printf '%s' "$SLURMEVENTD_SCRIPT" > $SLURMEVENTD_SCRIPT_FILE else echo "WARNING: slurmeventd not found in project metadata, skipping update" fi } PING_METADATA="ping -q -w1 -c1 $METADATA_SERVER" echo "INFO: $PING_METADATA" for i in $(seq 10); do [ $i -gt 1 ] && sleep 5; $PING_METADATA > /dev/null && s=0 && break || s=$?; echo "ERROR: Failed to contact metadata server, will retry" done if [ $s -ne 0 ]; then echo "ERROR: Unable to contact metadata server, aborting" wall -n '*** Slurm setup failed in the startup script! 
see `journalctl -u google-startup-scripts` ***' exit 1 else echo "INFO: Successfully contacted metadata server" fi GOOGLE_DNS=8.8.8.8 PING_GOOGLE="ping -q -w1 -c1 $GOOGLE_DNS" echo "INFO: $PING_GOOGLE" for i in $(seq 5); do [ $i -gt 1 ] && sleep 2; $PING_GOOGLE > /dev/null && s=0 && break || s=$?; echo "failed to ping Google DNS, will retry" done if [ $s -ne 0 ]; then echo "WARNING: No internet access detected" else echo "INFO: Internet access detected" fi mkdir -p $SCRIPTS_DIR STARTUP_SCRIPT_FILE=$SCRIPTS_DIR/startup.sh SETUP_SCRIPT_FILE=$SCRIPTS_DIR/setup.py UTIL_SCRIPT_FILE=$SCRIPTS_DIR/util.py RESUME_SCRIPT_FILE=$SCRIPTS_DIR/resume.py SUSPEND_SCRIPT_FILE=$SCRIPTS_DIR/suspend.py SLURMSYNC_SCRIPT_FILE=$SCRIPTS_DIR/slurmsync.py SLURMEVENTD_SCRIPT_FILE=$SCRIPTS_DIR/slurmeventd.py fetch_scripts if [ -f $FLAGFILE ]; then echo "WARNING: Slurm was previously configured, quitting" exit 0 fi touch $FLAGFILE echo "INFO: Running python cluster setup script" chmod +x $SETUP_SCRIPT_FILE python3 $SCRIPTS_DIR/util.py exec $SETUP_SCRIPT_FILE EOT + name = (known after apply) + name_prefix = "alpha9ee24-login-default-" + project = "ucr-ursa-major-hpc-club" + region = "us-central1" + self_link = (known after apply) + tags = [ + "alpha9ee24", ] + tags_fingerprint = (known after apply) + advanced_machine_features { + enable_nested_virtualization = false + threads_per_core = 1 } + confidential_instance_config { + enable_confidential_compute = false } + disk { + auto_delete = true + boot = true + device_name = (known after apply) + disk_size_gb = 50 + disk_type = "pd-standard" + interface = (known after apply) + labels = { + "created_by" = "test-server" + "ghpc_blueprint" = "alpha-9ee243ed" + "ghpc_deployment" = "alpha-9ee243ed" + "ghpc_role" = "scheduler" + "slurm_cluster_name" = "alpha9ee24" + "slurm_instance_role" = "login" } + mode = (known after apply) + source_image = "projects/schedmd-slurm-public/global/images/family/schedmd-v5-slurm-22-05-8-hpc-centos-7" + type = "PERSISTENT" } + network_interface { + ipv6_access_type = (known after apply) + name = (known after apply) + network = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/networks/default" + stack_type = (known after apply) + subnetwork = "default" + subnetwork_project = (known after apply) } + scheduling { + automatic_restart = true + on_host_maintenance = "MIGRATE" + preemptible = false + provisioning_model = (known after apply) } + service_account { + email = (known after apply) + scopes = [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.read_write", + "https://www.googleapis.com/auth/logging.write", + "https://www.googleapis.com/auth/monitoring.write", ] } } # module.partition_0.module.slurm_partition.module.slurm_compute_template["ghpc"].module.instance_template.google_compute_instance_template.tpl will be created + resource "google_compute_instance_template" "tpl" { + can_ip_forward = false + id = (known after apply) + labels = { + "created_by" = "test-server" + "ghpc_blueprint" = "alpha-9ee243ed" + "ghpc_deployment" = "alpha-9ee243ed" + "ghpc_role" = "compute" + "slurm_cluster_name" = "alpha9ee24" + "slurm_instance_role" = "compute" } + machine_type = "c2-standard-60" + metadata = { + "VmDnsSetting" = "GlobalOnly" + "enable-oslogin" = "TRUE" + "slurm_cluster_name" = "alpha9ee24" + "slurm_instance_role" = "compute" } + metadata_fingerprint = (known after apply) + metadata_startup_script = <<-EOT #!/bin/bash # Copyright (C) SchedMD LLC. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -e SLURM_DIR=/slurm FLAGFILE=$SLURM_DIR/slurm_configured_do_not_remove SCRIPTS_DIR=$SLURM_DIR/scripts METADATA_SERVER="metadata.google.internal" URL="http://$METADATA_SERVER/computeMetadata/v1" HEADER="Metadata-Flavor:Google" CURL="curl -sS --fail --header $HEADER" function fetch_scripts { # fetch project metadata if ! CLUSTER=$($CURL $URL/instance/attributes/slurm_cluster_name); then echo "ERROR: cluster name not found in instance metadata. Quitting!" return 1 fi if ! META_DEVEL=$($CURL $URL/project/attributes/$CLUSTER-slurm-devel); then echo "WARNING: $CLUSTER-slurm-devel not found in project metadata, skipping script update" return fi echo devel data found in project metadata, looking to update scripts if STARTUP_SCRIPT=$(jq -re '."startup-script"' <<< "$META_DEVEL"); then echo "INFO: updating startup.sh from project metadata" printf '%s' "$STARTUP_SCRIPT" > $STARTUP_SCRIPT_FILE else echo "WARNING: startup-script not found in project metadata, skipping update" fi if SETUP_SCRIPT=$(jq -re '."setup-script"' <<< "$META_DEVEL"); then echo "INFO: updating setup.py from project metadata" printf '%s' "$SETUP_SCRIPT" > $SETUP_SCRIPT_FILE else echo "WARNING: setup-script not found in project metadata, skipping update" fi if UTIL_SCRIPT=$(jq -re '."util-script"' <<< "$META_DEVEL"); then echo "INFO: updating util.py from project metadata" printf '%s' "$UTIL_SCRIPT" > $UTIL_SCRIPT_FILE else echo "WARNING: util-script not found in project metadata, skipping update" fi if RESUME_SCRIPT=$(jq -re '."slurm-resume"' <<< "$META_DEVEL"); then echo "INFO: updating resume.py from project metadata" printf '%s' "$RESUME_SCRIPT" > $RESUME_SCRIPT_FILE else echo "WARNING: slurm-resume not found in project metadata, skipping update" fi if SUSPEND_SCRIPT=$(jq -re '."slurm-suspend"' <<< "$META_DEVEL"); then echo "INFO: updating suspend.py from project metadata" printf '%s' "$SUSPEND_SCRIPT" > $SUSPEND_SCRIPT_FILE else echo "WARNING: slurm-suspend not found in project metadata, skipping update" fi if SLURMSYNC_SCRIPT=$(jq -re '."slurmsync"' <<< "$META_DEVEL"); then echo "INFO: updating slurmsync.py from project metadata" printf '%s' "$SLURMSYNC_SCRIPT" > $SLURMSYNC_SCRIPT_FILE else echo "WARNING: slurmsync not found in project metadata, skipping update" fi if SLURMEVENTD_SCRIPT=$(jq -re '."slurmeventd"' <<< "$META_DEVEL"); then echo "INFO: updating slurmeventd.py from project metadata" printf '%s' "$SLURMEVENTD_SCRIPT" > $SLURMEVENTD_SCRIPT_FILE else echo "WARNING: slurmeventd not found in project metadata, skipping update" fi } PING_METADATA="ping -q -w1 -c1 $METADATA_SERVER" echo "INFO: $PING_METADATA" for i in $(seq 10); do [ $i -gt 1 ] && sleep 5; $PING_METADATA > /dev/null && s=0 && break || s=$?; echo "ERROR: Failed to contact metadata server, will retry" done if [ $s -ne 0 ]; then echo "ERROR: Unable to contact metadata server, aborting" wall -n '*** Slurm setup failed in the startup script! 
see `journalctl -u google-startup-scripts` ***' exit 1 else echo "INFO: Successfully contacted metadata server" fi GOOGLE_DNS=8.8.8.8 PING_GOOGLE="ping -q -w1 -c1 $GOOGLE_DNS" echo "INFO: $PING_GOOGLE" for i in $(seq 5); do [ $i -gt 1 ] && sleep 2; $PING_GOOGLE > /dev/null && s=0 && break || s=$?; echo "failed to ping Google DNS, will retry" done if [ $s -ne 0 ]; then echo "WARNING: No internet access detected" else echo "INFO: Internet access detected" fi mkdir -p $SCRIPTS_DIR STARTUP_SCRIPT_FILE=$SCRIPTS_DIR/startup.sh SETUP_SCRIPT_FILE=$SCRIPTS_DIR/setup.py UTIL_SCRIPT_FILE=$SCRIPTS_DIR/util.py RESUME_SCRIPT_FILE=$SCRIPTS_DIR/resume.py SUSPEND_SCRIPT_FILE=$SCRIPTS_DIR/suspend.py SLURMSYNC_SCRIPT_FILE=$SCRIPTS_DIR/slurmsync.py SLURMEVENTD_SCRIPT_FILE=$SCRIPTS_DIR/slurmeventd.py fetch_scripts if [ -f $FLAGFILE ]; then echo "WARNING: Slurm was previously configured, quitting" exit 0 fi touch $FLAGFILE echo "INFO: Running python cluster setup script" chmod +x $SETUP_SCRIPT_FILE python3 $SCRIPTS_DIR/util.py exec $SETUP_SCRIPT_FILE EOT + name = (known after apply) + name_prefix = "alpha9ee24-compute-batch-ghpc-" + project = "ucr-ursa-major-hpc-club" + region = (known after apply) + self_link = (known after apply) + tags = [ + "alpha9ee24", ] + tags_fingerprint = (known after apply) + advanced_machine_features { + enable_nested_virtualization = false + threads_per_core = 1 } + confidential_instance_config { + enable_confidential_compute = false } + disk { + auto_delete = true + boot = true + device_name = (known after apply) + disk_size_gb = 50 + disk_type = "pd-standard" + interface = (known after apply) + labels = { + "created_by" = "test-server" + "ghpc_blueprint" = "alpha-9ee243ed" + "ghpc_deployment" = "alpha-9ee243ed" + "ghpc_role" = "compute" + "slurm_cluster_name" = "alpha9ee24" + "slurm_instance_role" = "compute" } + mode = (known after apply) + source_image = "projects/schedmd-slurm-public/global/images/family/schedmd-v5-slurm-22-05-8-hpc-centos-7" + type = "PERSISTENT" } + network_interface { + ipv6_access_type = (known after apply) + name = (known after apply) + network = (known after apply) + stack_type = (known after apply) + subnetwork = "https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/regions/us-central1/subnetworks/default" + subnetwork_project = (known after apply) } + scheduling { + automatic_restart = true + on_host_maintenance = "TERMINATE" + preemptible = false + provisioning_model = (known after apply) } + service_account { + email = "5206832155-compute@developer.gserviceaccount.com" + scopes = [ + "https://www.googleapis.com/auth/cloud-platform", ] } } Plan: 29 to add, 0 to change, 0 to destroy. module.slurm_login.module.slurm_login_instance.random_string.suffix: Creating... module.slurm_controller.module.slurm_controller_instance.random_string.topic_suffix: Creating... module.slurm_controller.module.slurm_controller_instance.random_string.topic_suffix: Creation complete after 0s [id=tpJOjFfM] module.slurm_controller.module.slurm_controller_instance.random_uuid.cluster_id: Creating... module.slurm_login.module.slurm_login_instance.random_string.suffix: Creation complete after 1s [id=4nx80qw2] module.slurm_controller.module.slurm_controller_instance.random_uuid.cluster_id: Creation complete after 1s [id=3313b0fb-35f0-fb08-fff3-6fe5baf0df64] module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Creating... 
module.hpc_service_account.module.service_accounts.google_service_account.service_accounts["alpha-9ee243ed-sa"]: Creating... module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Creating... module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurm_conf: Creating... module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"]: Creating... module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Creating... module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Creating... module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Creating... module.partition_0.module.slurm_partition.module.slurm_compute_template["ghpc"].module.instance_template.google_compute_instance_template.tpl: Creating... module.hpc_service_account.module.service_accounts.google_service_account.service_accounts["alpha-9ee243ed-sa"]: Creation complete after 0s [id=projects/ucr-ursa-major-hpc-club/serviceAccounts/alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/iam.serviceAccountUser"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/pubsub.publisher"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/iam.serviceAccountUser"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/iam.serviceAccountUser/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/pubsub.subscriber"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/pubsub.publisher"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/pubsub.publisher/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/logging.logWriter"]: Creating... module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Still creating... [10s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurm_conf: Still creating... [10s elapsed] module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [10s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"]: Still creating... [10s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... 
[10s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [10s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [10s elapsed] module.partition_0.module.slurm_partition.module.slurm_compute_template["ghpc"].module.instance_template.google_compute_instance_template.tpl: Still creating... [10s elapsed] module.partition_0.module.slurm_partition.module.slurm_compute_template["ghpc"].module.instance_template.google_compute_instance_template.tpl: Creation complete after 11s [id=projects/ucr-ursa-major-hpc-club/global/instanceTemplates/alpha9ee24-compute-batch-ghpc-20230417223619831700000001] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.instanceAdmin.v1"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/logging.logWriter"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/logging.logWriter/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.securityAdmin"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/pubsub.subscriber"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/pubsub.subscriber/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/iam.serviceAccountAdmin"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/iam.serviceAccountAdmin"]: Creation complete after 4s [id=ucr-ursa-major-hpc-club/roles/iam.serviceAccountAdmin/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/storage.objectAdmin"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.instanceAdmin.v1"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/compute.instanceAdmin.v1/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.networkAdmin"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.securityAdmin"]: Creation complete after 4s [id=ucr-ursa-major-hpc-club/roles/compute.securityAdmin/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/resourcemanager.projectIamAdmin"]: Creating... 
module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Still creating... [20s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"]: Still creating... [20s elapsed] module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [20s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurm_conf: Still creating... [20s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [20s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... [20s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [20s elapsed] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurm_conf: Creation complete after 21s [id=alpha9ee24-slurm-tpl-slurm-conf] module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/monitoring.metricWriter"]: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/monitoring.metricWriter"]: Creation complete after 4s [id=ucr-ursa-major-hpc-club/roles/monitoring.metricWriter/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.partition_0.module.slurm_partition.data.google_compute_instance_template.group_template["ghpc"]: Reading... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/compute.networkAdmin"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/compute.networkAdmin/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.slurm_controller.module.slurm_controller_template.module.instance_template.google_compute_instance_template.tpl: Creating... module.partition_0.module.slurm_partition.data.google_compute_instance_template.group_template["ghpc"]: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/global/instanceTemplates/https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/instanceTemplates/alpha9ee24-compute-batch-ghpc-20230417223619831700000001] module.slurm_login.module.slurm_login_template.module.instance_template.google_compute_instance_template.tpl: Creating... module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/storage.objectAdmin"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/storage.objectAdmin/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com] module.partition_0.module.slurm_partition.null_resource.partition: Creating... module.partition_0.module.slurm_partition.null_resource.partition: Creation complete after 0s [id=3533045599850081165] module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Creating... 
module.hpc_service_account.module.service_accounts.google_project_iam_member.project-roles["alpha-9ee243ed-sa-ucr-ursa-major-hpc-club=>roles/resourcemanager.projectIamAdmin"]: Creation complete after 7s [id=ucr-ursa-major-hpc-club/roles/resourcemanager.projectIamAdmin/serviceAccount:alpha-9ee243ed-sa@ucr-ursa-major-hpc-club.iam.gserviceaccount.com]
module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Still creating... [30s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"]: Still creating... [30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... [30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [30s elapsed]
module.slurm_controller.module.slurm_controller_template.module.instance_template.google_compute_instance_template.tpl: Still creating... [10s elapsed]
module.slurm_login.module.slurm_login_template.module.instance_template.google_compute_instance_template.tpl: Still creating... [10s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [10s elapsed]
module.slurm_controller.module.slurm_controller_template.module.instance_template.google_compute_instance_template.tpl: Creation complete after 11s [id=projects/ucr-ursa-major-hpc-club/global/instanceTemplates/alpha9ee24-controller-default-20230417223644858200000002]
module.slurm_login.module.slurm_login_template.module.instance_template.google_compute_instance_template.tpl: Creation complete after 11s [id=projects/ucr-ursa-major-hpc-club/global/instanceTemplates/alpha9ee24-login-default-20230417223644892600000003]
module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Still creating... [40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"]: Still creating... [40s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... [40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.controller_startup_scripts["ghpc_startup_sh"]: Creation complete after 42s [id=alpha9ee24-slurm-controller-script-ghpc_startup_sh]
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.google_compute_zones.available: Reading...
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.google_compute_instance_template.base: Reading...
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.local_file.startup: Reading...
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.local_file.startup: Read complete after 0s [id=de68d872e4df054209706dbeee9bfec9dca89970]
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.google_compute_instance_template.base: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/global/instanceTemplates/https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/instanceTemplates/alpha9ee24-controller-default-20230417223644858200000002]
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.data.google_compute_zones.available: Read complete after 0s [id=projects/ucr-ursa-major-hpc-club/regions/us-central1]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [20s elapsed]
module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Still creating... [50s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [50s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [50s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... [50s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [50s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [30s elapsed]
module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Still creating... [1m0s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [1m0s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... [1m0s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [1m0s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [1m0s elapsed]
module.partition_0.module.slurm_partition.google_compute_project_metadata_item.partition_startup_scripts["ghpc_startup_sh"]: Creation complete after 1m3s [id=alpha9ee24-slurm-partition-batch-script-ghpc_startup_sh]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [40s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [1m10s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [1m10s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Still creating... [1m10s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [1m10s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.slurmdbd_conf: Creation complete after 1m14s [id=alpha9ee24-slurm-tpl-slurmdbd-conf]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [50s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [1m20s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [1m20s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [1m20s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [1m0s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [1m30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Still creating... [1m30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [1m30s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.cgroup_conf: Creation complete after 1m35s [id=alpha9ee24-slurm-tpl-cgroup-conf]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [1m10s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [1m40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Still creating... [1m40s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [1m20s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.compute_startup_scripts["ghpc_startup_sh"]: Creation complete after 1m45s [id=alpha9ee24-slurm-compute-script-ghpc_startup_sh]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [1m50s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [1m30s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Still creating... [2m0s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [1m40s elapsed]
module.slurm_login.module.slurm_login_instance.google_compute_project_metadata_item.login_startup_scripts["ghpc_startup_sh"]: Creation complete after 2m6s [id=alpha9ee24-slurm-login_4nx80qw2-script-ghpc_startup_sh]
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.local_file.startup: Reading...
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.google_compute_zones.available: Reading...
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.google_compute_instance_template.base: Reading...
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.local_file.startup: Read complete after 0s [id=de68d872e4df054209706dbeee9bfec9dca89970]
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.google_compute_instance_template.base: Read complete after 1s [id=projects/ucr-ursa-major-hpc-club/global/instanceTemplates/https://www.googleapis.com/compute/v1/projects/ucr-ursa-major-hpc-club/global/instanceTemplates/alpha9ee24-login-default-20230417223644892600000003]
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.data.google_compute_zones.available: Read complete after 1s [id=projects/ucr-ursa-major-hpc-club/regions/us-central1]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [1m50s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Still creating... [2m0s elapsed]
module.slurm_controller.module.slurm_controller_instance.google_compute_project_metadata_item.config: Creation complete after 2m3s [id=alpha9ee24-slurm-config]
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.google_compute_instance_from_template.slurm_instance[0]: Creating...
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.google_compute_instance_from_template.slurm_instance[0]: Still creating... [10s elapsed]
module.slurm_controller.module.slurm_controller_instance.module.slurm_controller_instance.google_compute_instance_from_template.slurm_instance[0]: Creation complete after 12s [id=projects/ucr-ursa-major-hpc-club/zones/us-central1-a/instances/alpha9ee24-controller]
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.google_compute_instance_from_template.slurm_instance[0]: Creating...
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.google_compute_instance_from_template.slurm_instance[0]: Still creating... [10s elapsed]
module.slurm_login.module.slurm_login_instance.module.slurm_login_instance.google_compute_instance_from_template.slurm_instance[0]: Creation complete after 12s [id=projects/ucr-ursa-major-hpc-club/zones/us-central1-a/instances/alpha9ee24-login-4nx80qw2-001]

Apply complete! Resources: 29 added, 0 changed, 0 destroyed.
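
With the apply finished, a quick sanity check is to SSH into the login node created above and query Slurm directly. This is only a minimal sketch, assuming gcloud is already authenticated with access to the ucr-ursa-major-hpc-club project; the instance name and zone are taken from the log above, and the exact partition name shown by sinfo depends on the blueprint.

    # Connect to the newly created login node
    gcloud compute ssh alpha9ee24-login-4nx80qw2-001 \
        --project=ucr-ursa-major-hpc-club \
        --zone=us-central1-a

    # On the login node: confirm the controller is reachable and the partition is listed
    sinfo

    # Run a trivial job to trigger a compute node from the instance template
    srun -N 1 hostname

If sinfo is not yet available or reports no partitions, the controller's startup scripts may still be running; retrying after a few minutes is usually sufficient.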