Terraform config fixes based on testing (#207)
* Terraform config fixes based on testing

* Update AzureRM Terraform provider to use any 3.x version.
* Fix bug where variable validation would fail for the Azure
  configuration if `managed_disk_configuration` was not used and was
  left set to null.
* Clarify in the README that terraform init/plan/apply commands are
  sensitive to the current working directory.
* Clarify in the README that the appropriate SSH private key must be
  present in the SSH agent in order for terraform apply to succeed.
* Fix step numbering in QUICKSTART
* Remove sentence fragment about creating config from step 4 of
  QUICKSTART
* Add note about needing the specified SSH key to be loaded in your
  ssh-agent for provisioning to succeed
brianloss committed May 12, 2022
1 parent 0373c3b commit 3da41348a6af77e6cdd89b1d5d1bdd3f14ef029e
Showing 5 changed files with 27 additions and 17 deletions.
@@ -35,7 +35,7 @@
You will need to create a configuration file that includes values for the
variables that do not have a default value. See the Variables section in
the README. For example, you can create a file "aws.auto.tfvars" in
-the aws directory with the following content (replace as appropriate):
+the `aws` directory with the following content (replace as appropriate):

create_route53_records = "true"
private_network = "true"
@@ -53,17 +53,17 @@ authorized_ssh_keys = [
]
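The example tfvars content above is split across two diff hunks; consolidated into one sketch, an `aws.auto.tfvars` might look like the following (the SSH key value is an illustrative placeholder):

```hcl
# Hypothetical aws.auto.tfvars sketch -- only the variables shown in the
# diff above are used; the public key string is a placeholder.
create_route53_records = "true"
private_network        = "true"
authorized_ssh_keys = [
  "ssh-ed25519 AAAAC3Nza... user@example.com",
]
```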


-3. Create the Resources
+4. Create the Resources

cd aws

-Create the configuration section of the README. For example you can create
Example in HCL syntax:
+NOTE: ensure that the private key corresponding to the first ssh key in
+`authorized_ssh_keys` in the configuration above has been loaded
+into your ssh agent, or else terraform apply will fail.

cd aws
terraform init --backend-config=bucket=<bucket-name-goes-here>
terraform apply
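The note above can be satisfied with something like the following before running `terraform apply` (the key path is an assumption; use whichever private key matches the first entry in `authorized_ssh_keys`):

```shell
# Start an agent if one is not already running, then load the key.
# ~/.ssh/id_ed25519 is a placeholder path -- substitute your own.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
ssh-add -l    # verify the key fingerprint is listed before terraform apply
```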

-4. Accessing the cluster
+5. Accessing the cluster

The output of the apply step above will include the IP addresses of the
resources that were created. If created correctly, you should be able to
@@ -64,7 +64,9 @@ about this see [remote state](https://www.terraform.io/docs/language/state/remot
shared state instructions are based on
[this article](https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa).

-To generate the storage, run `terraform init` followed by `terraform apply`.
+To generate the storage, run `terraform init` followed by `terraform apply`. Note that the shell
+working directory must be the `shared_state/aws` or `shared_state/azure` directory when you run
+the terraform commands for shared state creation.

The default AWS configuration generates the S3 bucket name when `terraform apply` is run. This
ensures that a globally unique S3 bucket name is used. It is not required to set any variables for
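As a hedged sketch (the `key` and `region` values are assumptions, and the bucket placeholder is kept verbatim from the QUICKSTART), the main configuration's backend block that consumes the generated bucket might look like:

```hcl
terraform {
  backend "s3" {
    # The bucket is not hard-coded; it is supplied at init time:
    #   terraform init --backend-config=bucket=<bucket-name-goes-here>
    key    = "accumulo-testing/terraform.tfstate"  # placeholder state key
    region = "us-east-1"                           # placeholder region
  }
}
```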
@@ -415,7 +417,9 @@ recommended that the public IP addresses be used instead.

## Instructions

-1. Once you have created a `.auto.tfvars.json` file, or set the properties some other way, run
+1. Change to either the `aws` or `azure` directory in your shell. This must be the current
+   directory when you run the following `terraform` commands.
+2. Once you have created a `.auto.tfvars` file, or set the properties some other way, run
`terraform init`. If you have modified shared_state backend configuration over the default,
you can override the values here. For example, the following configuration updates the
`resource_group_name` and `storage_account_name` for the `azurerm` backend:
@@ -424,8 +428,14 @@ recommended that the public IP addresses be used instead.
```
Once values are supplied to `terraform init`, they are stored in the local state and it is not
necessary to supply these overrides to the `terraform apply` or `terraform destroy` commands.
-2. Run `terraform apply` to create the AWS/Azure resources.
-3. Run `terraform destroy` to tear down the AWS/Azure resources.
+3. Ensure that the private key associated with the first public SSH key listed for the value
+   of either `authorized_ssh_keys` or `authorized_ssh_key_files` in your `.auto.tfvars` file
+   is loaded into your SSH agent. During resource creation, Terraform will connect to the newly
+   created VMs using SSH in order to copy files and configure the VMs to run Accumulo. If the
+   appropriate private key is not available to your SSH agent, then the connection will fail and
+   resource creation will eventually fail.
+4. Run `terraform apply` to create the AWS/Azure resources.
+5. Run `terraform destroy` to tear down the AWS/Azure resources.
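Taken together, the numbered steps above amount to a sequence like the following for the `azure` configuration (the backend override values and the key path are placeholders):

```shell
cd azure   # step 1: terraform commands are sensitive to the working directory

# step 2: init, overriding the shared_state backend defaults if needed
terraform init \
  -backend-config="resource_group_name=my-shared-state-rg" \
  -backend-config="storage_account_name=mysharedstatesa"

# step 3: make sure the matching private key is loaded (placeholder path)
ssh-add ~/.ssh/id_ed25519

# steps 4-5: create, and later tear down, the resources
terraform apply
terraform destroy
```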

**NOTE**: If you are working with `aws` and get an Access Denied error then try setting the AWS
Short Term access keys in your environment
@@ -49,7 +49,7 @@ terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
-     version = "~> 2.91.0"
+     version = "~> 3.0"
    }
  }
  backend "azurerm" {
@@ -138,19 +138,19 @@ variable "managed_disk_configuration" {
  nullable = true

  validation {
-   condition     = var.managed_disk_configuration.mount_point != null
+   condition     = var.managed_disk_configuration == null || can(var.managed_disk_configuration.mount_point != null)
    error_message = "The mount point must be specified."
  }
  validation {
-   condition     = var.managed_disk_configuration.disk_count > 0
+   condition     = var.managed_disk_configuration == null || can(var.managed_disk_configuration.disk_count > 0)
    error_message = "The number of disks must be at least 1."
  }
  validation {
-   condition     = contains(["Standard_LRS", "StandardSSD_LRS", "Premium_LRS"], var.managed_disk_configuration.storage_account_type)
+   condition     = var.managed_disk_configuration == null || can(contains(["Standard_LRS", "StandardSSD_LRS", "Premium_LRS"], var.managed_disk_configuration.storage_account_type))
    error_message = "The storage account type must be one of 'Standard_LRS', 'StandardSSD_LRS', or 'Premium_LRS'."
  }
  validation {
-   condition     = var.managed_disk_configuration.disk_size_gb > 0 && var.managed_disk_configuration.disk_size_gb <= 32767
+   condition     = var.managed_disk_configuration == null || can(var.managed_disk_configuration.disk_size_gb > 0 && var.managed_disk_configuration.disk_size_gb <= 32767)
    error_message = "The disk size must be at least 1GB and less than 32768GB."
  }
}
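A `.auto.tfvars` value that satisfies all four of the validation rules above might look like this (the specific values are illustrative assumptions):

```hcl
managed_disk_configuration = {
  mount_point          = "/data"        # must be non-null
  disk_count           = 3              # must be >= 1
  storage_account_type = "Premium_LRS"  # one of the three allowed types
  disk_size_gb         = 512            # between 1 and 32767
}
```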
@@ -19,7 +19,7 @@ terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
-     version = "~> 2.91.0"
+     version = "~> 3.0"
    }
  }
}
