This repository has been archived by the owner on Jan 25, 2023. It is now read-only.

Feature/ddb backend #77

Closed. Wants to merge 26 commits.

Commits

d6f5711
added checking for supervisor existence before installing
pgr-josh-wells Jul 5, 2018
6544406
added dynamo flags and logic to use either s3 or dynamo
pgr-josh-wells Jul 5, 2018
a8e6b72
correct useradd with --system flag
pgr-josh-wells Jul 5, 2018
6cd9bfa
updated dynamo checking for config generation
pgr-josh-wells Jul 5, 2018
fdc8fac
add dynamo documentation
pgr-josh-wells Jul 5, 2018
6d308d2
tf fmt only
pgr-josh-wells Jul 5, 2018
966df0a
adding standard dynamo table and policy to iam role
pgr-josh-wells Jul 5, 2018
36a74d7
adding dynamo arn output
pgr-josh-wells Jul 5, 2018
0761984
adding vault dynamo variables (read/write and name)
pgr-josh-wells Jul 5, 2018
36e1b94
added tf example for vault with DDB backend. lacking infrastructure …
pgr-josh-wells Jul 5, 2018
8d539cd
updated main comment to specify ddb
Jul 6, 2018
14c4e94
updated README to specify using a vault ami only
Jul 6, 2018
6a55724
updated userdata 'data' block to remove consul variables and add dyna…
Jul 6, 2018
93fb7b6
set consul_storage and vault_storage to and removed redundant logic
Jul 6, 2018
f3020a8
removed logic changing dynamo/s3 values and made just list. Added lo…
Jul 6, 2018
2df7148
added Name tag to DDB
Jul 6, 2018
ffabbb8
updated dyanmo output as concat element
Jul 6, 2018
10b5cb1
updated ddb read/write string variables without quotes
Jul 6, 2018
011d71a
update userdata comment
Jul 6, 2018
0b74d0a
updated enable-dynamo and enable_dynamo to include 'backend' with con…
Jul 6, 2018
7449675
updated enable dynamo flag in userdata
Jul 6, 2018
45f7658
updated readme to include enable-dynamo-backend references
Jul 6, 2018
c5f0603
update generate vault config line to include all as list
Jul 6, 2018
481c675
remove checking supervisor pip
Jul 6, 2018
773d47a
updated to allow s3 with ddb backend
pgr-josh-wells Jul 31, 2018
c8d9fab
tf fmt
pgr-josh-wells Jul 31, 2018
6 changes: 3 additions & 3 deletions examples/vault-cluster-private/main.tf
@@ -74,11 +74,11 @@ data "template_file" "user_data_vault_cluster" {
module "security_group_rules" {
source = "github.com/hashicorp/terraform-aws-consul.git//modules/consul-client-security-group-rules?ref=v0.3.3"

security_group_id = "${module.vault_cluster.security_group_id}"

# To make testing easier, we allow requests from any IP address here but in a production deployment, we *strongly*
# recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.

allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
}

@@ -141,4 +141,4 @@ data "aws_subnet_ids" "default" {
vpc_id = "${data.aws_vpc.default.id}"
}

data "aws_region" "current" {}
43 changes: 43 additions & 0 deletions examples/vault-ddb-backend/README.md
@@ -0,0 +1,43 @@
# Vault Cluster with DDB backend example

This folder shows an example of Terraform code to deploy a [Vault](https://www.vaultproject.io/) cluster in
[AWS](https://aws.amazon.com/) using the [vault-cluster module](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster).
The Vault cluster uses [DynamoDB](https://aws.amazon.com/dynamodb/) as a high-availability storage backend.

This example creates a Vault cluster spread across the subnets in the default VPC of the AWS account. For an example of a Vault cluster
that is publicly accessible, see [vault-cluster-public](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-cluster-public).
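For reference, a Vault server backed by DynamoDB ends up with a configuration roughly like the one below. This is only a sketch: the real file is generated by the `run-vault` script at boot, and the table name and region shown are illustrative defaults.

```
# Illustrative Vault server config; run-vault generates the real file at boot.
storage "dynamodb" {
  ha_enabled = "true"
  region     = "us-east-1"      # matches --dynamo-region
  table      = "my-vault-table" # matches --dynamo-table
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/vault.crt.pem"
  tls_key_file  = "/opt/vault/tls/vault.key.pem"
}
```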

![Vault architecture]()
Collaborator: Missing image?

Author: I'm not 100% certain where to get a documentation template for Cloudcraft that includes the HashiCorp custom images. If you could point me in the right direction that would be great.

Author: @brikis98 Do you have a template used for previous diagrams?

You will need to create an [Amazon Machine Image (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
that has Vault installed, or bootstrap Vault upon launch with UserData.

For more info on how the Vault cluster works, check out the [vault-cluster](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster) documentation.

**Note**: To keep this example as simple to deploy and test as possible, it deploys the Vault cluster into your default
VPC and default subnets, some of which might be publicly accessible. This is OK for learning and experimenting, but for
production usage, we strongly recommend deploying the Vault cluster into the private subnets of a custom VPC.




## Quick start

To deploy a Vault Cluster:

1. `git clone` this repo to your computer.
1. Optional: build a Vault AMI. See the [vault-consul-ami example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-consul-ami) documentation for instructions on how to build an AMI that has both Vault and Consul installed (note that for this example, you'll only need Vault, but having both won't hurt anything).

1. Install [Terraform](https://www.terraform.io/).
1. Open `vars.tf`, set the environment variables specified at the top of the file, and fill in any other variables that
don't have a default. If you built a custom AMI, put the AMI ID into the `ami_id` variable. Otherwise, one of our
public example AMIs will be used by default. These AMIs are great for learning/experimenting, but are NOT
recommended for production use.
1. Run `terraform init`.
1. Run `terraform apply`.
1. Run the [vault-examples-helper.sh script](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-examples-helper/vault-examples-helper.sh) to
print out the IP addresses of the Vault servers and some example commands you can run to interact with the cluster:
`../vault-examples-helper/vault-examples-helper.sh`.
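As an alternative to editing `vars.tf` directly, the same inputs can be supplied through a `terraform.tfvars` file; here is a sketch with placeholder values (replace them with your own):

```
# terraform.tfvars -- placeholder values for this example
ami_id            = "ami-0123456789abcdef0" # your Vault AMI; omit to fall back to a public example AMI
ssh_key_name      = "my-key-pair"
dynamo_table_name = "my-vault-table"
```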

To see how to connect to the Vault cluster, initialize it, and start reading and writing secrets, head over to the
[How do you use the Vault cluster?](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster#how-do-you-use-the-vault-cluster) docs.
74 changes: 74 additions & 0 deletions examples/vault-ddb-backend/main.tf
@@ -0,0 +1,74 @@
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY A VAULT SERVER CLUSTER WITH DYNAMODB BACKEND IN AWS
# This is an example of how to use the vault-cluster module to deploy a Vault cluster in AWS. The cluster uses a
# DynamoDB table (created within the vault-cluster module) as its storage backend.
# ---------------------------------------------------------------------------------------------------------------------

terraform {
required_version = ">= 0.9.3"
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE VAULT SERVER CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

module "vault_cluster" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "github.com/hashicorp/terraform-aws-consul.git//modules/vault-cluster?ref=v0.0.1"
source = "../../modules/vault-cluster"

cluster_name = "${var.vault_cluster_name}"
cluster_size = "${var.vault_cluster_size}"
instance_type = "${var.vault_instance_type}"

ami_id = "${var.ami_id}"
user_data = "${data.template_file.user_data_vault_cluster.rendered}"

enable_dynamo_backend = true
dynamo_table_name = "${var.dynamo_table_name}"

Collaborator: Nice API 👍

vpc_id = "${data.aws_vpc.default.id}"
subnet_ids = "${data.aws_subnet_ids.default.ids}"

# To make testing easier, we allow requests from any IP address here but in a production deployment, we *strongly*
# recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.

allowed_ssh_cidr_blocks = ["0.0.0.0/0"]
allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
allowed_inbound_security_group_ids = []
allowed_inbound_security_group_count = 0
ssh_key_name = "${var.ssh_key_name}"
}

# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH VAULT SERVER WHEN IT'S BOOTING
# This script will configure and start Vault
# ---------------------------------------------------------------------------------------------------------------------

data "template_file" "user_data_vault_cluster" {
template = "${file("${path.module}/user-data-vault.sh")}"

vars {
aws_region = "${data.aws_region.current.name}"
dynamo_table_name = "${var.dynamo_table_name}"
}
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTERS IN THE DEFAULT VPC AND AVAILABILITY ZONES
# Using the default VPC and subnets makes this example easy to run and test, but it means Consul and Vault are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------

data "aws_vpc" "default" {
default = "${var.vpc_id == "" ? true : false}"
id = "${var.vpc_id}"
}

data "aws_subnet_ids" "default" {
vpc_id = "${data.aws_vpc.default.id}"
}

data "aws_region" "current" {}
43 changes: 43 additions & 0 deletions examples/vault-ddb-backend/outputs.tf
@@ -0,0 +1,43 @@
output "asg_name_vault_cluster" {
value = "${module.vault_cluster.asg_name}"
}

output "launch_config_name_vault_cluster" {
value = "${module.vault_cluster.launch_config_name}"
}

output "iam_role_arn_vault_cluster" {
value = "${module.vault_cluster.iam_role_arn}"
}

output "iam_role_id_vault_cluster" {
value = "${module.vault_cluster.iam_role_id}"
}

output "security_group_id_vault_cluster" {
value = "${module.vault_cluster.security_group_id}"
}

output "aws_region" {
value = "${data.aws_region.current.name}"
}

output "vault_servers_cluster_tag_key" {
value = "${module.vault_cluster.cluster_tag_key}"
}

output "vault_servers_cluster_tag_value" {
value = "${module.vault_cluster.cluster_tag_value}"
}

output "ssh_key_name" {
value = "${var.ssh_key_name}"
}

output "vault_cluster_size" {
value = "${var.vault_cluster_size}"
}

output "dynamo_table_arn" {
value = "${module.vault_cluster.dynamo_table_arn}"
}
18 changes: 18 additions & 0 deletions examples/vault-ddb-backend/user-data-vault.sh
@@ -0,0 +1,18 @@
#!/bin/bash
# This script is meant to be run in the User Data of each EC2 Instance while it's booting. The script uses the
# run-vault script to configure and start Vault in server mode. Note that this script assumes it's running in an
# AMI built from the Packer template in examples/vault-consul-ami/vault-consul.json.

set -e

# Send the log output from this script to user-data.log, syslog, and the console
# From: https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

# The Packer template puts the TLS certs in these file paths
readonly VAULT_TLS_CERT_FILE="/opt/vault/tls/vault.crt.pem"
readonly VAULT_TLS_KEY_FILE="/opt/vault/tls/vault.key.pem"

# The variables below are filled in via Terraform interpolation
/opt/vault/bin/run-vault --tls-cert-file "$VAULT_TLS_CERT_FILE" --tls-key-file "$VAULT_TLS_KEY_FILE" --enable-dynamo-backend --dynamo-table "${dynamo_table_name}" --dynamo-region "${aws_region}"
51 changes: 51 additions & 0 deletions examples/vault-ddb-backend/variables.tf
@@ -0,0 +1,51 @@
# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------

# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
# AWS_DEFAULT_REGION

# ---------------------------------------------------------------------------------------------------------------------
# REQUIRED PARAMETERS
# You must provide a value for each of these parameters.
# ---------------------------------------------------------------------------------------------------------------------

variable "ami_id" {
description = "The ID of the AMI to run in the cluster. This should be an AMI built from the Packer template under examples/vault-consul-ami/vault-consul.json."
}

variable "ssh_key_name" {
description = "The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair."
}

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------

variable "vault_cluster_name" {
description = "What to name the Vault server cluster and all of its associated resources"
default = "vault-ddb-example"
}

variable "vault_cluster_size" {
description = "The number of Vault server nodes to deploy. We strongly recommend using 3 or 5."
default = 3
}

variable "vault_instance_type" {
description = "The type of EC2 Instance to run in the Vault ASG"
default = "t2.micro"
}

variable "vpc_id" {
description = "The ID of the VPC to deploy into. Leave an empty string to use the Default VPC in this region."
default = ""
}

variable "dynamo_table_name" {
description = "The name of the DynamoDB table to create and use as a storage backend. Note: Consul will not be configured as a storage backend."
default = "my-vault-table"
}
6 changes: 3 additions & 3 deletions examples/vault-s3-backend/main.tf
@@ -79,11 +79,11 @@ data "template_file" "user_data_vault_cluster" {
module "security_group_rules" {
source = "github.com/hashicorp/terraform-aws-consul.git//modules/consul-client-security-group-rules?ref=v0.3.3"

security_group_id = "${module.vault_cluster.security_group_id}"

# To make testing easier, we allow requests from any IP address here but in a production deployment, we *strongly*
# recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.

allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
}

@@ -146,4 +146,4 @@ data "aws_subnet_ids" "default" {
vpc_id = "${data.aws_vpc.default.id}"
}

data "aws_region" "current" {}
2 changes: 1 addition & 1 deletion examples/vault-s3-backend/outputs.tf
@@ -84,4 +84,4 @@ output "consul_cluster_cluster_tag_value" {

output "s3_bucket_arn" {
value = "${module.vault_cluster.s3_bucket_arn}"
}
2 changes: 1 addition & 1 deletion examples/vault-s3-backend/variables.tf
@@ -73,4 +73,4 @@ variable "s3_bucket_name" {
variable "force_destroy_s3_bucket" {
description = "If you set this to true, when you run terraform destroy, this tells Terraform to delete all the objects in the S3 bucket used for backend storage (if configured). You should NOT set this to true in production or you risk losing all your data! This property is only here so automated tests of this module can clean up after themselves."
default = false
}
6 changes: 3 additions & 3 deletions main.tf
@@ -117,11 +117,11 @@ data "template_file" "user_data_vault_cluster" {
module "security_group_rules" {
source = "github.com/hashicorp/terraform-aws-consul.git//modules/consul-client-security-group-rules?ref=v0.3.3"

security_group_id = "${module.vault_cluster.security_group_id}"

# To make testing easier, we allow requests from any IP address here but in a production deployment, we *strongly*
# recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.

allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
}

@@ -223,4 +223,4 @@ data "aws_subnet_ids" "default" {
tags = "${var.subnet_tags}"
}

data "aws_region" "current" {}
20 changes: 18 additions & 2 deletions modules/run-vault/README.md
@@ -63,22 +63,30 @@ The `run-vault` script accepts the following arguments:
* `--user` (optional): The user to run Vault as. Default is to use the owner of `config-dir`.
* `--skip-vault-config` (optional): If this flag is set, don't generate a Vault configuration file. This is useful if you
have a custom configuration file and don't want to use any of the default settings from `run-vault`.
* `--enable-s3-backend` (optional): Cannot be set with `--enable-dynamo-backend`. If this flag is set, an S3 backend will be enabled in addition to the HA Consul backend.
* `--s3-bucket` (optional): Specifies the S3 bucket to use to store Vault data. Only used if `--enable-s3-backend` is set.
* `--s3-bucket-region` (optional): Specifies the AWS region where `--s3-bucket` lives. Only used if `--enable-s3-backend` is set.
* `--enable-dynamo-backend` (optional): Cannot be set with `--enable-s3-backend`. If this flag is set, a DynamoDB backend will be enabled. Consul will __NOT__ be enabled as a backend.
* `--dynamo-table` (optional): Specifies the DynamoDB table to use to store Vault data. Only used if `--enable-dynamo-backend` is set.
* `--dynamo-region` (optional): Specifies the AWS region where `--dynamo-table` lives. Only used if `--enable-dynamo-backend` is set.

Example:

```
/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
```

If you want to enable an S3 backend:

```
/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem --enable-s3-backend --s3-bucket my-vault-bucket --s3-bucket-region us-east-1
```

Or if you want to enable a DynamoDB backend:

```
/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem --enable-dynamo-backend --dynamo-table my-dynamo-table --dynamo-region us-east-1
```


## Vault configuration
@@ -134,6 +142,14 @@ available.
* [region](https://www.vaultproject.io/docs/configuration/storage/s3.html#region): Set to the `--s3-bucket-region`
parameter.

* [storage](https://www.vaultproject.io/docs/configuration/index.html#storage): Set the `--enable-dynamo-backend` flag to
configure DynamoDB as the main (HA) storage backend for Vault:

* [table](https://www.vaultproject.io/docs/configuration/storage/dynamodb.html#table): Set to the `--dynamo-table`
parameter.
* [region](https://www.vaultproject.io/docs/configuration/storage/dynamodb.html#region): Set to the `--dynamo-region`
parameter.
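Taken together, the DynamoDB portion of the generated configuration would look roughly like this (a sketch; the values map to the `--dynamo-table` and `--dynamo-region` flags above and are illustrative):

```
# Sketch of the storage stanza run-vault renders for DynamoDB
storage "dynamodb" {
  ha_enabled = "true"
  table      = "my-dynamo-table" # from --dynamo-table
  region     = "us-east-1"       # from --dynamo-region
}
```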

### Overriding the configuration

To override the default configuration, simply put your own configuration file in the Vault config folder (default: