Terraform configurations for creating test infrastructure (#185)
This change adds Terraform configurations for creating and destroying compute resources for testing on AWS and Azure. The configurations install ZooKeeper, Hadoop, Accumulo, and Accumulo-Testing. Users can either specify the versions of the software to install or supply their own binary tarballs. See the README for detailed documentation.

Co-authored-by: Brian Loss <brianloss@gmail.com>
Co-authored-by: domgarguilo <dominic.garguilo@gmail.com>
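Below is a hedged sketch of how those version options might be supplied through an auto-loaded tfvars file; the variable names and version numbers are placeholders rather than values taken from this commit, so consult the Variables section of the README for the real ones.

cd aws
cat >> aws.auto.tfvars <<'EOF'
# hypothetical variable names and versions, for illustration only
accumulo_version  = "2.1.0"
hadoop_version    = "3.3.1"
zookeeper_version = "3.7.0"
EOF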
Showing 40 changed files with 6,060 additions and 0 deletions.

@@ -0,0 +1,7 @@
**/.terraform.lock.hcl
**/.terraform/
conf/
**/terraform.tfstate
**/terraform.tfstate.backup
**/*.auto.tfvars.json
**/*.auto.tfvars

@@ -0,0 +1,74 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

1. Download and Install Terraform

wget https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_amd64.zip
sudo unzip terraform_1.1.5_linux_amd64.zip -d /usr/local/bin
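
A quick way to confirm the install, assuming /usr/local/bin is on your PATH (this verification step is a suggestion, not part of this commit):

terraform version   # should report Terraform v1.1.5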

2. Create the Shared State

NOTE: You only need to do this once. If you are sharing the cluster with a team,
then only one person needs to do it and they need to share the bucket with
the other team members.

cd shared_state/aws
terraform init
terraform apply
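
The apply step creates the bucket that holds the shared Terraform state. A hedged follow-up, assuming the shared_state configuration exposes the bucket name as an output (not confirmed in this excerpt), is to record that name for the init command in step 4:

terraform output   # note the bucket name for the --backend-config flag used later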

3. Create the Configuration

You will need to create a configuration file that provides values for the
variables that do not have a default value; see the Variables section of
the README. For example, you can create a file named "aws.auto.tfvars" in
the aws directory with the following content (replace the values as appropriate):

create_route53_records = "true"
private_network        = "true"
accumulo_root_password = "secret"
security_group         = "sg-ABCDEF001"
route53_zone           = "some.domain.com"
us_east_1b_subnet      = "subnet-ABCDEF123"
us_east_1e_subnet      = "subnet-ABCDEF124"
ami_owner              = "000000000001"
ami_name_pattern       = "MY_AMI_*"
authorized_ssh_keys = [
  "ssh-rsa .... user1",
  "ssh-rsa .... user2",
  "ssh-rsa .... user3"
]

4. Create the Resources

cd aws

Initialize Terraform with the name of the shared state bucket created in
step 2, then apply the configuration to create the resources:

terraform init --backend-config=bucket=<bucket-name-goes-here>
terraform apply
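
When the test infrastructure is no longer needed, the same configuration can tear it down again (standard Terraform usage rather than a step spelled out in this excerpt):

terraform destroy   # removes the compute resources created by apply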

5. Accessing the Cluster

The output of the apply step above will include the IP addresses of the
resources that were created. If everything was created correctly, you should
be able to ssh to the nodes using "ssh hadoop@ip". If you created DNS records
for the nodes, then you should be able to ssh using those names as well. You
should also be able to access the web pages (see the "Accessing Web Pages"
section of the README for the ports).