
Setup EC2


Set up EC2 instances

This stage will walk you through setting up some new EC2 instances to run BTrDB and the Berkeley Smart Grid Store.

Preconditions

This assumes that you already have a VPC created, or are using the default VPC. Please consult the Amazon EC2 VPC documentation if you are unsure.

We also assume, if you are setting up the plotter, that you have purchased a domain name and have it configured in Route 53. Please consult the Amazon Route 53 documentation for assistance in setting this up.

Select an AMI

On the Amazon AWS EC2 control panel, select "Instances" and click "Launch Instance". At the time of writing, Ubuntu 16.04 does not appear on the initial screen of available operating systems, so we need to search for it. Select "Community AMIs" on the left menu, and select the Operating System -> Ubuntu, Architecture -> 64 bit, and Root device type -> EBS filters, as shown below:

Then type "16.04 hvm" into the search bar. There will still be a few results left, and they have different dates (their names end in a date code). Try to pick a newer image, although this is not too important, as the quickstart installer will ensure your image is up to date in step 3.
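If you prefer to script this lookup instead of searching in the console, the sketch below uses boto3 (the Python AWS SDK) to list matching Ubuntu 16.04 HVM/EBS images and pick the newest one. The region and Canonical's owner ID (099720109477) are assumptions; verify them for your account before relying on the result.

```python
# Sketch: finding a recent Ubuntu 16.04 HVM/EBS AMI with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # pick your own region

resp = ec2.describe_images(
    Owners=["099720109477"],  # Canonical's account ID (assumption: verify against the Ubuntu AMI locator)
    Filters=[
        {"Name": "name", "Values": ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]},
        {"Name": "root-device-type", "Values": ["ebs"]},
        {"Name": "virtualization-type", "Values": ["hvm"]},
        {"Name": "architecture", "Values": ["x86_64"]},
    ],
)

# The newest image sorts last; its name ends in the same date code you would
# see in the console search results.
images = sorted(resp["Images"], key=lambda img: img["CreationDate"])
print(images[-1]["ImageId"], images[-1]["Name"])
```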

Select an instance type

Next, you will select the instance type to run. A full overview of the different EC2 instance types is outside the scope of this guide, but we recommend picking an instance with at least 16 GB of RAM. For technical support and advice on provisioning, please email the BTrDB mailing list (btrdb@googlegroup.com).

Configure instance details

This installer assumes you are launching three or more instances, as this allows you to use Ceph without any additional configuration. Choose how many instances you are launching here; we will note where you need to deviate from the instructions later if you choose fewer than three.

Please ensure that "Auto-assign Public IP" is set to Enable. Depending on your VPC this may default to Disable.

NOTE: If you are following this guide to set up a production cluster that will receive significant data from PMUs, you may wish to create each instance individually rather than all three at once, as the storage requirements for the nodes will differ slightly. This is not particularly relevant for a development or testing cluster.

On this page, you may want to take a note of your VPC subnet so that you can create the appropriate firewall rule for it later.
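For reference, a rough boto3 equivalent of this step is sketched below: it launches three identical instances in a subnet with a public IP auto-assigned. The AMI ID, instance type, subnet ID, and key pair name are placeholders, not values prescribed by this guide.

```python
# Sketch: launching three instances with "Auto-assign Public IP" enabled.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # the Ubuntu 16.04 AMI chosen above (placeholder)
    InstanceType="m4.xlarge",     # example only; any type with >= 16 GB of RAM
    MinCount=3,
    MaxCount=3,
    KeyName="btrdb-cluster",      # hypothetical key pair name
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-xxxxxxxx",       # your VPC subnet (placeholder)
        "AssociatePublicIpAddress": True,    # the console's "Auto-assign Public IP: Enable"
    }],
)
for inst in resp["Instances"]:
    print(inst["InstanceId"])
```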

Configure storage

BTrDB 3.x requires some storage on the root device of the cluster node that will run MongoDB; this is where data from synchrophasors is staged. If you are planning on receiving significant synchrophasor data, please size the root partition of your first node at a few hundred GB (again, please email the mailing list for exact capacity planning information). If you are setting this up as a development cluster that won't receive more than a few weeks of PMU data, you can simply use 100 GB for the root partition of all nodes.

Then, for Ceph, create a few drives that will be used as OSDs. Ceph will work better with at least two drives per node, so create multiple smaller drives rather than one big drive per node. The size of these drives will depend on how much data you intend to store in BTrDB.
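If you are scripting the launch, this storage layout can be expressed as block device mappings passed to the run_instances call shown earlier. The sketch below is illustrative only: the device names assume the usual Ubuntu EBS layout, and the sizes (100 GB root, two 500 GB OSD volumes) are example figures, not capacity-planning advice.

```python
# Sketch: block device mappings for one node -- a root volume plus two data
# volumes that will later become Ceph OSDs. All sizes are illustrative.
block_device_mappings = [
    {   # root device; device name assumes the standard Ubuntu EBS layout
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp2", "DeleteOnTermination": True},
    },
    {   # first OSD drive
        "DeviceName": "/dev/sdb",
        "Ebs": {"VolumeSize": 500, "VolumeType": "gp2", "DeleteOnTermination": True},
    },
    {   # second OSD drive
        "DeviceName": "/dev/sdc",
        "Ebs": {"VolumeSize": 500, "VolumeType": "gp2", "DeleteOnTermination": True},
    },
]
# Pass this as BlockDeviceMappings=block_device_mappings to ec2.run_instances(...).
```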

Tagging and security

Feel free to tag your nodes with a descriptive name; the software does not rely on this field.

Then you need to configure your security group, as shown below.

The most important rule is the one that allows all traffic within your VPC subnet's IP block. In the above image it is 10.0.0.0/8, but it could be one of many different private IP blocks such as 172.x or 192.168.x. Double check that this is correct: without this rule, your servers will not be able to talk to each other. Then you need to configure the public-facing ports.

BTrDB uses ports 4410 and 9000 to provide its API. The plotter uses HTTP (80) and HTTPS (443). The daemon that receives data from the synchrophasors uses port 1883. SSH (22) is required for managing the cluster. Note that the security group can be updated at runtime later.

In a production setup, you may want to limit access to ports 4410 and 9000 to known IP addresses, rather than allowing all IP addresses to access them.
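The same rules can also be created programmatically. The boto3 sketch below builds a security group with an internal all-traffic rule for the VPC block plus the public TCP ports listed above; the group name, VPC ID, and CIDR values are placeholders that you should replace with your own.

```python
# Sketch: creating the security group rules described above with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

sg = ec2.create_security_group(
    GroupName="btrdb-cluster",                     # hypothetical group name
    Description="BTrDB / Smart Grid Store cluster",
    VpcId="vpc-xxxxxxxx",                          # your VPC (placeholder)
)
sg_id = sg["GroupId"]

# Internal rule: all traffic within the VPC subnet's IP block.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}],
)

# Public-facing TCP ports: BTrDB API (4410, 9000), plotter (80, 443),
# synchrophasor ingest (1883), and SSH (22).
for port in (4410, 9000, 80, 443, 1883, 22):
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
```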

Launching

Review the summary screen and click "Launch". You will then be prompted to create a new key pair and download the private key file. The installer will require this key in the next step.

While your servers are launching, click through each instance and note its Private IP and Public IP, as indicated below.

You will need all of these IPs in Stage 2. Also pick one of the three servers to be your "master" and make a note of which IPs belong to it.
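Rather than clicking through each instance, you can also pull the private and public IPs with boto3, as in the sketch below. It simply lists every pending or running instance in the region; filter further by tag if you have other instances there.

```python
# Sketch: collecting the private and public IPs of the launched instances.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["pending", "running"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"],
              inst.get("PrivateIpAddress", "-"),
              inst.get("PublicIpAddress", "-"))
```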

Creating your DNS record

If you are not going to set up the plotter, you can skip this step.

Select Services -> Route 53. Select a hosted zone (refer to AWS documentation if you have not set one up) and click "Create Record Set" to create a DNS entry for your master server. In the picture below, we have bound "plotter.410soda.rocks" to the external IP of the master node.
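If you prefer to create the record programmatically, the boto3 sketch below performs the equivalent UPSERT; the hosted zone ID, record name, and IP address are placeholders.

```python
# Sketch: creating the plotter A record via the Route 53 API.
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z1XXXXXXXXXXXX",    # your hosted zone ID (placeholder)
    ChangeBatch={
        "Comment": "Plotter DNS entry pointing at the master node",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "plotter.example.com",   # e.g. plotter.410soda.rocks
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # master node public IP
            },
        }],
    },
)
```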

What's next

Now that you have servers set up, the next step is Setting up the quickstart environment.