
Notes on Terraform with Azure

Peter Burkholder (@pburkholder) edited this page Dec 23, 2016

Getting started w/ Azure & Terraform

Terraform does not yet (as of 23 Dec 2016) work with USGov: https://github.com/hashicorp/terraform/issues/10895

The Terraform azurerm provider page at https://www.terraform.io/docs/providers/azurerm/ buries the lede: setting up a service principal for Azure with the CLI is dead simple:

  • install az-cli (note: it won't work unless you have a version newer than v0.1.0b11, as the USGov endpoints were incorrect before that)
  • Use az configure to set up for work with USGov, or edit ~/.azure/context_config/default to contain:
[context]
#cloud = AzureCloud
cloud = AzureUSGovernment
  • authenticate: az login
  • create a "Contributor" service principal: az ad sp create-for-rbac --role="Contributor", which generate something like:
{
  "appId": "b7b3e331-e95c-4847-b0ee-7022a1e7c1ee",
  "name": "http://azure-cli-2016-12-22-17-43-01",
  "password": "be608339-f306-49c6-a652-7a4ef7389d10",
  "tenant": "bfdb5536-2a6f-4d1a-a390-658ee677f796"
}
  • create a something.tf file with:
provider "azurerm" {
  subscription_id = "(your subscription id)"
  tenant_id = "(the `tenant` field from above JSON)"
  client_id = "(the `appId` field from above JSON)"
  client_secret = "(the `password` field from above JSON)"
}
  • Testing (a Terraform smoke test is also sketched just after this list):
    • az group create -l usgovvirginia -n MyRG
    • az group delete -n MyRG
  • Deleting the service principal: az ad app delete --id (client GUID)
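
A minimal sketch to confirm the provider credentials work end-to-end (the resource and group names here are placeholders, and the USGov limitation noted at the top of this page still applies): add a resource group to something.tf, then run terraform plan and terraform apply.

# Smoke test: an empty resource group that is cheap to create and destroy
resource "azurerm_resource_group" "smoke_test" {
  name     = "my-terraform-smoke-test"
  location = "usgovvirginia"
}

terraform destroy removes it again once you're satisfied the provider is wired up correctly.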

A few tips

az is buggy, but has nicer features (like create-for-rbac). I couldn't get az login to use my correct Azure account, so I used azure login (from azure-xplat-cli) to log in to the correct account, then used the az commands with that authentication.

Terraform with Azure and storing state

From Slack on 13 Dec 2016:


Peter Burkholder (DCA, he, 🚴🏼) [10:48 AM]
We were having a conversation about Terraform in our #ts-tsa-cloud channel, and we're wondering if there are accepted patterns for managing the TF state file at 18F. For those of you wondering what I mean,

[10:49]
When you spin up infra w/ Terraform, you'll end up with a .tfstate file that encodes all of the infrastructure you just spun up.

[10:50]
So if you want to modify the infra later, terraform can plan around the existing state, especially if parts of it are not fully captured by API calls to the provider

[10:51]
To collaborate on managing that collection of infra, the team needs to be able to share that state in a rational way.

[10:52]
Terraform's Atlas product is supposed to make that possible

[10:52]
It may be a completely solved problem, but I thought I'd ask.

[10:52]
(I'm about a year behind on my knowledge of Terraform internals)

Clint Troxel (JAC, ⛷) [10:58 AM]
Last time I looked, storing in S3 seemed recommended. But now I see people using Vault to encrypt locally. Wonders if anyone has experience using Vault like this

M. Adam Kendall (Salem VA) [10:58 AM]
@peterb we use S3 storage on cloud.gov

Peter Burkholder (DCA, he, 🚴🏼) [10:59 AM]
@clint by locally, do you mean stored distributed across a Vault within your infra?

Clint Troxel (JAC, ⛷) [11:00 AM]
I ran across something where users were using some Vault “encryption as a service” (I guess so people can “share”), and then just checking the file into source control.

[11:01]
Well, I mis-remember/mis-understand. It's the "Transit" backend, which I don't really get: https://www.vaultproject.io/docs/secrets/transit/

M. Adam Kendall (Salem VA) [11:01 AM]
@peterb for ref our terraform apply script that sets up our remote state: https://github.com/18F/cg-pipeline-tasks/blob/master/terraform-apply.sh#L37

Clint Troxel (JAC, ⛷) [11:02 AM]
Oh, @adamkendall here’s the article I’m remembering: https://opencredo.com/securing-terraform-state-with-vault/

Jez Humble (DC, he) [11:34 AM]
S3 is the standard way to store tfstate if you’re terraforming AWS

[11:34]
https://www.terraform.io/docs/state/remote/s3.html
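
For reference, on the Terraform releases current at the time of writing (0.8.x) the S3 remote state is wired up from the CLI rather than in .tf code. A hedged sketch, with the bucket and key names made up:

terraform remote config \
  -backend=s3 \
  -backend-config="bucket=my-tfstate-bucket" \
  -backend-config="key=staging/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="encrypt=true"

This is roughly what the cg-pipeline-tasks terraform-apply.sh script linked above does before running plan/apply.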

[11:35]
there is an Azure remote state backend too: https://www.terraform.io/docs/state/remote/azure.html
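
On later Terraform releases (0.9 and up) the Azure blob-storage backend is configured in code instead. A rough sketch, with the storage account, container, and blob key names all made up:

terraform {
  backend "azurerm" {
    storage_account_name = "mytfstatestorage"
    container_name       = "tfstate"
    key                  = "staging.terraform.tfstate"
    access_key           = "(storage account access key, or set ARM_ACCESS_KEY instead)"
  }
}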

[snip ...]

Peter Burkholder (DCA, he, 🚴🏼) [12:34 PM]
Charity has some good posts on Terraform, e.g.: https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/

[snip]

M. Adam Kendall (Salem VA) [12:42 PM]
@peterb btw, this is why we have our terraform split into modules, and stacks (our envs) https://github.com/18F/cg-provision/tree/master/terraform
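
The split Adam describes boils down to shared modules plus one thin "stack" per environment that instantiates them, each stack with its own state. A hypothetical sketch (module path and variable names invented for illustration, not the actual cg-provision layout):

# stacks/staging/stack.tf -- one stack per environment, each with its own remote state key
module "vpc" {
  source     = "../../modules/vpc"
  stack_name = "staging"
  vpc_cidr   = "10.10.0.0/16"
}

Keeping one state per stack also gives you the per-environment tfstate isolation from the charity.wtf post above.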

[snip]

Jay Huie (DC) [12:43 PM]
@peterb - Re: TFState — Jez’s pattern was to keep it in S3 and let the CI/CD pull the trigger on it

Tim Spencer [12:58 PM]
re: terraform: Another good practice that seems obvious, but that many folks don't end up doing, is to make all environments as standalone as possible. Sharing things (like chef-server, DNS) between environments seems like a good thing to do, but it means you have to hardcode stuff on both ends so that you can access resources in other environments, it provides a path for attackers to move laterally between environments, and it makes it harder to give full control of an environment to people you can't also give production access to.

Tim Spencer [1:04 PM]
Something that seems to be working in login.gov is to make use of the AWS VPC internal dns stuff so that internal things can all talk to each other using the same names in each environment (like chef.login.gov.internal and elk.login.gov.internal), and to have all the infrastructure built out in each environment with code (elk, jenkins, chef, so far).
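
One way to get that per-environment internal DNS with Terraform (a sketch only; the zone name, VPC, and instance references are invented, and login.gov's actual setup may differ) is a private Route 53 zone attached to each environment's VPC:

# Private hosted zone, resolvable only from inside this environment's VPC
resource "aws_route53_zone" "internal" {
  name   = "login.gov.internal"
  vpc_id = "${aws_vpc.main.id}"
}

# The same record name exists in every environment, pointing at that environment's host
resource "aws_route53_record" "chef" {
  zone_id = "${aws_route53_zone.internal.zone_id}"
  name    = "chef.login.gov.internal"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.chef.private_ip}"]
}

Because each environment owns its own zone, chef.login.gov.internal resolves to the right host in every environment without any cross-environment wiring.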

Tim Spencer [1:11 PM]
Not sure if that helps or not, but I’m really liking making everything standalone. In my previous gig, I had all sorts of problems because we had a centralized chef-server for all environments, and it meant we needed to write all sorts of administrative tools that constrained people’s behaviors so that they were approved types of changes, signed off on, etc. It was a pain to build/manage all that tooling, but we had to, because FEDRAMP and SOC2 both required us to have controls on who could make changes to the production environments, yet we still wanted engineers to be able to make changes to their dev/qa environments. I would much rather us control this kind of access on a per-environment basis, so that we can use the tools in an uninhibited fashion.

M. Adam Kendall (Salem VA) [1:23 PM]
@tspencer cloud.gov handles the FedRAMP aspects by controlling who can commit/merge to our GitHub repos, and who has access to our CI/CD server. No manual deployments. If it doesn’t come through git and CI/CD server, it doesn’t go out.

Tim Spencer [1:28 PM]
Right. We are moving towards that too. I’ve got our CI/CD servers set up in each environment, and we will be only allowing changes from them, and we control access to those servers on a per-environment basis. We aren’t quite there yet, but it’s clearly the right direction to go.

Peter Burkholder (DCA, he, 🚴🏼) [2:03 PM]
g-devops is a great place to work! Thanks - I'd not brought up the idea of standalone environments, but this helps me better articulate the arguments around that.

Bret Mogilefsky (SF | he/him) [2:35 PM]
The nice thing about doing things this way is that you can meet compliance requirements simply by controlling who has access to which branches in GitHub, setting branches to be protected and require review, etc. That’s saved us enormous effort on cloud.gov, and is a huge leg up on the change-control review boards, etc. that FISMA seems to have been designed around.

[2:35]
From GitHub out, it’s robots all the way down.