
Salt Terraform Provider


A Terraform provider serving as an interop layer for a Terraform roster module for Salt (not upstream yet).

This provider is derived from and inspired by terraform-provider-ansible. Read the introductory blog post for an explanation of the design motivations behind the original ansible provider.


Builds for openSUSE, CentOS, Ubuntu, and Fedora are created with openSUSE's Open Build Service (OBS). The build definitions are available for both the stable and master branches.

Using published binaries/builds

Using packages

Follow the instructions for your distribution:

Building from source

This project uses glide to vendor all its dependencies.

You do not have to interact with glide since the vendored packages are already included in the repo.

Ensure you have the latest version of Go installed on your system; Terraform usually takes advantage of features available only in the latest stable release.

go get
cd $GOPATH/src/
go install

You will now find the binary at $GOPATH/bin/terraform-provider-salt.


Copied from the Terraform documentation:

To install a plugin, put the binary somewhere on your filesystem, then configure Terraform to be able to find it. The configuration where plugins are defined is ~/.terraformrc for Unix-like systems and %APPDATA%/terraform.rc for Windows.
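A minimal ~/.terraformrc sketch using the legacy providers block described above (the binary path is an assumption; point it at wherever go install placed the provider):

```hcl
providers {
  salt = "/home/user/go/bin/terraform-provider-salt"
}
```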

Using the provider

Terraform Configuration Example

resource "libvirt_domain" "domain" {
  name   = "domain-${count.index}"
  memory = 1024
  count  = 2

  disk {
    volume_id = "${element(libvirt_volume.volume.*.id, count.index)}"
  }

  network_interface {
    network_name   = "default"
    hostname       = "minion${count.index}"
    wait_for_lease = 1
  }

  cloudinit = "${libvirt_cloudinit.common_init.id}"
}

resource "salt_host" "example" {
  host = "${libvirt_domain.domain.network_interface.0.addresses.0}"
}

Setting up Salt

The goal is to create a self-contained folder where you will store both the terraform file describing the infrastructure and the Salt states to configure them.

├── etc
│   └── salt
│       ├── master
│       └── pki
│           └── master
│               └── ssh
│                   ├── salt-ssh.rsa
│                   └── salt-ssh.rsa.pub
├── Saltfile
└── srv
    ├── pillar
    │   ├── terraform.sls
    │   └── top.sls
    └── salt
        ├── master
        │   └── init.sls
        ├── minion
        │   └── init.sls
        ├── minion-ssh
        │   └── init.sls
        └── top.sls
  • As Salt will create several files once you run it, make sure your .gitignore is good enough to avoid checking in generated files:
  • Saltfile should point salt to the local folder configuration:

salt-ssh:
  config_dir: etc/salt
  max_procs: 30
  wipe_ssh: True
  • etc/salt/master should let salt-ssh know that the states and pillar are also stored in the same folder, and should enable the terraform roster:

root_dir: .
file_roots:
  base:
    - srv/salt
pillar_roots:
  base:
    - srv/pillar
roster: terraform

NOTE: The roster module may not be upstream yet.

Giving salt-ssh access to Terraform resources via SSH

Salt by default uses the keys in etc/salt/pki/master/ssh. You can pre-generate those with ssh-keygen.
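For example, a key pair matching the folder layout shown above could be generated like this (the salt-ssh.rsa file name follows that layout; adjust if your setup differs):

```shell
# Create the directory layout and generate the key pair for salt-ssh.
# Paths follow the self-contained folder structure shown earlier.
mkdir -p etc/salt/pki/master/ssh
ssh-keygen -t rsa -b 4096 -N '' -f etc/salt/pki/master/ssh/salt-ssh.rsa
```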

For this, you can use something like cloud-init to pre-configure your Terraform resources to pre-authorize the salt-ssh key. With terraform-provider-libvirt you can achieve this by using a cloud-init resource:

resource "libvirt_cloudinit" "common_init" {
  name      = "test-init.iso"
  user_data = <<EOF
disable_root: 0
ssh_pwauth:   1
users:
  - name: root
    ssh_authorized_keys:
      - ${file("etc/salt/pki/master/ssh/salt-ssh.rsa.pub")}
EOF
}

And then referencing this resource from each virtual machine:

  cloudinit = "${libvirt_cloudinit.common_init.id}"

For AWS resources, you can pass the cloud-init configuration using the user_data argument.
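A sketch of that with an aws_instance (the AMI ID, instance type, and key path here are illustrative assumptions, not values from this project):

```hcl
resource "aws_instance" "minion" {
  ami           = "ami-00000000" # placeholder; pick an AMI for your region
  instance_type = "t2.micro"

  # Same cloud-init payload as the libvirt example: authorize the salt-ssh key.
  user_data = <<EOF
#cloud-config
disable_root: 0
ssh_pwauth:   1
users:
  - name: root
    ssh_authorized_keys:
      - ${file("etc/salt/pki/master/ssh/salt-ssh.rsa.pub")}
EOF
}
```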

Passing information from Terraform to Salt via Pillar

Sometimes you need to use infrastructure data in the Salt states, for example the number of resources of a certain type, or the IP address of some resource. You can put such data into the pillar.

We would like to add some pillar integration at the resource level later. For now you can use local_file resources to write pillar sls files:

resource "local_file" "pillar_database_cluster" {
  filename = "${path.module}/srv/pillar/terraform_database_cluster.sls"
  content  = <<EOF
database_master_ip: ${libvirt_domain.domain.network_interface.0.addresses.0}
EOF
}

Then include this pillar in the virtual machines that should receive it by editing srv/pillar/top.sls:

base:
  '*':
    - terraform_database_cluster
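The pillar value can then be consumed from a state. A hypothetical srv/salt/minion/init.sls sketch (the target file path and config format are illustrative):

```yaml
# Render the Terraform-provided IP into a config file on each minion.
/etc/myapp/database.conf:
  file.managed:
    - makedirs: True
    - contents: |
        db_host = {{ pillar['database_master_ip'] }}
```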

As this pillar file is generated, make sure you include it in .gitignore:

srv/pillar/terraform_database_cluster.sls

See more advanced examples.

Testing that everything works

If everything is in place, you can start managing the resources with Salt:

salt-ssh '*' test.ping

You can also run salt-ssh '*' pillar.items to check the machines receive the right pillar data, and salt-ssh '*' state.apply to apply the state.


See also the list of contributors who participated in this project.

This provider is derived/forked from terraform-provider-ansible.


Contributions specific to this project are made available under the Mozilla Public License.

Code under the vendor/ directory is copyright of the various package owners, and made available under their own license considerations.
