
Provide a way to trigger a re-provision of a resource #3193

Closed
sethvargo opened this issue Sep 8, 2015 · 10 comments

Comments

@sethvargo
Contributor

In an ideal world, provisioners are inherent to the immutable resource (AMI), and changing a provisioner would mean simply replacing the AMI entirely.

While this is certainly the ideal scenario, there is a much more common case where there is a desire to provision (or re-provision) one or more instances. This is especially helpful for development and iteration, where waiting for an entire cluster to taint and recreate itself is time-consuming and not an ideal workflow.

Notes:

  • Idempotency is the job of the script itself; Terraform will just re-run the provisioners as if everything were fresh
  • Should not support provisioner targeting; all provisioner scripts on the resource are run in the order specified
  • Should support resource targeting, including splat (terraform provision aws_instance.web.*)

I would be happy to flesh out the API a bit more, but I wanted to open a ticket for early discussion before going too far.

/cc @phinze @catsby

@phinze
Contributor

phinze commented Sep 17, 2015

Yeah this makes sense. We should support the common workflow of iterating on provisioners, and this feature seems like a relatively simple way to do it. Tagged.

@apparentlymart
Member

In certain cases it's possible to "fake" this using null_resource:

resource "aws_instance" "foo" {
    // ...
}

resource "null_resource" "baz" {
    connection {
        user = "ubuntu"
        private_key = "..."
        host = "${aws_instance.foo.private_ip}"
    }
    provisioner "remote-exec" {
        // ... etc
    }
}

With this in place, one can taint null_resource.baz to get that provisioner to re-run on the next apply without rebuilding the instance.

It's also possible to add a triggers attribute to the null_resource so that it will re-run automatically when certain attributes change. At work we are currently using this to run consul join on our Consul cluster each time the set of all Consul server IP addresses changes, so rebuilding a single server will automatically add the replacement server to the cluster.
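
A minimal sketch of that pattern, assuming a counted aws_instance.consul_server resource (the resource names, connection details, and the exact join command are illustrative, not our real configuration):

resource "null_resource" "consul_membership" {
    # Re-run whenever the set of Consul server IPs changes.
    triggers {
        consul_server_ips = "${join(",", aws_instance.consul_server.*.private_ip)}"
    }

    connection {
        user        = "ubuntu"
        private_key = "..."
        host        = "${aws_instance.consul_server.0.private_ip}"
    }

    provisioner "remote-exec" {
        inline = [
            "consul join ${join(" ", aws_instance.consul_server.*.private_ip)}",
        ]
    }
}

Tainting null_resource.consul_membership still works for a manual re-run, but the triggers map makes the re-run automatic whenever the address list changes.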

@ghost

ghost commented Apr 9, 2016

As per @apparentlymart's suggestion, here are my use-case details:

I'm using Salt (in a masterless configuration) to provision a node at runtime with a few remote-exec provisioners (used to bootstrap Salt, create new directories, and tell Salt to look for the state tree in the local file system). There are also a number of file directives whose purpose is to create new directories and copy files onto the node, such as a minion config and a top.sls, as well as a number of init.sls files. Salt then applies all declared states, which include installing nginx and a number of PHP and database-related packages, as well as managing a number of files, symlinks, etc.

Currently, when I commit a change to my server's configuration or need to install new software, I have to destroy the whole infrastructure and then apply the new plan. It doesn't matter if what I want to change is just a single line, in nginx.conf for instance; I still need to destroy the whole thing. It would be great if there were an equivalent to vagrant provision.
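
For reference, a stripped-down sketch of that kind of setup, with made-up paths and a hypothetical bootstrap script; every provisioner hangs directly off the instance, so it only runs when the instance is created:

resource "aws_instance" "web" {
  # ...

  connection {
    # ... ssh details ...
  }

  # Copy the Salt state tree onto the node.
  provisioner "file" {
    source      = "salt/states/"
    destination = "/srv/salt"
  }

  provisioner "remote-exec" {
    # bootstrap-salt.sh is a hypothetical install script uploaded separately.
    inline = [
      "sudo sh /tmp/bootstrap-salt.sh",
      "sudo salt-call --local state.highstate",
    ]
  }
}

Because these provisioners only run at creation time, changing anything in the state tree means recreating the instance.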

@passcod

passcod commented Sep 27, 2016

@pierrebonbon Wouldn't it make more sense, in this case, to use a master*ful* Salt setup together with the null_resource pattern above, using triggers to run a Salt highstate targeted at the changed machine?
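
Roughly something like this, where the resource names, var.salt_master_ip, and targeting the minion by instance ID are all placeholders for the sketch:

resource "null_resource" "highstate" {
  # Re-run the targeted highstate whenever the instance is replaced.
  triggers {
    instance_id = "${aws_instance.web.id}"
  }

  # Run the command from the Salt master, not from the minion itself.
  connection {
    user = "ubuntu"
    host = "${var.salt_master_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo salt '${aws_instance.web.id}' state.highstate",
    ]
  }
}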

@tomwganem

+1 on this. I use Chef to provision all of my VMs, and occasionally the provision step will fail, which ultimately means that Terraform will list the resource as tainted and will need to destroy and re-create it. A huge time waster.

@ColOfAbRiX

Absolutely. I understand the resources should be immutable, but providing a solution for debugging and development purposes would be extremely useful.

@korotovsky
Contributor

korotovsky commented May 9, 2017

Hi,

I implemented an http data source in this PR. We use it to fetch the current version of the Ansible playbooks for a particular microservice, and null_resource + triggers to provision them. Here is an example:

data "http" "example" {
  url = "https://checkpoint-api.hashicorp.com/v1/check/terraform"

  # Optional request headers
  request_headers {
    "Accept" = "application/json"
  }
}

resource "aws_instance" "ec2" {
  # ...
}

resource "null_resource" "ec2-provisioner" {
  triggers {
    version = "${data.http.example.body}"
  }

  provisioner "remote-exec" {
    connection {
      # ...
    }

    inline = [
      "ansible-playbook -i inventory playbook.yml --extra-vars 'foo=bar'",
    ]
  }
}

So, in the end, Terraform will trigger Ansible only when the metadata at https://checkpoint-api.hashicorp.com/v1/check/terraform has changed.

Or, for a "development mode", you could use the version = "${timestamp()}" approach.
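
That is, as a rough sketch:

resource "null_resource" "ec2-provisioner" {
  triggers {
    # timestamp() yields a new value on every run, so the provisioner re-runs on every apply.
    version = "${timestamp()}"
  }

  # ... same connection and provisioner as above ...
}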

@jimmycuadra
Contributor

@sethvargo Why was this closed?

bmcustodio pushed a commit to bmcustodio/terraform that referenced this issue Sep 26, 2017
@raphink
Contributor

raphink commented Jan 9, 2019

I still believe that a terraform provision -target <some.target> command to relaunch provisioning of a resource would be a great addition to Terraform, and would get rid of null_resource hacks and their side effects…

@ghost

ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

hashicorp locked and limited conversation to collaborators Mar 30, 2020