
Allow running provisioner on existing resource #745

Closed
teancom opened this issue Jan 6, 2015 · 21 comments
Labels
waiting-response An issue/pull request is waiting for a response from the community

Comments

@teancom

teancom commented Jan 6, 2015

There is currently no way to run a provisioning script on an existing resource. Adding provisioner sections to an existing (already provisioned) aws_instance is not something that terraform notices as a 'change', so the provisioner is not run during the next apply. The only way to run the provisioner is to destroy the instance and let terraform create it again, which may be non-optimal.

@mitchellh
Contributor

May I ask what your use case is for this?

Provisioners are meant as a way to bootstrap nodes. We made the design decision early on to not support update provisioners because it is rather complicated (what causes a "diff" in the provisioner? is it idempotent to re-run?).

@mitchellh mitchellh added the waiting-response An issue/pull request is waiting for a response from the community label Jan 6, 2015
@mitchellh
Contributor

To add to this: we saw no issue in not supporting this because Terraform is meant to create/destroy infrastructure components. The runtime management of these components should be the responsibility of Chef, Consul, etc.

@teancom
Author

teancom commented Jan 13, 2015

We've run into multiple cases where the provisioning script either doesn't run or doesn't run successfully, and/or we have existing machines that we want to run the 'bootstrap' on (which is heavily tied into terraform, using a bunch of variables that are pulled out of the terraform config). However, if the answer is "we don't want to support that", then that's the answer and we'll do the blow away/recreate/etc. dance 😄

@mitchellh
Contributor

@teancom So, if it doesn't run or doesn't run successfully, Terraform should mark the resource as "tainted" and automatically destroy/create on the next run. Are you not seeing this?

@colorfulgrayscale

I have a relevant question.

My provisioner (Ansible) pulls the latest code base and sets up my production environment. When I deploy new code, I just run my ansible script to refresh the prod servers.

How would that workflow fit in with Terraform? Would terraform apply re-run the provisioner for already-provisioned servers?

@kubek2k
Contributor

kubek2k commented Feb 23, 2015

would terraform apply re-run the provisioner for already provisioned servers?

Yes, as long as your state file is in sync with the provisioned servers.

@mitchellh
Contributor

Closing this, as I don't see an issue here. We also added terraform taint for forcing a recreate, which will also force provisioners to re-run if that is the behavior you want.
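
For reference, a minimal example of that workflow (the resource address here is illustrative):

# Mark the resource for recreation, then apply; provisioners run on the new instance.
terraform taint aws_instance.app
terraform apply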

@kubek2k
Contributor

kubek2k commented Mar 2, 2015

@mitchellh what if you have existing infrastructure that can't stop running? Or what if the existing infrastructure contains properties which are not Terraform-managed and have to be manually reset after recreation?

@mitchellh
Contributor

@kubek2k In those scenarios you'll have to run provisioners manually outside of TF.

@kubek2k
Contributor

kubek2k commented Mar 2, 2015

@mitchellh I wonder whether this doesn't hamper the adoption of Terraform for many.

@aspring

aspring commented Jul 16, 2015

@mitchellh I am wondering if this is worth re-evaluating with the addition of the chef provisioner as it addresses the idempotency concern above.

The reason I ask is our use case:

We are migrating away from IronFan (which manages the infrastructure lifecycle and is very tightly coupled to Chef), and are looking to have Terraform be the replacement, with its much looser coupling to its provisioners (Chef, in our case), while still allowing us to use a single source of truth to determine what is running on our infrastructure.

We are not able to achieve this single source of truth with provisioners that only run at creation, and building the tooling necessary to feed Terraform at the beginning of a resource's life and then manage the config until its end of life seems like a lot of unnecessary moving parts.

Would it be possible to entertain an opt-in type flag for the Chef provisioner that would allow it to re-run if certain attributes of the provisioner changed? Or is there another alternative/project available that anyone may know about?

@nordringrayhide

+1, it makes sense to have an alternative behaviour. I'm really surprised that I have to recreate an existing node when I've only changed my cookbook and need to apply it to the node. Terraform shouldn't need to compute the diff itself; the cookbook's resources have to handle that anyway.

@paulcdejean

-1, the implementation would be fundamentally insecure.

It's very common that you don't want the thingy that created your architecture to have root access to it.

@woodhull

@mitchellh here's our use case:

We use an AWS launch configuration with an auto scaling group, and we have Terraform set up to always create a new launch configuration and then update the auto scaling group.

When a new launch config is created, we'd like to run a script we've written that scales up the ASG to bring the new launch configuration into service, and then scales it back down to eliminate the old instances still running on the previous launch configuration.

Attaching the provisioner to the launch configuration doesn't work, since the ASG hasn't been updated yet at that point in the apply. It would be neat to attach our script to changes in the auto scaling group's launch configuration... but there is no way to do that at the moment.
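
One way to approximate this with the null_resource pattern that comes up later in this thread (a sketch in the same 0.x syntax; the resource names and script path are hypothetical):

resource "null_resource" "asg_cycle" {
  # Re-runs whenever the ASG points at a new launch configuration.
  triggers = {
    launch_configuration = "${aws_autoscaling_group.app.launch_configuration}"
  }

  provisioner "local-exec" {
    command = "./cycle-asg.sh"
  }
}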

@plainlystated

I have a similar use case:

I am doing something like:

resource "aws_instance" "app" {
  ... 
 provisioner "chef" {
    ...
  }
}

resource "aws_network_interface" "app-data" {
  attachment { instance = "${aws_instance.app.id}" }
  ...
}

The network_interface I'm creating gives the host access to a subnet that Chef needs (otherwise the run fails). I cannot figure out how to get the aws_network_interface applied before the instance provisions.
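
One workaround, sketched in the same syntax (it moves the Chef run onto a null_resource so it can depend on the interface; the connection details are assumptions, not from the snippet above):

resource "null_resource" "app-provision" {
  # Waits until the network interface has been attached to the instance.
  depends_on = ["aws_network_interface.app-data"]

  connection {
    host = "${aws_instance.app.public_ip}"
    user = "ubuntu"
  }

  provisioner "chef" {
    ...
  }
}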

@desmondmorris

@woodhull did you ever figure out a solution for this? I suppose you could use user data for it.

@woodhull

woodhull commented Feb 8, 2016

We wrap all Terraform execution in a custom Ruby script that performs this and many other tasks before and after every Terraform run.

@cohenaj194

You can see the solution to this issue here:

http://stackoverflow.com/questions/37865979/terraform-how-to-run-the-provisioner-on-existing-resources

And also here:

http://stackoverflow.com/questions/37823770/terraform-stalls-while-trying-to-get-ip-addresses-of-multiple-instances

To run commands on resources that have already been created, you need to create a resource "null_resource" "nameYouWant" { } block and run your commands inside it. For example:

resource "aws_instance" "consul" {
  count = 3
  ami = "ami-ce5a9fa3"
  instance_type = "t2.micro"
  key_name = "ansible_aws"
  tags {
    Name = "consul"
  }
}

resource "null_resource" "configure-consul-ips" {
  count = 3

  connection {
    # A null_resource has no host of its own; point the connection at the
    # matching instance (using the public IP is an assumption here).
    host        = "${element(aws_instance.consul.*.public_ip, count.index)}"
    user        = "ubuntu"
    private_key = "${file("/home/ubuntu/.ssh/id_rsa")}"
    agent       = true
    timeout     = "3m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y curl",
      "sudo echo '${join("\n", aws_instance.consul.*.private_ip)}' > /home/ubuntu/test.txt"
    ]
  }
}

A big thanks to @ydaetskcor for the solution. http://stackoverflow.com/users/2291321/ydaetskcor
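
One limitation worth noting: a null_resource like the one above runs its provisioners only once. Adding a triggers map (a standard null_resource argument) makes it re-run whenever the listed values change; a sketch in the same syntax, keyed on the instance IPs:

  triggers = {
    # Re-provision whenever the set of consul instance IPs changes.
    consul_private_ips = "${join(",", aws_instance.consul.*.private_ip)}"
  }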

@mlushpenko

@cohenaj194, I am using the same solution. I use null_resource quite a few times in my Ansible + Terraform setup, whenever I need to copy something, create a dynamic inventory for Ansible, etc.

The issue for me is re-running null_resources on changes. For instance, I have a step that adds EC2 instance IPs to a group in the inventory file:

# Add web hosts to the web group
resource "null_resource" "ansible_inventory_hosts" {
  count = "${var.aws_instances_count}"

  provisioner "local-exec" {
    command = "sed -i '/\\[${element(aws_instance.web.*.tags.group, count.index)}\\]/ a ${element(aws_instance.web.*.private_ip, count.index)}' ansible/inventory"
  }

  depends_on = ["null_resource.ansible_inventory_groups"]
}

So, whenever I change the number of instances, my inventory will be updated. Now I need to copy this new inventory to the Ansible "master" host, so I use null_resource again:

resource "null_resource" "ansible_copy" {
  provisioner "file" {
    source = "ansible/"
    destination = "/home/ubuntu"
  }

  connection {
    user        = "ubuntu"
    private_key = "key.pem"
    host                = "${aws_instance.ansible.public_ip}"
  }

  depends_on = ["null_resource.ansible_inventory_hosts"]
}

but it doesn't appear during the plan phase. One solution I found is to monitor the number of instances by adding a trigger to the null_resource:

triggers = {
  copy_files_on_inventory_change = "${var.aws_instances_count}"
}

But I also have the Ansible files themselves (role definitions) that are not related to infrastructure changes. Do you have an idea how to track local file changes? How do I trigger the null_resource if I update a role, or how do I make the resource run every time?

@mitchellh, considering this case, do you think it would make sense to add something like a "tainted: always" option to null_resource? I am new to Terraform, but I feel like the link between Terraform and configuration management tooling is missing a bit.
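
Two sketches of what the existing triggers map can already do in this 0.x syntax (the role file path is illustrative, not from this thread): hash the files you care about so the resource re-runs when they change, or use uuid() to force a replacement on every apply:

triggers = {
  # Re-run when the role definition file changes (path is an example).
  role_definitions = "${md5(file("ansible/roles/web/tasks/main.yml"))}"
  # Forces the null_resource to be replaced on every apply.
  always_run = "${uuid()}"
}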

@cornfeedhobo

@mlushpenko this is a pretty old issue; you'll probably have better luck opening a new one that is specific to null_resource.

@ghost

ghost commented Apr 23, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 23, 2020