Support the count parameter for modules #953

Closed
markmartirosian opened this issue Feb 9, 2015 · 46 comments

Comments

@markmartirosian commented Feb 9, 2015

First of all I wanted to thank all the contributors, and especially HashiCorp, for doing great work. Thank you!

Currently, Terraform modules do not support the count parameter. It would be a nice addition, encouraging a DRYer approach to writing modules.

Errors:
   * module root: module somemodule: count is not a valid parameter
@markmartirosian changed the title from "Support the count parameter for modules." to "Support the count parameter for modules" on Feb 9, 2015
@mitchellh (Member) commented Feb 17, 2015

Can you please share a use case for this?

@mitchellh (Member) commented Mar 2, 2015

Closing due to lack of response.

@mitchellh closed this Mar 2, 2015
@blakeneyops commented Mar 18, 2015

I have a use case where this would be useful.

I am working on some templates to generically model an AWS VPC. I have created a module to model my default AZ configuration. As various regions support different numbers of AZs, I want the ability to specify the number of AZs to be provisioned for any given set of input parameters.

Example input parameters would be as follows:

variable "aws_vpc_tag" {
    default = "prod"
}
variable "vpc_cidr_block" {
    default = "10.120.0.0/22"
}
variable "region" {
    default = "us-east-1"
}
variable "az_count" {
    default = "4"
}
variable "az" {
    default = {
        "0" = "a"
        "1" = "b"
        "2" = "d"
        "3" = "e"
    }
}
variable "dmz_cidr_block" {
    default = {
        "0" = "10.120.0.0/25"
        "1" = "10.120.0.128/25"
        "2" = "10.120.1.0/25"
        "3" = "10.120.1.128/25"
    }
}
variable "lan_cidr_block" {
    default = {
        "0" = "10.120.2.0/25"
        "1" = "10.120.2.128/25"
        "2" = "10.120.3.0/25"
        "3" = "10.120.3.128/25"
    }
}
variable "instance_type" {}
variable "nat_ami" {}
variable "key_name" {}

The reason for the AZ map is that I have found that different AWS accounts will actually report different AZs as being VPC ready.

What I planned to do was the following:

module "az_conf" {
    source = "../templates/aws/vpc/az"
    count = "${var.az_count}"

    # Resource tags
    vpc_tag = "${var.aws_vpc_tag}"
    count_tag = "${count.index}"

    # VPC parameters
    vpc_id = "${module.vpc_conf.vpc_id}"
    igw_id = "${module.vpc_conf.igw_id}"

    region = "${var.region}"
    az = "${lookup(var.az, count.index)}"

    dmz_cidr_block = "${lookup(var.dmz_cidr_block, count.index)}"
    lan_cidr_block = "${lookup(var.lan_cidr_block, count.index)}"

    # EC2 parameters
    instance_type = "${var.instance_type}"
    nat_ami = "${var.nat_ami}"
    key_name = "${var.key_name}"
}

What I had to do instead for each AZ is more along the lines of:

module "az0" {
    source = "../templates/aws/vpc/az"

    # Resource tags
    vpc_tag = "${var.aws_vpc_tag}"
    count_tag = "0"

    # VPC parameters
    vpc_id = "${module.vpc_conf.vpc_id}"
    igw_id = "${module.vpc_conf.igw_id}"

    region = "${var.region}"
    az = "${lookup(var.az, 0)}"

    dmz_cidr_block = "${lookup(var.dmz_cidr_block, 0)}"
    lan_cidr_block = "${lookup(var.lan_cidr_block, 0)}"

    # EC2 parameters
    instance_type = "${var.instance_type}"
    nat_ami = "${var.nat_ami}"
    key_name = "${var.key_name}"
}

The alternative I reviewed was using count within the module itself, but it did not appear that there was a way to pass maps in as module input and I want to keep the modules generic with use case specific data in the calling template.

Let me know if you need any additional information or have any questions. Thanks.

@rcostanzo commented Mar 19, 2015

I hit this same problem today too. I'm not sure it's really related to this issue's title, so maybe it should move to its own issue, but I would also love to be able to pass a map into a module.

@johnrengelman (Contributor) commented Apr 10, 2015

I have the same use case for wanting to use count on a module. Basically I am creating the network topology, and I want to wrap that up in a module and then iterate over AZs.

@jwthomp commented May 15, 2015

I am also hitting this issue. The use case is using the Docker provider to launch Swarm agents across a list of nodes. I have put the Swarm agent setup in a module so that the Docker provider can be configured dynamically based on a list of nodes.

module "agent" {
  source           = "./agent"
  docker_ip        = "${lookup(var.docker_ip, count.index)}"
  docker_port      = "${lookup(var.docker_port, count.index)}"
  swarm_version    = "${var.swarm_version}"
  swarm_token      = "${var.swarm_token}"
  count            = 3
}


@maartensl commented May 28, 2015

This is definitely useful.
I'm hitting a case where I have one core resource used by every module. We have a resource encapsulated in a module, and the module then gets called numerous times to create the necessary resource (a VM), with variables to adjust memory, CPU, IP, etc.

For scaling purposes we want to invoke the module in exactly the way explained here:
http://www.terraform.io/docs/configuration/resources.html#using-variables-with-count
Since the resource is encapsulated by the module, it wouldn't make sense to add the count at the resource level - it would break our abstraction model.

@phinze (Member) commented Jul 15, 2015

Lots of decent use cases reported here - reopening and we'll get this supported.

@kisamoto commented Nov 26, 2015

The use case is multiple ASG modules, auto-configured using Puppet based on a sequence of variables passed through. At the moment I have to have a module reference for each ASG; 90% of the variables are the same, copy-pasted, with just the 2 or 3 that define the type of box modified.

If I could use a count lookup instead (e.g. below) that would be awesome.

main.tf

module "server" {
  source = "./server"
  count = "${var.servers.count}"

  constA = "some default"
  constB = "another default"
  varA = "${lookup(var.servers, count.index)}"
}

variables.tf

variable "servers" {
  description = "provides lookup of what the server should be configured as"
  default = {
    "count" = 3
    "0" = "web"
    "1" = "db"
    "2" = "lb"
  }
}

@kbxkb commented Nov 30, 2015

Another use case, for Azure: when creating multiple Azure VMs belonging to the same Cloud Service (which becomes possible if we resolve #3568, and I am in the process of resolving it), we may also want to add data disks to each of these VMs (which becomes possible if #3428 is fixed, and I am in the process of fixing it). Currently, adding the data disks has to be a serial operation, each addition gated on the previous one, because Azure gives a client a cloud-service-level lock on all resources when adding a data disk.

Hence, the loop needs to look like this:

create a cloud service
loop {
  add a VM to the above cloud service
  add a data disk to the VM
}

Hence, the need to loop over multiple resources.

Hence, the need to loop over a module.

Thanks!
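
If modules supported count, that loop might look roughly like this (a hypothetical sketch; the module path and variable names are made up):

```hcl
# Hypothetical: each module instance bundles one VM plus its data disk,
# so the VM/disk pair is repeated together.
module "vm_with_disk" {
  source        = "./vm-with-disk"        # hypothetical module
  count         = "${var.vm_count}"
  cloud_service = "${var.cloud_service}"  # the shared cloud service
  index         = "${count.index}"
}
```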

@mitchellh (Member) commented Mar 21, 2016

I started working on this feature, and we do have it planned. We reprioritized it lower, though, when I realized that the internals of Terraform need to be refactored quite a bit to support this properly. On the surface it seems like a simple "just duplicate it N times" sort of thing, but internally it's quite a bit more complex, since we'd have to support multiple variables and outputs. And that is the main kicker from a core perspective: supporting module.foo[0].output is hard, and configuring the graph so that var.foo within a module points to the proper instance is hard.

We're going to do this, certainly, but we had a number of plans for 0.7 and we reprioritized the rest above this. Namely: better state management CLIs. I'm working on those now.

@clintonm9 commented May 10, 2016

Would like it to work with AzureRM. We currently use a PowerShell script that we can pass the number of data disks we would like:

storage_data_disk {
  count         = 2
  name          = "mydatadisk${count.index}"
  vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.data.name}/mydatadisk${count.index}.vhd"
  disk_size_gb  = 1023
  create_option = "Empty"
  lun           = "${count.index}"
}

@blakesmith (Contributor) commented Jun 9, 2016

We're seriously contemplating adding a template generation step to our TF build in order to DRY this code up. I liken this to the need in Chef code to write your own Custom Providers. Our use case is to encapsulate our server building standards, especially because I don't want every single one of our teams to have to think about subnet configurations, AMI ids (we don't use immutable deploys with Packer), VM sizing standards, or Chef bootstrapping. We basically need to DRY up this:

resource "aws_instance" "sprout-server" {
    ami = "${lookup(var.aws_amis, var.default_os)}"
    instance_type = "${lookup("${var.ec2_types}", "${var.ds_ec2_instance_type}")}"
    tags = {
        Name = "${var.instance_name}"
    }
    subnet_id = "${aws_subnet.ds_subnet_1a_private.id}"
    key_name = "${var.ssh_key_name}"
    vpc_security_group_ids =  [ "${aws_security_group.ds_secgroup_default.id}" ]
    connection {
      user = "${var.ssh_user}"
      private_key = "${file("${var.ssh_priv_key_path}")}"
    }
    # Bootstrap 
    provisioner "chef" {
      environment = "${lookup(var.chef_server, "environment")}"
      run_list = ["recipe[bootstrap]", "recipe[first-run]", "role[base-aws]"]
      node_name = "${var.instance_name}"
      ohai_hints = [ "lib/ec2.json" ]
      secret_key = "${file("${lookup(var.chef_server, "secret_key_path")}")}"
      server_url = "${lookup(var.chef_server, "url")}"
      validation_client_name = "${lookup(var.chef_server, "validation_client_name")}"
      validation_key = "${file("${lookup(var.chef_server, "validation_key_path")}")}"
      version = "${lookup(var.chef_server, "version")}"
    }
    # Remove bootstrap recipe
    provisioner "local-exec" {
      command = "knife node run_list remove ${var.instance_name} \"recipe[bootstrap],recipe[sprout_toolbox::check_network]\""
    }
    # Add iptables entry, DNS, reverse DNS and send email notifications
    provisioner "local-exec" {
      command = "./lib/post_bootstrap --name ${var.instance_name} --ip_addr ${self.private_ip}"
    }

}

And encapsulate as much boilerplate as we can, and wrap it behind something that looks like a top-level resource itself:

Either:

module "sprout_aws_server" "some_server_name" {
  name = "web-server"
  region = "us-east-1"
}

Or something that looks like a resource itself directly:

resource "sprout_aws_server" "some_server_name" {
  name = "web-server"
  region = "us-east-1"
}

I liken this more to a function call than to a module, but we attempted to abuse modules to encapsulate this. We even considered writing our own custom resource that just calls other resources, but that requires a level of development effort that doesn't seem worth it in most cases.

@blakesmith (Contributor) commented Jun 10, 2016

Following up on my previous comment: we're going to follow some of the module patterns laid out in this blog post to DRY up our server resources.

@simple-guy commented Jun 10, 2016

My use case is the creation of VPCs across regions, with code like this:

variable "regions"      { default = "us-west-1,ap-southeast-2" }

module "vpc" {
    source = "./vpc"
    count = "${length(split(",", var.regions))}"
    region = "${element(split(",", var.regions), count.index)}"
    [..]
}

@serdardalgic commented Jun 17, 2016

We also have a similar use case: we want to create a Mongo replica set, and for each replica set two different modules are called, the arbiter module (once) and the mongod module (twice), in exactly the same way, just with different counts. We want to have 3 replica sets, so what we need is:

  • Calling the replica set module 3 times
  • In each replica set, calling the mongod module 2 times

So, another +1
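
Sketched with hypothetical module paths (and assuming count were accepted on module blocks), the desired shape would be something like:

```hcl
# Top level: three replica sets.
module "replica_set" {
  source = "./replica-set"   # hypothetical path
  count  = 3
  set_id = "${count.index}"
}

# Inside the replica-set module: one arbiter and two mongod instances.
module "arbiter" {
  source = "../arbiter"      # hypothetical path
  set_id = "${var.set_id}"
}

module "mongod" {
  source = "../mongod"       # hypothetical path
  count  = 2
  set_id = "${var.set_id}"
}
```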

@ndimiduk commented Jun 17, 2016

I'd like to encapsulate into a module the creation of a host along with forward and reverse DNS entries for it, and then create N of these hosts. With count being per-resource, I cannot reference instance i in order to grab its private IP for forward/reverse record i.

+1
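
The shape of such a module might look like the following sketch (hypothetical variable names; the calling side assumes count were supported on modules):

```hcl
# Inside a hypothetical host-with-dns module: the instance's private IP
# feeds the DNS record, so host i and record i always line up.
resource "aws_instance" "host" {
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"
}

resource "aws_route53_record" "forward" {
  zone_id = "${var.zone_id}"
  name    = "${var.hostname}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.host.private_ip}"]
}

# Calling side, once module count is supported:
module "host" {
  source   = "./host-with-dns"   # hypothetical path
  count    = "${var.host_count}"
  hostname = "host-${count.index}"
}
```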

@apparentlymart (Member) commented Sep 9, 2016

The excitement is totally warranted and appreciated here folks, but know that this is already known to be a very useful feature, and it is the implementation effort rather than any lack of excitement that is holding it back, so further votes and plus-ones are not necessary. 😀

@dunctait commented Jan 11, 2017

I am also looking to pass a variable into a module with the number of storage disks I want to create on an Azure RM VM, as well as a list with the sizes of the storage disks, like so:

storage_data_disk {
  count         = "${var.storage_disk_count}"
  name          = "datadisk${count.index}"
  vhd_uri       = "${var.vhd_uri}/datadisk${count.index}.vhd"
  disk_size_gb  = "${element(var.storage_disk_gb_sizes, count.index)}"
  create_option = "Empty"
  lun           = 0
}

@LHCGreg commented Feb 2, 2017

For my use case I only need to support a count of 0 or 1. I would like to be able to bundle a bunch of related resources into a module (for example an RDS instance, an EC2 instance, and related security groups), and then have the option of turning off the entire bundle based on environment, to save costs on the staging environment. My options right now are to use a custom_count parameter like @jonatanblue said and stick it on every resource in the module, or to use a preprocessor.
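
That per-resource workaround can be sketched like this (the enabled variable name is hypothetical): a flag is passed into the module and repeated as the count on every resource inside it.

```hcl
# Hypothetical "enabled" flag, passed into the module by the caller.
variable "enabled" {
  description = "Set to 0 to turn off every resource in this module"
  default     = 1
}

# The flag has to be repeated on every resource in the module.
resource "aws_db_instance" "db" {
  count = "${var.enabled}"
  # ... instance arguments elided ...
}

resource "aws_instance" "app" {
  count = "${var.enabled}"
  # ... instance arguments elided ...
}
```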

@mootpt (Contributor) commented Feb 7, 2017

Having count will also allow one to work around #1439 in certain cases. For instance, in my case I want to pass an environment to a module and have it use the submodule with the correct path based on the value:

variable "environment" {
  default = "preprod"
}

variable "region" {
  default = "us-west-2"
}

module "prod" {
  count  = "${var.environment == "prod" ? 1 : 0}" 
  source = "prod"
  region = "${var.region}"
}

module "preprod" {
  count  = "${var.environment == "preprod" ? 1 : 0}"
  source = "preprod"
  region = "${var.region}"
}

@roeera commented Mar 20, 2017

Could someone from the Terraform team update us? It's nice that we can request a new feature (one that from my perspective is a must for modules, since without it modules totally miss the target), but how can we be sure that the Terraform team is really taking this into consideration?
I'd appreciate a comment.
Roee.

@mitchellh (Member) commented Mar 20, 2017

Hey @roeera, indeed! We want to build this and simply haven't had the time. All the core work to support it is now in place, so it'd be much easier (still not easy, but easier) for someone to come along and work on this. As a core team we haven't had the time to make it happen, since there are lots of issues like this where folks really want something. :) When we pick one, another inevitably gets left out.

It's still something we'd like to do. We'll either get to it in time, or we'd love to see someone from the community approach it as well.

@mitchellh (Member) commented Mar 20, 2017

A time estimate would of course be person-dependent. In terms of the type of work: it is fairly significant core work, so for someone unfamiliar with TF core it would be difficult. I would recommend fixing a simpler bug+core or enhancement+core issue first before tackling this. In particular, adding module counts would require modifying the config package (easy), updating the graph generation (steep learning curve), and potentially updating graph evaluation (steep learning curve).

When I last looked at it, the major complications actually came from variables/outputs and not from duplicating the resources themselves. The last time I took a look was almost a year ago though at this point.

@sysadmiral commented Apr 12, 2017

I know it's been running a while, but going back to @mitchellh's comment (#953 (comment)), I thought I would share my use case/desired use case.

I would like to be able to do this:

module "foo" {
  source = "bar"
  count   = "${var.include_module ? 1 : 0}"
}

I will add that as a workaround I can do this within the module by adding a conditional count to every resource that allows count, but this can leave things lingering around where count is not supported. It works fine for very simple modules, though.

Something I am writing a module for now is a three-tiered app: frontend > elasticache > api.

It would be nice to be able to add/remove the elasticache layer conditionally by setting a var to true.

@hamstah commented Apr 12, 2017

Hi,

Another use case I have is bootstrapping a new account from scratch. I have a module for different clients, with a full environment set up in it.

Each account needs to be identical so they all use the same module with a different configuration.

My problem is that the module has dependencies on resources that don't exist yet, like AMIs, but I need to create a VPC and security group first to be able to use Packer to generate the AMIs.

The solutions I have are

Solution 1

  • Have the parts that are dependent on AMIs in a different module
  • Add the first module with only VPC/IAM in my client folder, run terraform
  • Run packer to generate the AMIs
  • Now add the other modules with the resources depending on AMIs and run again

Problem

  • I do not want my top level client folders to be a combination of many modules as I have now to make sure they all stay in sync.

or

Solution 2

  • First run with dummy AMIs
  • Generate AMIs with packer
  • Replace the AMIs and re-run

Problem

  • This is wasteful if instances with the dummy AMIs are actually launched
  • Only works for AMIs

Now, if I had a way to conditionally include modules (using count), I could have a common client module with each submodule having a variable count (defaulting to 1). Bootstrapping then becomes much simpler; I just do:

# client module

variable "with_amis" {
  default = 1
}

module "submodule_using_amis" {
   count = "${var.with_amis ? 1 : 0}"
   ....
}


# for each client
module "clienta" {
  source = "modules/client-account"
  with_amis =  0
}

Apply, then run Packer, then just remove the with_amis = 0 line and apply again.
Once done with bootstrapping, every client folder is using the same module and is in sync.

@dohoangkhiem commented Apr 21, 2017

I have this use case

resource "aws_db_instance" "mydb_instance" {
  allocated_storage    = "${var.storage}"
  storage_type         = "${var.storage_type}"
  engine               = "${var.engine}"
  engine_version       = "${var.engine_version}"
  instance_class       = "${var.db_instance_type}"
  identifier           = "${var.db_instance_name}"
  db_subnet_group_name = "${var.db_subnet_group_name}"
  parameter_group_name = "${var.parameter_group_name}"
  username             = "${var.username}"
  password             = "${var.password}"
  name                 = "mydb1"

  skip_final_snapshot  = true
}

provider "postgresql" {
  alias           = "mydb_postgres"
  host            = "${aws_db_instance.mydb_instance.address}"
  port            = 5432
  username        = "${var.username}"
  password        = "${var.password}"
  connect_timeout = 15
}

resource "postgresql_database" "mydb2" {
  provider = "postgresql.mydb_postgres"
  name = "mydb2"
}

I would have multiple DB instances like the above. If I use count in aws_db_instance, then I can't use it with the provider to alias each provider or to set the correct host address using count.index. Any suggestions would be appreciated.

@scriptjs commented May 13, 2017

I tried achieving a count without modifications to Terraform, and it seems to work. I created a module to enable creation of instances for different tenants and regions in OpenStack. Rather than using count on the module, I created a custom count variable called instance_count, which I use within the module as the value I provide to count to create multiple instances. i.e.

count = "${var.instance_count}"

So I am not certain Terraform needs anything special to accomplish this.

To create and associate block devices, security groups, and floating IPs, and to add DNS records, I am using count with the existing element/list syntax and the * splat, together with a naming scheme that matches resources by key to the incremented count value. It seems to work fine for plan, apply and destroy.

Just thought I would pass this on, as it seems to work and might be a reasonable workaround.
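
A minimal sketch of that workaround, with hypothetical variable names: the count lives inside the module, and related resources are matched up by index using element and the splat syntax.

```hcl
variable "instance_count" {
  default = 2
}

resource "openstack_compute_instance_v2" "node" {
  count = "${var.instance_count}"
  name  = "${var.tenant}-node-${count.index}"
  # ... image, flavor, network elided ...
}

resource "openstack_networking_floatingip_v2" "fip" {
  count = "${var.instance_count}"
  pool  = "${var.floating_ip_pool}"
}

# Match floating IP i to instance i by index.
resource "openstack_compute_floatingip_associate_v2" "fip" {
  count       = "${var.instance_count}"
  floating_ip = "${element(openstack_networking_floatingip_v2.fip.*.address, count.index)}"
  instance_id = "${element(openstack_compute_instance_v2.node.*.id, count.index)}"
}
```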

@naftulikay commented May 24, 2017

I am attempting to build a series of Redis Cluster modules, redis_cluster and redis_cluster_shard. My entire design hinged on the ability to do this:

module "redis_cluster_shard" {
  count = "${var.primary_count}"

  source = "../redis_cluster_shard"

  # parameters
  shard_id           = "${count.index + 1}"
  replica_count      = "${var.replica_count_per_master}"
  primary_subnet_id  = "${var.subnet_ids[count.index % length(var.subnet_ids)]}"
  # start replicas in same subnet and then next subnets in order
  replica_subnet_ids = ["${concat(
    slice(var.subnet_ids, count.index % length(var.subnet_ids), length(var.subnet_ids)),
    slice(var.subnet_ids, 0, count.index % length(var.subnet_ids))
  )]}"
}

In this way, I could spin up and down an arbitrary number of primaries, each with an arbitrary number of replicas. Lacking count parameter support on modules completely derails this work, and I've lost a day plus the time it will take me to revert it all.

@igoratencompass commented May 24, 2017

As pointed out by sysadmiral above, just supporting the conditional include/exclude of a module:

module "foo" {
  source = "bar"
  count   = "${var.include_module ? 1 : 0}"
}

in a main tf file would be a great feature to have.

@cemo commented May 26, 2017

@mitchellh any news on this for 0.10? :-) You have been quiet for a long time, and our expectations are growing :-)

@apparentlymart (Member) commented May 26, 2017

I'm not Mitchell, but I think we can say with some certainty that this will not be in 0.10, since prototyping/design work has barely begun. This is a big change. You can see in #13855 that @Pryz was working through some different approaches, and indeed it remains unclear what the best approach is at this point.

This is a pretty key thing to making modules truly reusable though, so it's still on the radar.

@spa-87 commented Jun 8, 2017

Hello, here is our use case where this feature would help:
We have a multi-region AWS-based infrastructure, and in every such region the configuration is pretty much the same. We have one module (with submodules) for managing the configuration of those regions, but right now we have to call this module, with all its required parameters, 9 times.
A count-based loop over modules would let us shrink our root.tf file to roughly a ninth of its size!
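
Assuming count were supported on modules, the nine near-identical calls could collapse into something like this sketch (hypothetical paths and variable names):

```hcl
variable "regions" {
  default = ["us-east-1", "us-west-2", "eu-west-1"]  # hypothetical subset
}

module "region" {
  source = "./region"   # hypothetical path
  count  = "${length(var.regions)}"
  region = "${element(var.regions, count.index)}"
}
```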

@polliard-jmfe commented Jun 9, 2017

Similar use case to the one specified above. We need it for conditional inclusion of a module in the calling tf:
module "foo" { source = "bar" count = "${var.include_module ? 1 : 0}" }

@derBroBro (Contributor) commented Jun 14, 2017

We also need it, for large scale account provisioning.
The code would look similar to the following example:

module "account"{
  source = "account"
  name   = "${var.name[count.index]}"
  id     = "${var.id[count.index]}"
  vpc    = "${var.vpc[count.index]}"
  ...
  count  = "${length(var.name)}"
}

@jtopper (Contributor) commented Jun 19, 2017

This is a key missing feature for us at this point too. It even came up during a presentation at Hashidays London (hai @nickithewatt). Is there any view on when this might make it onto a priority list? We appreciate that this is a serious lump of work, but at this stage making the language more expressive and improving re-use is really important to us.

@talkdirty commented Jun 26, 2017

Just to bump this issue: I just began using Terraform and ran into this almost immediately. I think not having this feature rules out a lot of cool use cases. I'd love for this feature to be in the next release!

@apparentlymart (Member) commented Jun 26, 2017

Hi all,

Thanks for the great discussion here, and sorry for the apparent lack of movement. As others have guessed, this is a pretty radical change to Terraform's internals, so we've prototyped a few approaches so far and haven't yet landed on something that works well. It is something we want to solve, but we need to figure out how to solve it with as little impact as possible on other features.

I appreciate everyone sharing their use cases; this is always helpful when making design tradeoffs. I'm going to close this discussion now just because it seems like we have good coverage of use cases, and there are lots of people following this issue, so ongoing commentary is creating noise for them. We'll share more info here when we have it. In the meantime, I'm afraid I need to ask for everyone's patience while we figure this out.

@hashicorp hashicorp locked and limited conversation to collaborators Jun 26, 2017
@apparentlymart added config and removed core labels Aug 1, 2018
@apparentlymart (Member) commented Jan 24, 2020

Hi all,

This older issue only covered count because for_each had not been proposed yet when it was opened, but count and for_each on modules boil down to the same root problem, so in order to consolidate the discussion and let us share updates more efficiently, I'm going to close this issue in favor of #17519.
