
Add support for multiple tags and host labels for cloud auto-join #6946

Open
tommyalatalo opened this issue Dec 13, 2019 · 5 comments
Labels
type/enhancement Proposed improvement or new feature

Comments

@tommyalatalo

Feature Description

The cloud auto-join feature would really benefit from supporting setting multiple tags and host labels to allow for more fine-grained filtering of the nodes to auto-join to.

Use Case(s)

Let's say I have multiple clusters (A, B, C) deployed in the same GCE project, and all the Consul servers in those clusters are tagged with "auto-join" so they can easily be targeted with the retry_join setting.

The problem is that if I try to auto-join using

"retry_join": ["provider=gce project_name=myProject tag_value=auto-join"]

I will get all the nodes from clusters A, B, and C, since all the servers carry the tag, but I only want the nodes from one of them.

Proposed solution
Support filtering on host labels and multiple tags:

"retry_join": ["provider=gce project_name=myProject tag_value=auto-join tag_value=uniqueTag label=uniqueLabel"]

This would make it possible to select the right nodes even when multiple clusters share the same 'auto-join' tag.
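To spell out the intended semantics: each additional filter would intersect (AND) with the previous ones rather than union. A minimal sketch in plain shell, with invented address lists standing in for the per-tag discovery results:

```shell
# Hypothetical per-tag discovery results (one address per line);
# both lists are made up purely for illustration.
matches_auto_join='10.0.0.1
10.0.0.2
10.0.0.3'
matches_unique_tag='10.0.0.2
10.0.0.3
10.0.0.4'

# AND semantics: keep only addresses present in both result sets.
# grep -Fx treats each line of the pattern argument as an exact-match pattern.
intersection=$(printf '%s\n' "$matches_auto_join" | grep -Fx "$matches_unique_tag")
printf '%s\n' "$intersection"
```

With the lists above, only 10.0.0.2 and 10.0.0.3 survive, which is exactly the "one cluster out of several sharing a tag" selection this issue asks for.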

@crhino crhino added the type/enhancement Proposed improvement or new feature label Dec 17, 2019
@crhino
Contributor

crhino commented Dec 17, 2019

Hi @tommyalatalo, that seems like a reasonable feature request. In terms of workarounds right now, is it possible to set the tag_value to a concatenation like auto-join-uniqueTag?

@tommyalatalo
Author

tommyalatalo commented Dec 19, 2019

> Hi @tommyalatalo, that seems like a reasonable feature request. In terms of workarounds right now, is it possible to set the tag_value to a concatenation like auto-join-uniqueTag?

In theory yes, it's a workaround, but it requires setting a unique auto-join tag in every individual Terraform manifest I use. Not really ideal, but workable for now.

@igoratencompass

igoratencompass commented Apr 24, 2020

I would also find this feature useful; it's good to narrow down the discovery radius and speed up the server join. In my case discovery returns dozens of Consul instances in the same region (since no unique tag is set, not having planned for this) that are not even in the same VPC. I have one Consul cluster per VPC, and each VPC has only private subnets, so cross-VPC communication is not possible, but still...

I would like to be able to set something like:

tag_key=key1 tag_value=val1 tag_key=key2 tag_value=val2

Alternatively, refactoring the logic to work the way awscli does, for example:

tag:key=key1,value=val1 tag:key=key2,value=val2

might be a better option.
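Whatever spelling wins, the parsing is straightforward. A rough shell sketch (purely illustrative, not Consul's or go-discover's actual parser) mapping the awscli-style pairs above onto EC2 describe-instances filter strings:

```shell
# Illustrative only: translate "tag:key=K,value=V" specs into
# EC2 describe-instances filter strings, one per line.
to_ec2_filters() {
  for spec in "$@"; do
    key=${spec#tag:key=}   # drop the "tag:key=" prefix
    key=${key%%,*}         # keep text up to the first comma
    val=${spec##*,value=}  # keep text after ",value="
    printf 'Name=tag:%s,Values=%s\n' "$key" "$val"
  done
}

to_ec2_filters "tag:key=key1,value=val1" "tag:key=key2,value=val2"
```

Passing several such filters to the EC2 API already gives AND semantics, so the hard part is only agreeing on the retry_join syntax.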

Ah, and another thing that really bothers me... why was this feature not extended to the agent in client mode? These days everything is dynamic, so if I roll over the server instances a new cluster will form and all my clients are basically broken 😞

@neurostream

neurostream commented Nov 3, 2020

Many of the shops I work with stripe Consul environments across regions by VPC, since they aren't in a place where they can automate the creation of separate AWS orgs/accounts for each environment (environment as in dev, test, stage, prod, etc., i.e. a set of nodes in the same .consul RAFT/SWIM domain). In these cases, the scope of ec2:DescribeInstances in one region inadvertently allows, for example, nodes from EnvironmentA to discover and (attempt to) join EnvironmentB, given a simple auto-join tag scheme like "NodeRole=ConsulServer"; so it would be helpful to match on a second tag (or a list of tags) like "EnvironmentName=EnvironmentA".

In the meantime, if the joining nodes can't be told which environment they're in, one could try using an IAM policy to limit the ec2:DescribeInstances resource scope by VPC (like arn:aws:ec2:region:account-id:vpc/vpc-id-for-environment), but that would mean creating a new policy for each environment. If the joining nodes can know which environment they're in, then one could try concatenating the unique part (like an EnvironmentName in this example) with the tag value identifying the Consul servers in that environment (as @crhino suggested as a workaround above), but that would mean creating a new tag value for each Consul server environment in that region.

Curious if this enhancement label has inched this issue off the back-burner at all? Would love to see this.

@dnephin
Contributor

dnephin commented Nov 4, 2020

Thank you to everyone who has commented on this issue and shared their use case. I wrote up issue #9100 and I was wondering if such a solution might work for you.

If you were able to configure retry_join with a value of exec=/usr/local/bin/discovery-script, and in that script use the AWS or GCP CLIs (or maybe even curl) to perform a custom query for addresses, would that solve the problem for your use case?
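For AWS, such a script could be little more than a wrapper around aws ec2 describe-instances, since multiple --filters entries are already ANDed by the EC2 API. A minimal sketch, assuming the example tag names from earlier comments (NodeRole and EnvironmentName are placeholders for your own scheme, and the script path/name is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical discovery script for retry_join = ["exec=/usr/local/bin/discovery-script"]
# (per the proposal in #9100). Tag names below are examples, not a fixed convention.
set -eu

build_filters() {
  # Each line becomes one --filters entry; the EC2 API ANDs them together,
  # which is exactly the multi-tag matching requested in this issue.
  printf '%s\n' \
    "Name=tag:NodeRole,Values=ConsulServer" \
    "Name=tag:EnvironmentName,Values=${1:?environment name required}"
}

if command -v aws >/dev/null 2>&1; then
  # Word-splitting on the unquoted substitution is intentional here:
  # one shell word per filter line.
  aws ec2 describe-instances \
    --filters $(build_filters "EnvironmentA") \
    --query 'Reservations[].Instances[].PrivateIpAddress' \
    --output text
fi
```

A GCE equivalent could use gcloud compute instances list with a --filter expression combining several label terms.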
