
Support for Consul KV #474

Closed
ChrisMcKenzie opened this issue Nov 20, 2015 · 19 comments

Comments

@ChrisMcKenzie

Perhaps this is already on the roadmap, but it would be nice to be able to use values from Consul as metadata for a job, as well as env variables, so that configuration can be updated dynamically, triggering a restart of the job.

Perhaps something like this could be allowed in the env section:

env {
  MY_CONFIG_OPTION="{{consul "myapp/config/option"}}"
}

This would tell Nomad to fill that env variable with the value at the given Consul path.
It would also be nice if, after allocation, Nomad watched that value for changes and restarted the job accordingly.

Please let me know if you think this is a good idea, or whether I should compose a tool on top of Nomad (e.g. consul-template) to accomplish this.

@cbednarski
Contributor

Thanks for the suggestion. We are planning to add this. We have not yet done so because we don't yet have a way to provide security or access control around the K/V store. Since all clients appear to be Nomad, Nomad needs some idea of who owns which job and what permissions they should have.

@ChrisMcKenzie
Author

Cool. I've started working on this a bit using the Consul client available in the task_runner. I might look into the security and ACL stuff, as my implementation does not take that into account.

@cbednarski
Contributor

Robust security features are pretty complicated and still a ways out, but I think this is likely useful in the meantime. 👍

@fernandezvara

I would like to help with that too. Allowing tasks to access the Consul KV store would be a nice addition. I don't know whether the implementation should allow setting variables the way Terraform definitions do, since it would be nice to be able to iterate over the keys if the value is an array.

It should also support both get and set on the KV store. If there is already a design document for this feature, I would like to join and help there.

Something like the Consul Terraform provider is already battle-tested (copied from the Terraform documentation):

job "aaa" {
  ...

  task "bbb" {

    provider "consul" {
      address    = "demo.consul.io:80"
      datacenter = "nyc1"
    }

    resource "consul_keys" "app" {
      token = "xxxxxxxxxxxxxxxxxx"
      key {
        name    = "ami"
        path    = "service/app/launch_ami"
        default = "ami-1234"
      }
    }

    # Use our variable from Consul
    env {
      ami = "${consul_keys.app.var.ami}"
    }
  }
}

The provider block probably should not be settable here, since it is already part of the agent configuration.

@ChrisMcKenzie
Author

Yeah, I like this way a little better; you get a bit more flexibility in how Consul is configured (not just whatever the Nomad agent is set to!).

@cbednarski
Contributor

By the way, if you have an immediate itch to scratch, you may also be able to use envconsul or consul-template, both HashiCorp projects which have been around a while and have a lot of features.
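For instance (a sketch only; the KV prefix, file names, and commands below are hypothetical), envconsul launches a child process with every key under a prefix exported as an environment variable, while consul-template renders files and can run a command on change:

```shell
# Every key under myapp/config becomes an environment variable of ./my-app,
# and envconsul restarts the process when a value changes.
envconsul -prefix myapp/config ./my-app

# Render a config file from Consul data and run a command on each re-render.
consul-template -template "config.ctmpl:config.json:service myapp restart"
```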

@ChrisMcKenzie
Author

Has a design doc for this been made? Is there anything we might be able to help with?

@ketzacoatl
Contributor

I work around this issue by rendering job specs as templates, which are put in place by CM (config management). The Consul KV store is consulted by CM when rendering the templates. This has worked well for my deployments.
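The same render-then-submit flow can be sketched with consul-template standing in for a full CM run (file names here are illustrative):

```shell
# Render the jobspec template from Consul KV once, then submit it to Nomad.
consul-template -once -template "redis.nomad.tpl:redis.nomad"
nomad run redis.nomad
```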

@apparentlymart
Member

Maybe I imagined it, but I thought I heard during one of the keynotes at HashiConf that there was a plan to integrate consul-template's functionality directly into Nomad, along with a Vault integration so that it can get a Consul ACL token to act on behalf of the application.

I remember seeing an example something like this:

task "..." {
  # ..

  template {
    source = "something/foo.tmpl"
    destination = "something/foo"
  }

  # ..
}

This was the closest ticket I could find to that. So did I imagine it, or is this something that will come along with the forthcoming Vault token/policy integration?

@dadgar
Contributor

dadgar commented Sep 20, 2016

@apparentlymart It's coming :)

@dadgar dadgar added this to the v0.5.0 milestone Sep 20, 2016
@jippi
Contributor

jippi commented Sep 28, 2016

@dadgar in 0.5.0 ? :o

@dadgar
Contributor

dadgar commented Sep 28, 2016

Yep!

@nanoz
Contributor

nanoz commented Nov 1, 2016

I've learned from @dadgar that a template stanza will be added in order to generate config files, but unfortunately there won't be a direct way to read Consul KV in the env stanza.

A wrapper script that reads from a config file and sets those environment variables before starting your main process would be needed, if your app follows the 12-factor pattern.
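A minimal sketch of such a wrapper, assuming the template stanza has rendered KEY=value lines to a file (all file and command names here are illustrative):

```shell
#!/bin/sh
# Sketch of a 12-factor wrapper: export KEY=value pairs from a rendered
# file, then start the main process.
set -e

# Stand-in for a file rendered by Nomad's template stanza from Consul KV.
printf 'REDIS_PASSWORD=secret\nREDIS_PORT=6379\n' > app.env

run_with_env() {
  set -a          # export every variable assigned while sourcing
  . ./app.env
  set +a
  "$@"            # in a real job this would be: exec "$@"
}

run_with_env sh -c 'echo "port=$REDIS_PORT"'   # prints: port=6379
```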

@dadgar
Contributor

dadgar commented Nov 9, 2016

Hey I am going to close this since initial support has landed in 0.5.0 via the template stanza: https://www.nomadproject.io/docs/job-specification/template.html

Retrieving via env vars is a different enhancement :)
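For reference, a minimal sketch of the template stanza reading from Consul KV (job, key, and file names are illustrative):

```hcl
job "myapp" {
  datacenters = ["dc1"]

  group "app" {
    task "app" {
      driver = "exec"

      config {
        command = "local/run.sh"   # hypothetical entrypoint
      }

      # consul-template syntax: {{ key }} reads a value from Consul KV.
      # change_mode = "restart" restarts the task when the value changes.
      template {
        data        = "option = \"{{ key \"myapp/config/option\" }}\""
        destination = "local/app.conf"
        change_mode = "restart"
      }
    }
  }
}
```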

@dadgar dadgar closed this as completed Nov 9, 2016
@ghost

ghost commented Dec 14, 2016

@dadgar I can't see any other ticket regarding an enhancement for env vars, or am I wrong? Is there any plan to add this functionality in the future?

The template stanza is good but similar functionality applied to the env stanza would be awesome!

@samber
Contributor

samber commented Dec 14, 2016

@paddycr In the infrastructure we are building, we never run jobs with the nomad client. We use Terraform.

1. We retrieve environment variables from Consul with the Terraform data sources consul_keys and consul_key_prefix (not merged yet, needs some fixes: hashicorp/terraform#10353).

2. We inject the data source outputs into a templated Nomad job file.

3. We send the rendered template to the Terraform nomad_job resource (the Nomad provider was released today in Terraform 0.8).

nomad_job/nomad_job.tf

##
# NAME        : nomad_job/nomad_job.tf
# DESCRIPTION : Templated nomad job
# DOC         :
##

##
# VARIABLES
##

variable "vars_env"              { type = "map" }      // variables to template the nomad job description (ex: datacenter, exposed ports...)
variable "vars_job_config"       { type = "map" }      // variables to send into container env vars
variable "nomad_tpl"             { }
variable "_max_number_vars"      { default = 42 }


##
# Resources
##

data "template_file" "vars_env" {
    // Terraform does not allow setting "count" to a computed value (ex: length(keys(vars)) does not work).
    // So "count" will loop 42 times over "vars", which also produces (42 - length(keys(vars))) duplicated keys.
    // To remove the duplicates, we apply the "distinct" interpolation; then length(distinct(data.template_file.vars_env)) equals length(keys(vars)).
    // :puke:
   count = "${var._max_number_vars}"
   template = "$${key} = \"$${value}\""

   vars = {
      key = "${element(keys(var.vars_env), count.index)}"
      value = "${lookup(var.vars_env, element(keys(var.vars_env), count.index))}"
   }
}

data "template_file" "jobspec" {
    template = "${file("${var.nomad_tpl}")}"

    vars = "${merge(
         var.vars_job_config,
         map("ENVIRONMENT", join("\n", distinct(data.template_file.vars_env.*.rendered)))
    )}"
}

// warning: terraform 0.8.0
resource "nomad_job" "job" {
   jobspec = "${data.template_file.jobspec.rendered}"
}


##
# Outputs
##

output "jobspec" { value = "${data.template_file.jobspec.rendered}" }

redis.nomad

job "redis" {
    region      = "${region}"
    datacenters = ["${datacenter}"]

    [...]

    group "redis" {

        constraint {
            attribute = "$${node.class}"
            operator  = "="
            value     = "data"
        }

        task "redis" {
            driver = "docker"
            config {
                image = "redis:3.2"
                port_map {
                    db = 6379
                }
            }

            resources {
                cpu    = 2000
                memory = 8000
                disk   = 0
                network {
                    mbits = 500
                    port "db" {
                        static = ${REDIS_PORT}
                    }
                }
            }

            env {
                ${ENVIRONMENT}
            }
        }
    }
}

main.tf

variable "region"     { }
variable "datacenter" { }

provider "consul" {
    address       = "1.2.3.4:8500"
    datacenter    = "${var.datacenter}"
}

provider "nomad" {
   address        = "http://1.2.3.4:4646"
   region         = "${var.region}"
}

data "consul_keys" "read_env" {
     key {
        name = "REDIS_PASSWORD"
        path = "redis/env/REDIS_PASSWORD"
    }
}

data "consul_keys" "read_job_config" {
    key {
        name = "REDIS_PORT"
        path = "redis/nomad_job/REDIS_PORT"
    }
}


module "redis_job" {
    source = "./nomad_job"

    vars_env = "${data.consul_keys.read_env.var}"
    vars_job_config = "${merge(
        data.consul_keys.read_job_config.var,
        map(
            "region",
            "${var.region}",
            "datacenter",
            "${var.datacenter}"
        )
    )}"
    nomad_tpl = "${path.module}/redis.nomad"
}

output "redis_jobspec" { value = "${module.redis_job.jobspec}" }

Applying the nomad job:

$ terraform plan
$ terraform apply

This is generic, and this nomad_job module can be called as often as you need, with different env variables and Nomad job files.

@rokka-n

rokka-n commented Jan 26, 2017

Interesting approach. I can see it working well in Atlas, where sandboxes are well isolated.
Running this from a managed CI server over the internet could be too complicated.
Also, Terraform is quite slow in plan/apply; with hundreds of jobs I imagine it will be painful.

@dadgar
Contributor

dadgar commented Jan 28, 2017

@paddycr There is an issue for that open: #1765

@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 16, 2022