
consul_bind_address invalid value #91

Closed
giovannicandido opened this issue Jul 29, 2017 · 4 comments
@giovannicandido
Hi,

When installing a cluster with vagrant each node with role=server has the error:

fatal: [node2]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'consul_bind_address'

The error appears to have been in '/etc/ansible/roles/brianshumate.consul/tasks/config.yml': line 4, column 3, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: Create configuration
  ^ here
"}

The node with the bootstrap role is configured successfully.

@giovannicandido
Author

Vagrantfile:

require 'yaml'
settings = YAML.load_file 'vagrant.yml'

Vagrant.configure("2") do |config|

  config.vm.define :kmaster do |machine|

    machine.vm.box = "giovanni/xenial64-libvirt"
    machine.vm.hostname = "kmaster"

    machine.vm.network "private_network", ip: settings['kmaster_ip_address']

    machine.vm.provider :libvirt do |domain|
      domain.memory = 1024
      domain.cpus = 2
      domain.storage :file, :size => settings['docker_disk_size'], :device => settings['docker_disk']
    end
  end

  config.vm.define :node1 do |machine|

    machine.vm.box = "giovanni/xenial64-libvirt"
    machine.vm.hostname = "node1"

    machine.vm.network "private_network", ip: settings['node1_ip_address']

    machine.vm.provider :libvirt do |domain|
      domain.memory = 1024
      domain.cpus = 2
      domain.storage :file, :size => settings['docker_disk_size'], :device => settings['docker_disk']      
    end
    
  end

  config.vm.define :node2 do |machine|

    machine.vm.box = "giovanni/xenial64-libvirt"
    machine.vm.hostname = "node2"

    machine.vm.network "private_network", ip: settings['node2_ip_address']

    machine.vm.provider :libvirt do |domain|
      domain.memory = 1024
      domain.cpus = 2
      domain.storage :file, :size => settings['docker_disk_size'], :device => settings['docker_disk']
    end
  end
  
  config.vm.define :node3 do |machine|

    machine.vm.box = "giovanni/xenial64-libvirt"
    machine.vm.hostname = "node2"

    machine.vm.network "private_network", ip: settings['node3_ip_address']

    machine.vm.provider :libvirt do |domain|
      domain.memory = 1024
      domain.cpus = 2
      domain.storage :file, :size => settings['docker_disk_size'], :device => settings['docker_disk']
    end
  end

  config.vm.provision "ansible" do |ansible|
    ansible.sudo = true
    ansible.groups = {
      "docker" => ["kmaster", "node1", "node2", "node3"],
      "cluster_nodes" => ["node1", "node2", "node3"],
      "cluster_servers" => ["node1", "node2"]
    }
    ansible.host_vars = {
      "kmaster" => {"private_ip_address" => settings['kmaster_ip_address']},  
      "node1" => {"consul_node_role" => "bootstrap", "private_ip_address" => settings['node1_ip_address']}, 
      "node2" => {"consul_node_role" => "server", "private_ip_address" => settings['node2_ip_address']},  
      "node3" => {"consul_node_role" => "server", "private_ip_address" => settings['node3_ip_address']}
    }
    ansible.playbook = "devsite.yml"
  end
  config.vm.synced_folder ".", "/mnt", disabled: true
end

Ansible playbook (devsite.yml):

---
- hosts: docker
  roles:
    - docker
- hosts: cluster_nodes
  become: yes
  become_user: root
  roles:
    - name: brianshumate.consul
      consul_iface: "{{ private_interface | mandatory}}"
      consul_install_remotely: true

Vagrant-generated inventory:

# Generated by Vagrant

kmaster ansible_ssh_host=192.168.121.61 ansible_ssh_port=22 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/giovanni/Projects/atende/infra-barebones/.vagrant/machines/kmaster/libvirt/private_key' private_ip_address=192.168.50.2
node1 ansible_ssh_host=192.168.121.235 ansible_ssh_port=22 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/giovanni/Projects/atende/infra-barebones/.vagrant/machines/node1/libvirt/private_key' consul_node_role=bootstrap private_ip_address=192.168.50.3
node2 ansible_ssh_host=192.168.121.216 ansible_ssh_port=22 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/giovanni/Projects/atende/infra-barebones/.vagrant/machines/node2/libvirt/private_key' consul_node_role=server private_ip_address=192.168.50.4
node3 ansible_ssh_host=192.168.121.183 ansible_ssh_port=22 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/giovanni/Projects/atende/infra-barebones/.vagrant/machines/node3/libvirt/private_key' consul_node_role=server private_ip_address=192.168.50.5

[docker]
kmaster
node1
node2
node3

[cluster_nodes]
node1
node2
node3

[cluster_servers]
node1
node2

@giovannicandido
Author

Don't know why, but using just:

consul_node_role: server 
consul_bootstrap_expect: true

Works. Now I'm going to try joining the nodes; I'll report back soon.
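For reference, the working per-host settings can also be carried directly in the playbook rather than in Vagrant's host_vars. A minimal sketch (the variable names come from the brianshumate.consul role; the group name and interface value are assumptions from the Vagrantfile above):

```yaml
# devsite.yml (sketch): pass the role variables inline instead of
# relying on Vagrant-generated host_vars reaching the role's tasks.
- hosts: cluster_nodes
  become: yes
  roles:
    - role: brianshumate.consul
      consul_node_role: server
      consul_bootstrap_expect: true
      consul_iface: eth1   # interface carrying the private_network IP (assumption)
```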

Thank you all for the work on this role, makes life easy for everyone ;-)

@giovannicandido
Author

Working well, but I had to set consul_raw_key to some value, then destroy and recreate the cluster.
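consul_raw_key is the role's gossip encryption key, and it must be identical on every node, so pinning one value up front avoids having to destroy and recreate the cluster. A sketch (the file path is an assumption and the key shown is a placeholder, not a usable value):

```yaml
# group_vars/cluster_nodes.yml (sketch): pin the gossip key so all
# nodes join with the same encryption key on the first provision.
# Generate a real key on any machine with Consul installed:
#   consul keygen
consul_raw_key: "REPLACE_WITH_consul_keygen_OUTPUT"
```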

@giovannicandido
Author

The problem is that Vagrant does not provision in parallel; I had to set ansible.limit = "all" in the Vagrantfile.
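By default, Vagrant runs the Ansible provisioner once per machine and limits each run to that single machine, so cluster-wide plays can stall waiting for peers that have not been provisioned yet. Setting ansible.limit = "all" makes each run target every host in the generated inventory. A sketch of the relevant provisioner block:

```ruby
# Vagrantfile (sketch): run the playbook against all inventory hosts
# in each provisioning run instead of only the current machine.
config.vm.provision "ansible" do |ansible|
  ansible.limit = "all"            # target every host, not just this VM
  ansible.playbook = "devsite.yml"
end
```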
