Assigning static IP to nodes - help! #7714

Closed · adammy123 opened this issue Jul 20, 2016 · 5 comments

adammy123 commented Jul 20, 2016

Hi, so I'm having a problem deploying nodes to vSphere with Terraform and making their IP addresses static. I don't know why the IP addresses of the nodes keep changing, causing errors here and there as I carry on building the infrastructure. I have tried editing the ifcfg-eno(xxx) file (I'm running CentOS 7), but I am unable to restart the system network. This only applies to the VMs created by terraform apply.

Is there any other way to deploy the nodes with static IP addresses?

(I'm currently very new to this scene and would appreciate your help! Sorry if I've missed any relevant important information; I will update ASAP!)
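
For reference, the kind of ifcfg edit described above typically looks like the following on CentOS 7. This is a generic sketch, not the actual file from these VMs: the device name is a placeholder, and the addresses are examples drawn from the ranges used later in this issue.

# /etc/sysconfig/network-scripts/ifcfg-eno16777984 -- device name is a placeholder
TYPE=Ethernet
BOOTPROTO=none          # "none" disables DHCP in favor of the static address below
DEVICE=eno16777984
NAME=eno16777984
ONBOOT=yes
IPADDR=10.0.134.160     # example static address from this cluster's range
PREFIX=24
GATEWAY=10.0.134.254
DNS1=10.0.4.201
DNS2=10.0.4.202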

Terraform Version

0.6.16

Terraform Configuration Files

provider "vsphere" {
  vsphere_server = "..."         # Your vCenter Address
  user = "..."                                         # vCenter Admin
  password = "..."
  allow_unverified_ssl = "true"
}

module "vsphere-dc" {
  source = "./terraform/vsphere"
  long_name = ""                                        # okay to leave blank
  short_name = "adam-mantl"                             # This will be the prefix for all nodes
  datacenter = "Intern2016"                                     # vCenter Data Center Object
  cluster = "First-Cluster"                             # vCenter Cluster Name
  pool = ""                                             # format is cluster_name/Resources/pool_name
  template = "intern2016/Test-Adam6"    # The VM to use as the source VM
  network_label = "VM Network"                          # The VMW Port-Group Name
  domain = "test.openberl.in"
  dns_server1 = "10.0.4.201"
  dns_server2 = "10.0.4.202"
  datastore = "datastore"                               # Datastore on Cluster
  control_count = 3                                     # How many control nodes to deploy
  worker_count = 3                                      # How many worker nodes to deploy
  edge_count = 2                                        # How many edge nodes to deploy
  kubeworker_count = 0
  control_volume_size = 30                              # Unused - size will be determined by template
  worker_volume_size = 30                               # Unused - size will be determined by template
  edge_volume_size = 30                                 # Unused - size will be determined by template
  ssh_user = "root"                                     # The user in the VM Template to use
  ssh_key = "~/.ssh/id_rsa"                             # Path to the private key on build box
  consul_dc = "mantl"

  #Optional Parameters
  folder = "intern2016/Mantl"                                   # Folder in vCenter, must exist
  control_cpu = "2"
  worker_cpu = "4"
  edge_cpu = "2"
  control_ram = "4096"
  worker_ram = "10240"
  edge_ram = "4096"
  disk_type = "thin"
  #linked_clone = "" # true or false, default is false.  If using linked_clones and have problems installing Mantl, revert to full clones
}

Debug Output

[root@adam-mantl-control-01 ~]# systemctl restart network
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
[root@adam-mantl-control-01 ~]# systemctl status network.service
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: failed (Result: exit-code) since Wed 2016-07-20 12:14:56 UTC; 7s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 27054 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)

Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: network.service: control process exited, code=exited status=1
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: Failed to start LSB: Bring up/down networking.
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: Unit network.service entered failed state.
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: network.service failed.
[root@adam-mantl-control-01 ~]# journalctl -xe
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [WARN] raft: Clearing log suffix from 4033 to 4034
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [ERR] consul.acl: Failed to get policy for 'ff481b98-5b46-4876-a7f6-70b7c5088ef7': No cl
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [ERR] agent: failed to sync remote state: rpc error: rpc error: rpc error: ... (long chain of nested "rpc error:" messages, wrapped across lines in the original paste, truncated here)
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.161:8300
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [WARN] raft: Clearing log suffix from 4035 to 4036
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.156:8300->10.0.134.155:6072
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.160:8300
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Clearing log suffix from 4037 to 4037
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Clearing log suffix from 4038 to 4038
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.151:8300->10.0.134.153:4772
Jul 20 12:15:01 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:01 [ERR] agent: failed to sync remote state: No cluster leader
Jul 20 12:15:01 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:01 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:01 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:01 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [ERR] agent: failed to sync remote state: No cluster leader
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Election timeout reached, restarting election
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [ERR] consul: failed to wait for barrier: node is not the leader
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Clearing log suffix from 4039 to 4039
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [ERR] memberlist: Conflicting address for adam-mantl-control-01. Mine: 10.0.134.156:8301
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [ERR] serf: Node name conflicts with another node at 10.0.134.151:8301. Names must be un
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Clearing log suffix from 4040 to 4041
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:15:05 adam-mantl-control-01 sudo[27293]:   consul : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/docker ps -a --format {{.Image}}\t{{.Status}}\t{{.N
Jul 20 12:15:05 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:05 [WARN] agent: Check 'distributive-consul-checks' is now warning
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.161:8300
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [WARN] raft: Clearing log suffix from 4042 to 4042
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.156:8300->10.0.134.155:6073
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [WARN] raft: Clearing log suffix from 4043 to 4043
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.151:8300->10.0.134.155:5060
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [ERR] agent: failed to sync remote state: No cluster leader
[root@adam-mantl-control-01 ~]# journalctl -xe > crash.txt
[root@adam-mantl-control-01 ~]# ls
anaconda-ks.cfg  crash.txt
[root@adam-mantl-control-01 ~]# nano crash.txt


-- Logs begin at Wed 2016-07-20 10:29:28 UTC, end at Wed 2016-07-20 12:17:33 UTC. --
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as a $
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [WARN] raft: AppendEntries to 10.0.134.160:8300 rejected, sending older logs (next: 4019)
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [WARN] raft: Clearing log suffix from 4020 to 4021
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as a $
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Clearing log suffix from 4022 to 4023
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [ERR] raft: Failed to get log at index 4022: log not found
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [ERR] memberlist: Conflicting address for adam-mantl-control-01. Mine: 10.0.134.156:8301 Th$
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [ERR] serf: Node name conflicts with another node at 10.0.134.151:8301. Names must be uniqu$
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as a $
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Clearing log suffix from 4024 to 4025
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:51 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:51 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.161:8300
Jul 20 12:14:51 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:51 [WARN] raft: Clearing log suffix from 4026 to 4026
Jul 20 12:14:51 adam-mantl-control-01 sshd[27010]: Accepted password for root from 10.128.16.209 port 55620 ssh2
Jul 20 12:14:52 adam-mantl-control-01 kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
Jul 20 12:14:52 adam-mantl-control-01 systemd[1]: Created slice user-0.slice.
-- Subject: Unit user-0.slice has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-0.slice has finished starting up.
--
-- The start-up result is done.
Jul 20 12:14:52 adam-mantl-control-01 systemd[1]: Starting user-0.slice.
-- Subject: Unit user-0.slice has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-0.slice has begun starting up.
Jul 20 12:14:52 adam-mantl-control-01 systemd[1]: Started Session 25 of user root.
-- Subject: Unit session-25.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-25.scope has finished starting up.
--
-- The start-up result is done.
Jul 20 12:14:52 adam-mantl-control-01 systemd-logind[782]: New session 25 of user root.
-- Subject: A new session 25 has been created for user root
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
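
A side note on the repeated "RTNETLINK answers: File exists" lines in the restart failure above: that message usually means the legacy network init script is trying to add an address or route that is already present on the interface. One commonly suggested workaround, offered here only as an unverified guess for this setup, is to flush the interface before restarting:

# Hypothetical recovery sequence; "eno16777984" is a placeholder device name.
ip addr flush dev eno16777984    # drop the stale address the init script trips over
ip route flush dev eno16777984
systemctl restart network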

Expected Behavior

Able to restart the network after editing the ifcfg file to enable static IP addressing in place of DHCP

Actual Behavior

Unable to restart the network

Steps to Reproduce

  1. terraform get, plan, apply
  2. systemctl restart network
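
Spelled out as shell commands, the reproduction looks like this; the node address is an example from this deployment:

# On the build box:
terraform get
terraform plan
terraform apply

# Then on one of the resulting nodes:
ssh root@10.0.134.151
systemctl restart network    # fails as shown in the debug output above
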
adammy123 (author) commented:

Does it have to do with adding more labels to this segment in the main.tf file?

network_interface {
    label = "${var.network_label}"
  }


adammy123 commented Jul 20, 2016

Okay, so I added these lines to the terraform/vsphere/main.tf file:

variable "control_instance_ips" {
  default = {
    "0" = "10.0.134.160"
    "1" = "10.0.134.161"
    "2" = "10.0.134.162"
  }
}

variable "edge_instance_ips" {
  default = {
    "0" = "10.0.134.163"
    "1" = "10.0.134.164"
  }
}

variable "worker_instance_ips" {
  default = {
    "0" = "10.0.134.165"
    "1" = "10.0.134.166"
    "2" = "10.0.134.167"
  }
}

...

  network_interface {
    label = "${var.network_label}"
    ipv4_gateway = "10.0.134.254"
    ipv4_address = "${lookup(var.control_instance_ips, count.index)}"
    ipv4_prefix_length = "24"
  }

...

  network_interface {
    label = "${var.network_label}"
    ipv4_gateway = "10.0.134.254"
    ipv4_address = "${lookup(var.worker_instance_ips, count.index)}"
    ipv4_prefix_length = "24"
  }

...

  network_interface {
    label = "${var.network_label}"
    ipv4_gateway = "10.0.134.254"
    ipv4_address = "${lookup(var.edge_instance_ips, count.index)}"
    ipv4_prefix_length = "24"
  }

for each node type respectively, but the nodes still don't come up with the IPs as configured.
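
For context, here is how those fragments sit inside a full resource in Terraform 0.6-era syntax. The resource body below is a reconstruction from the plan output that follows, not the actual Mantl module source, so treat the field values outside network_interface as assumptions:

# Hypothetical reconstruction of the control-node resource.
resource "vsphere_virtual_machine" "mi-control-nodes" {
  count      = "${var.control_count}"
  name       = "${var.short_name}-control-${format("%02d", count.index + 1)}"
  datacenter = "${var.datacenter}"
  cluster    = "${var.cluster}"
  vcpu       = "${var.control_cpu}"
  memory     = "${var.control_ram}"

  network_interface {
    label              = "${var.network_label}"
    ipv4_gateway       = "10.0.134.254"
    # lookup() keys the map by this instance's index, so instance 0 should get
    # 10.0.134.160, instance 1 should get 10.0.134.161, and so on.
    ipv4_address       = "${lookup(var.control_instance_ips, count.index)}"
    ipv4_prefix_length = "24"
  }

  disk {
    datastore = "${var.datastore}"
    template  = "${var.template}"
    type      = "${var.disk_type}"
  }
}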

It seemed to look fine when terraform plan was run:

[root@localhost mantl-install]# terraform plan
Refreshing Terraform state prior to plan...


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ module.vsphere-dc.vsphere_virtual_machine.mi-control-nodes.0
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "control"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "4096"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-control-01"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.160"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "2"

+ module.vsphere-dc.vsphere_virtual_machine.mi-control-nodes.1
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "control"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "4096"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-control-02"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.161"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "2"

+ module.vsphere-dc.vsphere_virtual_machine.mi-control-nodes.2
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "control"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "4096"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-control-03"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.162"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "2"

+ module.vsphere-dc.vsphere_virtual_machine.mi-edge-nodes.0
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "edge"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "4096"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-edge-01"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.163"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "2"

+ module.vsphere-dc.vsphere_virtual_machine.mi-edge-nodes.1
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "edge"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "4096"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-edge-02"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.164"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "2"

+ module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.0
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "worker"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "10240"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-worker-001"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.165"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "4"

+ module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "worker"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "10240"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-worker-002"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.166"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "4"

+ module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.2
    cluster:                                   "" => "First-Cluster"
    custom_configuration_parameters.#:         "" => "3"
    custom_configuration_parameters.consul_dc: "" => "mantl"
    custom_configuration_parameters.role:      "" => "worker"
    custom_configuration_parameters.ssh_user:  "" => "root"
    datacenter:                                "" => "Intern2016"
    disk.#:                                    "" => "1"
    disk.0.bootable:                           "" => "false"
    disk.0.datastore:                          "" => "datastore"
    disk.0.size:                               "" => "30"
    disk.0.template:                           "" => "intern2016/Test-Adam6"
    disk.0.type:                               "" => "thin"
    dns_servers.#:                             "" => "2"
    dns_servers.0:                             "" => "10.0.4.201"
    dns_servers.1:                             "" => "10.0.4.202"
    domain:                                    "" => "test.openberl.in"
    folder:                                    "" => "intern2016/Mantl"
    linked_clone:                              "" => "false"
    memory:                                    "" => "10240"
    memory_reservation:                        "" => "0"
    name:                                      "" => "adam-mantl-worker-003"
    network_interface.#:                       "" => "1"
    network_interface.0.ip_address:            "" => "<computed>"
    network_interface.0.ipv4_address:          "" => "10.0.134.167"
    network_interface.0.ipv4_gateway:          "" => "10.0.134.254"
    network_interface.0.ipv4_prefix_length:    "" => "24"
    network_interface.0.ipv6_address:          "" => "<computed>"
    network_interface.0.ipv6_prefix_length:    "" => "<computed>"
    network_interface.0.label:                 "" => "VM Network"
    network_interface.0.subnet_mask:           "" => "<computed>"
    skip_customization:                        "" => "false"
    time_zone:                                 "" => "Etc/UTC"
    vcpu:                                      "" => "4"


Plan: 8 to add, 0 to change, 0 to destroy.

But the actual output turns out differently:

[root@localhost mantl-install]# terraform show | grep "name\|ipv4_add"
  name = adam-mantl-control-01
  network_interface.0.ipv4_address = 10.0.134.169
  name = adam-mantl-control-02
  network_interface.0.ipv4_address = 10.0.134.167
  name = adam-mantl-control-03
  network_interface.0.ipv4_address = 10.0.134.164
  name = adam-mantl-edge-01
  network_interface.0.ipv4_address = 10.0.134.165
  name = adam-mantl-edge-02
  network_interface.0.ipv4_address = 10.0.134.166
  name = adam-mantl-worker-001
  network_interface.0.ipv4_address = 10.0.134.162
  name = adam-mantl-worker-002
  network_interface.0.ipv4_address = 10.0.134.163
  name = adam-mantl-worker-003
  network_interface.0.ipv4_address = 10.0.134.168

Am I missing something here?

adammy123 (author) commented:

Output while running terraform apply:

module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1: Still creating... (2m0s elapsed)
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1: Provisioning with 'remote-exec'...
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec): Connecting to remote host via SSH...
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec):   Host: 10.0.134.177
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec):   User: root
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec):   Password: false
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec):   Private key: true
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec):   SSH Agent: false
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1 (remote-exec): Connected!
module.vsphere-dc.vsphere_virtual_machine.mi-worker-nodes.1: Creation complete

wirelessben commented:

This would have been really useful, even three years later.


ghost commented Sep 26, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited the conversation to collaborators on Sep 26, 2019