
vsphere-iso: parallel builds interfering with each other #250

Closed
hc-github-team-packer opened this issue Feb 1, 2023 · 4 comments
Labels: bug, builder/vsphere-iso

Comments

@hc-github-team-packer

This issue was originally opened by @djpbessems in hashicorp/packer#12232 and has been migrated to this repository. The original issue description is below.


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Overview of the Issue

I'm running a Packer build based on vsphere-iso in which I specify multiple sources that differ only in their name and vm_name fields. The provisioners used during the build are then given ${source.name} as a parameter, which makes them execute differently per source.

This part works fine.

However, about half of the time, when one of the VMs is shut down at the end of the build, the wrong VM is actually shut down. The build then downloads the wrong disk (even though Packer's output shows the right disk) and continues into a post-processor (ovftool with some added logic, again with ${source.name} as a parameter to select the respective configuration). This results in .ova files with the wrong content.

I tried using ${build.PackerRunUUID} in the vm_name, but even with Packer 1.8.5 I get an error stating that the property does not exist.
I also tried using different export.output_directory values, but that makes no difference either.
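
For illustration, a minimal sketch (not part of the original report) of one possible way to get a unique per-run suffix into vm_name without ${build.PackerRunUUID}, which is not available inside source blocks. It uses Packer's timestamp() and regex_replace() functions; the local name run_id is hypothetical:

locals {
  # Hypothetical per-run suffix derived from the build start time.
  run_id = regex_replace(timestamp(), "[- TZ:]", "")
}

build {
  source "vsphere-iso.ubuntu" {
    name    = "bootstrap"
    vm_name = "ova.bootstrap-${var.vm_name}-${local.run_id}"
  }
}

Note that both parallel sources in the same run share the same timestamp, so this distinguishes runs from each other rather than the two sources within a run (their names already differ).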

Reproduction Steps

See below

Packer version

1.8.5

Simplified Packer Template

The shutdown command:

  shutdown_command        = "echo '${var.ssh_password}' | sudo -S shutdown -P now"
  shutdown_timeout        = "5m"

The build/sources:

build {
  source "vsphere-iso.ubuntu" {
    name = "bootstrap"
    vm_name = "ova.bootstrap-${var.vm_name}"

    export {
      images                = false
      output_directory      = "/output/bootstrap"
    }
  }

  source "vsphere-iso.ubuntu" {
    name = "upgrade"
    vm_name = "ova.upgrade-${var.vm_name}"

    export {
      images                = false
      output_directory      = "/output/upgrade"
    }
  }

  provisioner "ansible" {
    [...]
    extra_arguments  = [
      "--extra-vars", "appliancetype=${source.name}"
    ]
    [...]
  }

  post-processor "shell-local" {
    inline = [
      [...] # Omitted; these are several commands to update .ovf and .mf files
      "ovftool --acceptAllEulas --allowExtraConfig --overwrite \\",
      " '/output/${source.name}/ova.${source.name}-${var.vm_name}.ovf' \\",
      " /destination/airgapped-k8s.${source.name}.ova"
    ]
  }
}

Operating system and Environment details

Running inside a debian:11-slim container.

I am running this on vCenter 7.0.3 (encountered on multiple patch levels including latest).

Log Fragments and crash.log files

An example of this happening:

2023-01-26T16:31:12Z: vsphere-iso.upgrade:
2023-01-26T16:31:12Z: vsphere-iso.upgrade: PLAY RECAP *********************************************************************
2023-01-26T16:31:12Z: vsphere-iso.upgrade: default : ok=66 changed=57 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2023-01-26T16:31:12Z: vsphere-iso.upgrade:
2023-01-26T16:31:12Z: ==> vsphere-iso.upgrade: Executing shutdown command...
2023-01-26T16:31:34Z: vsphere-iso.bootstrap: changed: [default]
2023-01-26T16:31:34Z: vsphere-iso.bootstrap:
2023-01-26T16:31:34Z: vsphere-iso.bootstrap: PLAY RECAP *********************************************************************
2023-01-26T16:31:34Z: vsphere-iso.bootstrap: default : ok=66 changed=57 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2023-01-26T16:31:34Z: vsphere-iso.bootstrap:
2023-01-26T16:31:35Z: ==> vsphere-iso.bootstrap: VM is already powered off
2023-01-26T16:31:35Z: ==> vsphere-iso.bootstrap: Deleting Floppy drives...
2023-01-26T16:31:35Z: ==> vsphere-iso.bootstrap: Eject CD-ROM drives...
2023-01-26T16:31:35Z: ==> vsphere-iso.bootstrap: Deleting CD-ROM drives...
2023-01-26T16:31:35Z: vsphere-iso.bootstrap: Starting export...
2023-01-26T16:31:35Z: vsphere-iso.bootstrap: Downloading: ova.bootstrap-847-79b794dba2-disk-0.vmdk
2023-01-26T16:36:13Z: ==> vsphere-iso.upgrade: Provisioning step had errors: Running the cleanup provisioner, if present...
2023-01-26T16:36:13Z: ==> vsphere-iso.upgrade: Power off VM...
2023-01-26T16:36:13Z: ==> vsphere-iso.upgrade: Destroying VM...
2023-01-26T16:36:14Z: ==> vsphere-iso.upgrade: Deleting cd_files image from remote datastore ...
2023-01-26T16:36:14Z: Build 'vsphere-iso.upgrade' errored after 36 minutes 42 seconds: Timeout while waiting for machine to shut down.
2023-01-26T16:38:29Z: vsphere-iso.bootstrap: Exporting file: ova.bootstrap-847-79b794dba2-disk-0.vmdk
[...]

@djpbessems

Is there anyone who can help? The issue persists.

@djpbessems

I'm getting the impression that this plugin has been unmaintained since it was split out of the Packer codebase itself.

This issue is still present.

@nywilken
Member

Hi @djpbessems, thanks for reaching out, and apologies for the delayed response. We've been working across different plugin and Packer priorities, so responses to some issues have been delayed. That said, the shutdown command is tied to each builder, and each builder uses its own communicator to execute the command directly on its running instance.

Is it possible that the VM is being shut down by a provisioner or some other process?

Before calling the shutdown command, the builder uses the vSphere API to check the state of the running instance. If the instance is not in a running state, it displays the message you see in the logs about the VM already being powered off and returns immediately. Since this only happens some of the time, could you tell me the state of both VMs in vSphere when it occurs?

Also, can you share the configuration for the source "vsphere-iso" "ubuntu" block? It would be good to see what is being passed in that might be causing this conflict.

@tenthirtyam added the builder/vsphere-iso label on Jul 17, 2023
@tenthirtyam changed the title from "Packer parallel vsphere-iso builds interfering with each other" to "vsphere-iso: parallel builds interfering with each other" on Jul 17, 2023
@tenthirtyam
Collaborator

No reply from the OP and no upvotes.

@tenthirtyam closed this as not planned on Sep 1, 2023