This repository has been archived by the owner on Dec 12, 2019. It is now read-only.

vagrant destroy while VM isn't running causes destroy script timeout #330

Open
zxaos opened this issue Jan 19, 2016 · 4 comments

zxaos commented Jan 19, 2016

The vagrant destroy trigger scripts assume the VM is running, so they try to dump the DB. When the VM isn't on, the SSH connection fails and the destroy takes longer than it needs to.

> vagrant destroy
Found project settings file: /Users/zxaos/Developer/external/vlad/settings/vlad_settings.yml
Adjusting Vagrant environment and re-initializing
Found project settings file: /Users/zxaos/Developer/external/vlad/settings/vlad_settings.yml

==> vlad: Running triggers before destroy...
==> vlad: Executing 'halt/destroy' trigger
==> vlad: Executing command "ansible-playbook -i 192.168.100.100, /Users/matt/Developer/external/vlad/vlad_guts/playbooks/local_halt_destroy.yml --private-key=~/.vagrant.d/insecure_private_key --extra-vars local_ip_address=192.168.100.100"...
==> vlad:
==> vlad: PLAY [all] ********************************************************************
==> vlad:
==> vlad: TASK: [local actions destroy | install python-mysqldb package] ****************
==> vlad: fatal: [192.168.100.100] => SSH Error: ssh: connect to host 192.168.100.100 port 22: Operation timed out
==> vlad:     while connecting to 192.168.100.100:22
==> vlad: It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
==> vlad:
==> vlad: FATAL: all hosts have already failed -- aborting
==> vlad:
==> vlad: PLAY RECAP ********************************************************************
==> vlad:            to retry, use: --limit @/Users/matt/local_halt_destroy.retry
==> vlad:
@dixhuit added the bug label Jan 19, 2016
@philipnorton42

This sort of has a bit of a story behind it.

We (as in @danbohea and I) were toying around with some things a year ago and tried to find out how to detect whether the box was up or not. After a few hours of research and some messing about, I found that using the hosts.ini file was the only way to detect if the box was running: create the file on launch, remove it on destroy/halt, and then use its existence to detect whether the box was running when the triggers fired.

For some reason, and I can't remember why now, we stripped a lot of this out of the Vagrantfile. I think it may have been to do with other components breaking: if the hosts.ini file wasn't present then things broke in interesting ways, like not being able to "up" or destroy the box correctly (although I can't remember why exactly). The issue in question was #192, which resulted in commit 7905afe.

So, solutions:

  • If, by some miracle, there happens to be a way to detect if the box is running then use this.
  • Reimplement the hosts.ini checks in the Vagrant triggers to make sure that the box is running, though this could reintroduce the issues described above.
  • Live with it. Not ideal, but if it's not fixable then it's not fixable :(
  • Something else...?
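For reference, the flag-file scheme from the second option can be sketched in a few lines. Python here is purely illustrative (in Vlad this logic would live in the Vagrantfile triggers), and the marker path would be wherever Vlad writes its hosts.ini:

```python
import os

def box_appears_running(marker_path):
    """Flag-file heuristic: the marker file is created on 'vagrant up'
    and removed on halt/destroy, so its presence implies the box is
    running. The caller supplies the path Vlad uses for hosts.ini."""
    return os.path.isfile(marker_path)
```

A destroy trigger would then skip the DB dump when `box_appears_running()` is false, avoiding the SSH timeout entirely.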


dixhuit commented Jan 24, 2016

> For some reason, and I can't remember why now, we stripped a lot of this out of the Vagrantfile.

I think the reason much of that was stripped out of the Vagrantfile was the introduction of the vagrant-hostsupdater plugin, which now handles a lot of what was previously done via the presence/absence of the hosts.ini file.

> If, by some miracle, there happens to be a way to detect if the box is running then use this.

Could we check the hosts file on the host system against certain regex to see if the box is up possibly? I just quickly checked my own file and noticed a couple of entries for VMs that aren't up, but they're all Vlad dev VMs that have likely gone awry in testing and general mucking about - by no means "normal use".
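As a sketch of that idea, a simple check for the box IP (192.168.100.100 in the log above) in the hosts file might look like this. This assumes vagrant-hostsupdater reliably adds the entry on up and removes it on halt/destroy; as noted, stale entries would defeat it:

```python
import re

def ip_in_hosts_file(ip, hosts_path="/etc/hosts"):
    """Return True if `ip` appears as an uncommented hosts entry.
    vagrant-hostsupdater writes the box's IP on 'up' and removes it on
    halt/destroy, so presence is a rough 'box is up' signal."""
    pattern = re.compile(r"^\s*" + re.escape(ip) + r"\s+\S+", re.MULTILINE)
    with open(hosts_path) as f:
        return bool(pattern.search(f.read()))
```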

@philipnorton42

> Could we check the hosts file on the host system against certain regex to see if the box is up possibly?

Sounds like it might work :) Especially as the hosts file is managed by a Vagrant plugin.

@wizonesolutions

If you delegate_to: 127.0.0.1 in Ansible, you could simply parse the output of vagrant status --machine-readable, no?
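That output is CSV rows of timestamp,target,type,data, so extracting the state is straightforward. A sketch (the function names are illustrative; the wrapper assumes vagrant is on PATH):

```python
import subprocess

def parse_vagrant_state(output, machine="default"):
    """Pull the named machine's state ('running', 'poweroff', ...) out
    of 'vagrant status --machine-readable' output, whose rows are
    timestamp,target,type,data. Returns None if no state row matches."""
    for line in output.splitlines():
        parts = line.split(",")
        if len(parts) >= 4 and parts[1] == machine and parts[2] == "state":
            return parts[3]
    return None

def box_is_running(machine="default"):
    """Shell out to vagrant and check whether the box reports 'running'."""
    out = subprocess.run(
        ["vagrant", "status", "--machine-readable"],
        capture_output=True, text=True,
    ).stdout
    return parse_vagrant_state(out, machine) == "running"
```

The destroy trigger could call something like box_is_running() before attempting the DB dump, though shelling out to vagrant from inside a trigger may be slow.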
