This repository has been archived by the owner on Feb 11, 2022. It is now read-only.
During provisioning, any attempt to ssh into the instance fails.
My setup uses the vagrant-aws and vagrant-triggers plugins.
Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "..."
    aws.secret_access_key = "..."
    aws.region = "us-east-1"
    aws.availability_zone = "us-east-1d"
    aws.instance_type = "t1.micro"
    aws.keypair_name = "..."
    aws.subnet_id = "..."
    aws.security_groups = ["..."]
    # Ubuntu 14.04.3 paravirtual.
    aws.ami = "ami-678b260c"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "..."
  end

  config.vm.provision :trigger do |trigger|
    trigger.fire do
      run "vagrant ssh --command 'curl -s http://169.254.169.254/latest/meta-data/instance-id >./instance_id.txt'"
    end
  end
end
'vagrant up' results in the following:
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Warning! You're launching this instance into a VPC without an
==> default: elastic IP. Please verify you're properly connected to a VPN so
==> default: you can access this machine, otherwise Vagrant will not be able
==> default: to SSH into it.
==> default: Launching an instance with the following settings...
<snip>
==> default: Running provisioner: trigger...
==> default: Executing command "vagrant ssh --command curl -s http://169.254.169.254/latest/meta-data/instance-id >./instance_id.txt"...
==> default: An action 'read_state' was attempted on the machine 'default',
==> default: but another process is already executing an action on the machine.
==> default: Vagrant locks each machine for access by only one process at a time.
==> default: Please wait until the other Vagrant process finishes modifying this
==> default: machine, then try again.
==> default:
==> default: If you believe this message is in error, please check the process
==> default: listing for any "ruby" or "vagrant" processes and kill them. Then
==> default: try again.
==> default: Command execution finished.
The command "vagrant ssh --command 'curl -s http://169.254.169.254/latest/meta-data/instance-id >./instance_id.txt'" returned a failed exit code. The
error output is shown below:
An action 'read_state' was attempted on the machine 'default',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.
If you believe this message is in error, please check the process
listing for any "ruby" or "vagrant" processes and kill them. Then
try again.
Note that this can be reproduced without the vagrant-triggers plugin by simply replacing the "config.vm.provision :trigger do |trigger| ..." block with:
config.vm.provision :shell, inline: 'sleep 60'
Attempting to ssh into the instance while the "sleep 60" is running results in the same error message shown above.
This is not the behavior with the virtualbox provider (i.e. attempting to ssh into the box while the "sleep 60" is running works without issue).
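For what it's worth, one way I could imagine sidestepping the lock is to have the trigger call ssh directly instead of a nested `vagrant ssh`, since only Vagrant subcommands contend for the machine lock. This is just a sketch, not something from my actual setup: the key path and hostname below are placeholders you would have to fill in (e.g. from the AWS console), and it assumes the instance is reachable directly.

```ruby
# Hypothetical workaround sketch: invoke ssh directly inside the trigger so
# the command does not re-enter Vagrant and hit the machine lock.
# "/path/to/private_key.pem" and "<instance-public-dns>" are placeholders.
config.vm.provision :trigger do |trigger|
  trigger.fire do
    run "ssh -i /path/to/private_key.pem -o StrictHostKeyChecking=no " \
        "ubuntu@<instance-public-dns> " \
        "'curl -s http://169.254.169.254/latest/meta-data/instance-id' > ./instance_id.txt"
  end
end
```

The trade-off is that you lose Vagrant's knowledge of the connection details, so the host and key must be supplied some other way.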
Thanks,
Lance
Same issue with the recently released vagrant-aws 0.7.0. Creating a VM with nothing but a shell provisioner that sleeps renders the VM unavailable for other commands during provisioning.
The VirtualBox provider doesn't have this limitation. Is there a reason the AWS provider needs to hold an exclusive lock during provisioning?