Error waiting for AMI #1533

Closed
delitescere opened this issue Sep 26, 2014 · 5 comments · Fixed by #1622

@delitescere
Contributor

Same error as #617, but after many minutes of waiting for the transfer.

Packer v0.7.1
Instance AMI

Timeouts are ultimately futile in a distributed environment.

What we really want is a way to retry, ideally without repeating all the expensive/heavy work (that is, idempotently). It might be that we can skip to certain states in Packer by passing appropriate IDs on the command line (like the URL of the AMI on S3 if it was bundled successfully).

In lieu of that, the Packer user should be able to specify timeouts and retry counts appropriate to their context. Packer can have sensible defaults, but those defaults aren't always adequate.
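To make that concrete, here is a minimal sketch (my illustration, not Packer's actual code) of a step wrapper where the retry count and backoff are supplied by the user rather than hard-coded:

    package awsutil

    import (
        "fmt"
        "time"
    )

    // retryStep runs step() up to attempts times, pausing between tries.
    // Both knobs would come from the user's template rather than being fixed.
    func retryStep(step func() error, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = step(); err == nil {
                return nil // step succeeded, no retry needed
            }
            time.Sleep(backoff) // wait before retrying the failed step
        }
        return fmt.Errorf("gave up after %d attempts: %v", attempts, err)
    }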

    amazon-instance: Bundle upload completed.
    amazon-instance:
    amazon-instance:
==> amazon-instance: Pausing after run of step 'StepUploadBundle'. Press enter to continue. 
==> amazon-instance: Registering the AMI...
==> amazon-instance: AMI: ami-47f0f802
==> amazon-instance: Waiting for AMI to become ready...
==> amazon-instance: Pausing after run of step 'StepRegisterAMI'. Press enter to continue. 
==> amazon-instance: Copying AMI (ami-47f0f802) to other regions...
    amazon-instance: Copying to: us-west-1
    amazon-instance: Copying to: eu-west-1
    amazon-instance: Copying to: ap-southeast-2
    amazon-instance: Waiting for all copies to complete...
==> amazon-instance: 1 error(s) occurred:
==> amazon-instance: 
==> amazon-instance: * Error waiting for AMI (ami-ecb5149b) in region ({eu-west-1 https://ec2.eu-west-1.amazonaws.com https://s3-eu-west-1.amazonaws.com  %!s(bool=true) %!s(bool=true) https://sdb.eu-west-1.amazonaws.com https://sns.eu-west-1.amazonaws.com https://sqs.eu-west-1.amazonaws.com https://iam.amazonaws.com https://elasticloadbalancing.eu-west-1.amazonaws.com https://autoscaling.eu-west-1.amazonaws.com https://rds.eu-west-1.amazonaws.com https://route53.amazonaws.com}): Get https://ec2.eu-west-1.amazonaws.com/?AWSAccessKeyId=AKIAI3I6RKNJYJ55APEQ&Action=DescribeImages&ImageId.1=ami-ecb5149b&Signature=X5F4Y8jarfyj567cVilxdQykuwFDAY1b8R2SZ4m4qmo%3D&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-26T07%3A28%3A56Z&Version=2014-05-01: dial tcp 178.236.4.54:443: i/o timeout
==> amazon-instance: Pausing before cleanup of step 'StepRegisterAMI'. Press enter to continue. 
@delitescere
Contributor Author

PS: The AMIs made it to those regions, so

==> Builds finished but no artifacts were created.

is a bit of a pork pie.

@nchammas
Contributor

> PS: The AMIs made it to those regions, so
> ==> Builds finished but no artifacts were created.
> is a bit of a pork pie.

I am seeing this as well with the amazon-ebs builder, most commonly when I specify ami_regions in my template: the initial AMI is created successfully, and the failure happens while it is being copied to the other regions.

Oftentimes the AMI region copies time out and Packer exits, reporting that no artifacts were created. But when I check manually, I find that some of the AMI copies succeeded; Packer didn't detect that and didn't clean up after itself. So I'm left with several AMIs and snapshots across several regions that I have to clean up myself.

As @delitescere suggested, having a way for Packer to automatically retry certain actions like AMI region copies or having Packer be able to restart at a certain point in the process (e.g. initial AMI is done; just continue from region copy onwards) would be very helpful. Furthermore, if Packer absolutely cannot continue, it would be nice if it correctly reported on or cleaned up any leftover AMIs and snapshots.
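Until Packer handles this itself, the cleanup is manual. A rough sketch using the AWS Go SDK (my choice for illustration; the region and IDs below are placeholders for whatever orphans are left behind):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        // Region where the orphaned copy lives (placeholder).
        svc := ec2.New(session.Must(session.NewSession(&aws.Config{
            Region: aws.String("eu-west-1"),
        })))

        // Deregister the leftover AMI copy...
        if _, err := svc.DeregisterImage(&ec2.DeregisterImageInput{
            ImageId: aws.String("ami-xxxxxxxx"), // placeholder ID
        }); err != nil {
            log.Fatal(err)
        }

        // ...then delete the EBS snapshot that backed it.
        if _, err := svc.DeleteSnapshot(&ec2.DeleteSnapshotInput{
            SnapshotId: aws.String("snap-xxxxxxxx"), // placeholder ID
        }); err != nil {
            log.Fatal(err)
        }
    }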

@nchammas
Contributor

For fun, I posted a $15 bounty for this issue.

mitchellh added a commit that referenced this issue Oct 28, 2014
builder/amazon: Extend timeout and allow user override [GH-1533]
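Judging by the logs in the next comment, the user override is the AWS_TIMEOUT_SECONDS environment variable. A sketch of how such an override is typically read (illustrative, not the commit's exact code):

    package awsutil

    import (
        "os"
        "strconv"
        "time"
    )

    // awsTimeout returns the wait allowance, honoring an AWS_TIMEOUT_SECONDS
    // override and falling back to the default otherwise.
    func awsTimeout() time.Duration {
        if v := os.Getenv("AWS_TIMEOUT_SECONDS"); v != "" {
            if secs, err := strconv.Atoi(v); err == nil && secs > 0 {
                return time.Duration(secs) * time.Second
            }
        }
        return 300 * time.Second // default allowance seen in the logs
    }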
@renat-sabitov

Not the same issue, but probably connected:

2014/10/30 10:42:47 [INFO] Packer version: 0.8.0 dev 399837b048e08fc25664728333c235fd09430f69
2014/10/30 10:42:47 Packer Target OS/Arch: linux amd64
2014/10/30 10:42:47 Built with Go Version: go1.3.3
.....
2014/10/30 10:42:48 ui: ==> amazon-ebs: Launching a source AWS instance...
2014/10/30 10:42:50 ui:     amazon-ebs: Instance ID: i-881c9b47
2014/10/30 10:42:50 ui: ==> amazon-ebs: Waiting for instance (i-881c9b47) to become ready...
2014/10/30 10:42:50 packer-builder-amazon-ebs: 2014/10/30 10:42:50 Waiting for state to become: running
2014/10/30 10:42:50 packer-builder-amazon-ebs: 2014/10/30 10:42:50 Allowing 300s to complete (change with AWS_TIMEOUT_SECONDS)
2014/10/30 10:43:05 packer-builder-amazon-ebs: 2014/10/30 10:43:05 Error on InstanceStateRefresh:
2014/10/30 10:43:05 ui error: ==> amazon-ebs: Error waiting for instance (i-881c9b47) to become ready:
2014/10/30 10:43:05 ui: ==> amazon-ebs: Terminating the source AWS instance...

Although we have a 300-second wait allowance, the wait errors out after only 15 seconds, with no explanation. It looks like the state-refresh loop aborts on the first API error instead of retrying until the deadline.

@renat-sabitov

This bug should be reopened; this time the wait errors after just 5 seconds:

2014/10/30 12:15:03 ui: ==> amazon-ebs: Waiting for AMI to become ready...
2014/10/30 12:15:03 packer-builder-amazon-ebs: 2014/10/30 12:15:03 Waiting for state to become: available
2014/10/30 12:15:03 packer-builder-amazon-ebs: 2014/10/30 12:15:03 Allowing 300s to complete (change with AWS_TIMEOUT_SECONDS)
2014/10/30 12:15:08 packer-builder-amazon-ebs: 2014/10/30 12:15:08 Error on AMIStateRefresh:
2014/10/30 12:15:08 ui error: ==> amazon-ebs: Error waiting for AMI:

renat-sabitov pushed a commit to renat-sabitov/packer that referenced this issue Dec 16, 2014
Relates to hashicorp#1533

AWS is eventually consistent, and an instance may not be visible for some time after creation. This fix eliminates the describe-instances call before going into the proper wait loop.
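In other words, a describe call that errors or returns nothing shortly after creation should be read as "not visible yet", not as failure. A sketch of that tolerance (my illustration; refresh stands in for DescribeInstances/DescribeImages):

    package awsutil

    import (
        "fmt"
        "time"
    )

    // waitVisible polls until the resource reaches the target state, treating
    // errors and missing results as eventual-consistency lag until the deadline.
    func waitVisible(refresh func() (state string, err error), target string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            state, err := refresh()
            if err == nil && state == target {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %q (last state %q, err: %v)", target, state, err)
            }
            time.Sleep(2 * time.Second) // the resource may simply not be visible yet
        }
    }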
@ghost locked and limited conversation to collaborators Apr 10, 2020