Amazon Import Post Processor Fails #52
Comments
Hi @jeremymcgee73 thanks for reaching out. Just a heads up: this issue moved to its new home, hashicorp/packer-plugin-amazon, as the Amazon components are now being maintained in their own repository. With that said, it is possible that this is an eventual-consistency issue. Does the resource eventually get created and become reachable on S3? If it is a timing issue, you can work around it by setting very high values for AWS_MAX_ATTEMPTS and AWS_POLL_DELAY_SECONDS. If after a long wait you still run into issues, I would double-check that you are able to create and read from the respective S3 resources.
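As a rough sketch of that workaround (the values here are illustrative, not recommendations; tune them to your image sizes):

```shell
# Raise Packer's AWS waiter limits before running the build.
# AWS_MAX_ATTEMPTS caps how many times Packer polls for the resource;
# AWS_POLL_DELAY_SECONDS is the wait, in seconds, between polls.
export AWS_MAX_ATTEMPTS=300
export AWS_POLL_DELAY_SECONDS=30
```

Then run `packer build` as usual in the same shell so the variables are inherited by the Packer process.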
Thanks for the reply! I am trying that now and will let you know what I find out. That does make sense! I thought about that, but didn't think about setting those values high. If that is the problem, would it be worth adding a pause and retry for this?
I'm not 100% sure yet, but I am pretty certain this has solved my problem. Usually 1 out of 5 runs failed. Thanks!
Thanks for the quick turnaround on testing @jeremymcgee73. If that is working, then I would say the retry logic already in place is working; it's just that the default values are not high enough, at least for your use case. If this is an issue for others, it might mean we need to reevaluate the defaults. Keeping this open for now. To better assist with this issue, I would recommend adding the aws_polling configuration option to your templates to avoid having to set ENV variables each time. This was added to override defaults for Amazon services whose wait times were not long enough for some users. Information on the configuration option can be found at https://www.packer.io/docs/builders/amazon/ebs#polling-configuration
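For reference, a minimal sketch of what that option looks like in a legacy JSON template (the bucket, region, and polling values are placeholders, not recommendations):

```json
{
  "post-processors": [
    {
      "type": "amazon-import",
      "s3_bucket_name": "my-import-bucket",
      "region": "us-east-1",
      "aws_polling": {
        "delay_seconds": 30,
        "max_attempts": 300
      }
    }
  ]
}
```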
I actually don't think that solved my problem. The OVA is getting copied to S3, because it remains after the run. The problem is very intermittent, maybe only happens every 10 runs. Let me know how else I can help. Thanks!
The settings I'm passing in as ENV vars:
I took another look at this; I don't believe the AWS poll intervals will help. I believe this is failing on the initial import step, not on the checks to see if it's done. Maybe this particular error could be caught and retried? I believe it may be the size of the images that creates the race condition. Maybe you could add a step to your tests that copies a big file to the image (getting the total up to 5 to 6 GB) before the post-processor is run.
Hi @nywilken 👋🏼 Just wondering how to set the aws_polling configuration option. We're using the hyperv-iso builder, so we have something like the following defined:
The documentation for the post-processor only mentions environment variables. In other builds where we are using the
Update: it seems I had the configuration wrong, as per the error message which I should have read properly. The
Perhaps it would be useful if the documentation for the post-processor mentioned the aws_polling configuration option.
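For anyone hitting the same placement confusion, here is a hedged sketch of an HCL2 template (builder name and polling values are illustrative) with aws_polling nested inside the amazon-import post-processor block rather than at the top level of the template:

```hcl
source "hyperv-iso" "example" {
  # ...builder settings elided...
}

build {
  sources = ["source.hyperv-iso.example"]

  post-processor "amazon-import" {
    # ...bucket, region, and other import settings...

    # aws_polling belongs here, inside the post-processor block,
    # not at the top level of the template.
    aws_polling {
      delay_seconds = 30
      max_attempts  = 300
    }
  }
}
```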
This issue was originally opened by @jeremymcgee73 as hashicorp/packer#10873. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Overview of the Issue
I am having a problem when using the Amazon Import post-processor after building an OVA with vmware-iso. The error does not happen every time; it seems to be random. I am setting AWS_POLL_DELAY_SECONDS=30 before running Packer.
Packer version
1.7.0
Operating system and Environment details
RHEL 7.8
VMware Workstation 15
Log Fragments and crash.log files