s3_bucket: migrate to boto3 #37189
Conversation
Force-pushed from 2eba9a5 to 20c3229
    s3_client.delete_bucket_tagging(Bucket=name)
except (BotoCoreError, ClientError) as e:
    module.fail_json_aws(e, msg="Failed to delete bucket tags")
current_tags_dict = tags
Here's a sporadic test failure that a custom waiter would probably fix (well, not a custom waiter like those in module_utils/aws/waiters, but just a loop that waits and checks, since an actual waiter won't behave correctly when the same number of tags is added and removed):
TASK [s3_bucket : Do not specify any tag to ensure previous tags are not removed] **********************************************
task path: /Users/shertel/Workspace/ansible/test/integration/targets/s3_bucket/tasks/main.yml:184
File lookup using /Users/shertel/Workspace/ansible/test/integration/targets/s3_bucket/templates/policy.json as file
Using module file /Users/shertel/Workspace/ansible/lib/ansible/modules/cloud/amazon/s3_bucket.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: shertel
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358 `" && echo ansible-tmp-1521813741.78-18776467509358="` echo /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358 `" ) && sleep 0'
<127.0.0.1> PUT /Users/shertel/.ansible/tmp/ansible-local-98052I5QPSp/tmpmypRU6 TO /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358/s3_bucket.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358/ /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358/s3_bucket.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/tmp/ansible-test-coverage-2Ex1l6/coverage/injector.py /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358/s3_bucket.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/shertel/.ansible/tmp/ansible-tmp-1521813741.78-18776467509358/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"aws_access_key": "redacted",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"ceph": false,
"ec2_url": null,
"force": false,
"name": "ansible-test-shertel-osx-46829947-testbucket-ansible-complex",
"policy": "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Action\": [\"s3:GetObject\"], \"Sid\": \"AddPerm\", \"Resource\": [\"arn:aws:s3:::ansible-test-shertel-osx-46829947-testbucket-ansible-complex/*\"], \"Effect\": \"Allow\", \"Principal\": \"*\"}]}",
"profile": null,
"region": "us-east-1",
"requester_pays": false,
"s3_url": null,
"security_token": "",
"state": "present",
"tags": null,
"validate_certs": true,
"versioning": false
}
},
"name": "ansible-test-shertel-osx-46829947-testbucket-ansible-complex",
"policy": {
"Statement": [
{
"Action": "s3:GetObject",
"Effect": "Allow",
"Principal": "*",
"Resource": "arn:aws:s3:::ansible-test-shertel-osx-46829947-testbucket-ansible-complex/*",
"Sid": "AddPerm"
}
],
"Version": "2012-10-17"
},
"requester_pays": false,
"tags": {
"example": "tag1-udpated"
},
"versioning": {
"MfaDelete": "Disabled",
"Versioning": "Suspended"
}
}
TASK [s3_bucket : assert] ******************************************************************************************************
task path: /Users/shertel/Workspace/ansible/test/integration/targets/s3_bucket/tasks/main.yml:194
fatal: [localhost]: FAILED! => {
"assertion": "not output.changed",
"changed": false,
"evaluated_to": false
}
It successfully removed a tag in the previous task, so the module should wait until the bucket reflects the correct tags so that the next task is idempotent.
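A minimal sketch of the kind of wait-and-check loop described above, assuming a boto3 `s3_client` and the module's tag dict; the helper name `wait_tags_apply` and the retry counts are illustrative, not taken from this PR:

```python
import time

from botocore.exceptions import ClientError

def wait_tags_apply(s3_client, name, expected_tags_dict, retries=12, delay=5):
    # Poll until get_bucket_tagging returns the tags we just wrote, or give up.
    current = {}
    for _ in range(retries):
        try:
            tag_set = s3_client.get_bucket_tagging(Bucket=name)['TagSet']
            current = {t['Key']: t['Value'] for t in tag_set}
        except ClientError as e:
            # A bucket with no tags raises NoSuchTagSet instead of
            # returning an empty TagSet.
            if e.response['Error']['Code'] != 'NoSuchTagSet':
                raise
            current = {}
        if current == expected_tags_dict:
            return current
        time.sleep(delay)
    return current  # let the caller decide how to report the mismatch
```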
    module.fail_json(msg=e.message)
    s3_client.delete_bucket(Bucket=name)
except (BotoCoreError, ClientError) as e:
    module.fail_json_aws(e, msg="Failed to delete bucket")
I'm getting sporadic errors in the tests with this:
fatal: [localhost]: FAILED! => {
"changed": false,
"error": {
"bucket_name": "ansible-test-shertel-osx-73152613.testbucket.ansible",
"code": "NoSuchBucket",
"message": "The specified bucket does not exist"
},
"invocation": {
"module_args": {
"aws_access_key": "redacted",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"ceph": false,
"ec2_url": null,
"force": false,
"name": "ansible-test-shertel-osx-73152613.testbucket.ansible",
"policy": null,
"profile": null,
"region": "us-east-1",
"requester_pays": false,
"s3_url": null,
"security_token": "",
"state": "absent",
"tags": null,
"validate_certs": true,
"versioning": null
}
},
"msg": "Failed to delete bucket: An error occurred (NoSuchBucket) when calling the DeleteBucket operation: The specified bucket does not exist",
"response_metadata": {
"host_id": "kfrPIUSyMWBlirnpkbgMHiUbYgrq1LyYF50BVqdwKpnPJ4UuR3CB54TgbQ+cU+s7+Hlq6iTWdRM=",
"http_headers": {
"content-type": "application/xml",
"date": "Fri, 23 Mar 2018 13:43:56 GMT",
"server": "AmazonS3",
"transfer-encoding": "chunked",
"x-amz-id-2": "kfrPIUSyMWBlirnpkbgMHiUbYgrq1LyYF50BVqdwKpnPJ4UuR3CB54TgbQ+cU+s7+Hlq6iTWdRM=",
"x-amz-request-id": "4E6E0CBF718620B7"
},
"http_status_code": 404,
"request_id": "4E6E0CBF718620B7",
"retry_attempts": 0
}
}
I think the issue is that the bucket may still be being created, or may be in the process of being deleted (I suspect both, because I ran the tests twice and they failed twice, in different tests, with the same error, but I only had one bucket left to delete manually). Using the S3 BucketExists and BucketNotExists waiters would probably fix this.
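For reference, `bucket_exists` and `bucket_not_exists` are built-in boto3 S3 waiters; a sketch of how they could be used here (the `WaiterConfig` values are illustrative, not necessarily what the PR adopts):

```python
# After create_bucket, block until the bucket is actually visible:
s3_client.get_waiter('bucket_exists').wait(
    Bucket=name,
    WaiterConfig={'Delay': 5, 'MaxAttempts': 12},
)

# After delete_bucket, block until it is really gone:
s3_client.get_waiter('bucket_not_exists').wait(
    Bucket=name,
    WaiterConfig={'Delay': 5, 'MaxAttempts': 12},
)
```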
I'm also seeing a sporadic error with idempotence for creation, which I think would be fixed with the BucketExists waiter:
TASK [s3_bucket : Try to update the same bucket with the same values] **********************************************************
task path: /Users/shertel/Workspace/ansible/test/integration/targets/s3_bucket/tasks/main.yml:30
Using module file /Users/shertel/Workspace/ansible/lib/ansible/modules/cloud/amazon/s3_bucket.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: shertel
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223 `" && echo ansible-tmp-1521813525.36-213064420310223="` echo /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223 `" ) && sleep 0'
<127.0.0.1> PUT /Users/shertel/.ansible/tmp/ansible-local-97538M9gW9j/tmpjcF4y2 TO /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223/s3_bucket.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223/ /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223/s3_bucket.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/tmp/ansible-test-coverage-hq_Wp7/coverage/injector.py /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223/s3_bucket.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/shertel/.ansible/tmp/ansible-tmp-1521813525.36-213064420310223/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"aws_access_key": "redacted",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"ceph": false,
"ec2_url": null,
"force": false,
"name": "ansible-test-shertel-osx-20178879-testbucket-ansible",
"policy": null,
"profile": null,
"region": "us-east-1",
"requester_pays": false,
"s3_url": null,
"security_token": "",
"state": "present",
"tags": null,
"validate_certs": true,
"versioning": null
}
},
"name": "ansible-test-shertel-osx-20178879-testbucket-ansible",
"policy": null,
"requester_pays": false,
"tags": {},
"versioning": {
"MfaDelete": "Disabled",
"Versioning": "Disabled"
}
}
TASK [s3_bucket : assert] ******************************************************************************************************
task path: /Users/shertel/Workspace/ansible/test/integration/targets/s3_bucket/tasks/main.yml:37
fatal: [localhost]: FAILED! => {
"assertion": "not output.changed",
"changed": false,
"evaluated_to": false
}
Force-pushed from 7b84e63 to ef42ace
@s-hertel I wasn't able to reproduce the sporadic errors. However, I've added a "waiter loop" for tags and boto3 waiters for bucket creation and bucket deletion. Could you give it a try? :)
@julienvey Thanks, I'll test that out today. :-) Since we've started having CI trouble with unstable tests, I've changed my testing setup a little to automate running the tests n times while recording any failures, which gives a more accurate picture (the errors are difficult to reproduce).
@julienvey I did some more thorough testing yesterday. I'm getting about 28/50 sporadic failures on this branch for the s3_bucket tests. I know it's hard to fix things you can't reproduce, so I'm working on a commit to add to this branch. I don't want to spam you with tracebacks, but so we're on the same page, here are the two most frequent ones (in addition to a number of assertions that occasionally fail, usually because the changed status is not what is expected):
@s-hertel No problem. Could you share the code you use to iterate over the tests? That way I can try to reproduce on my side.
@julienvey Sure. It's an ugly hack, though; I'm sure there are more elegant solutions. s3_bucket was easy to add it to because all the registered vars are the same; it's messier with other tests. s-hertel@3eb7861 (a rough sketch of the same idea follows below). If you're able to reproduce the failures now, I'll let you work on this and spend my time elsewhere, but let me know if you still can't reproduce and I'll pick this back up.
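The linked commit implements the looping inside the test tasks themselves; purely as a hypothetical sketch of the same "run n times, record failures" idea driven from outside, assuming ansible-test is on PATH and run from the ansible source tree:

```python
import subprocess

# Run the integration target repeatedly and keep the output of failing runs.
failures = 0
for i in range(50):
    result = subprocess.run(
        ['ansible-test', 'integration', 's3_bucket'],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
    )
    if result.returncode != 0:
        failures += 1
        with open('failures.log', 'ab') as f:
            f.write(('=== run %d ===\n' % i).encode())
            f.write(result.stdout)
print('%d/50 runs failed' % failures)
```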
@s-hertel I'm able to reproduce some failures with your code, thanks! It should be much easier to fix now :)
Force-pushed from bff881e to 31f8b13
@s-hertel So, it was much easier to reproduce but not so easy to fix :) I've been struggling with the AWS API and the eventual consistency of the S3 bucket itself (not even the S3 objects). I've seen similar issues in other AWS-related projects.
I've been using your loop test. With the last commit, out of 200 runs I've had only one error (a bucket policy that failed to apply), so I'm pretty happy with it. Could you give it a try and have a look at the code to see if the changes are OK?
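A hedged sketch of one way to absorb that last eventual-consistency gap for policies, writing and then re-reading until the API reflects the change; the helper name and retry counts are illustrative, not necessarily what this PR does:

```python
import json
import time

from botocore.exceptions import ClientError

def put_bucket_policy_and_wait(s3_client, name, policy, retries=12, delay=5):
    # Write the policy, then re-read it until the API reflects the change.
    s3_client.put_bucket_policy(Bucket=name, Policy=json.dumps(policy))
    for _ in range(retries):
        try:
            current = json.loads(
                s3_client.get_bucket_policy(Bucket=name)['Policy'])
        except ClientError as e:
            if e.response['Error']['Code'] != 'NoSuchBucketPolicy':
                raise
            current = None
        # Note: dict comparison assumes the returned policy round-trips to
        # the same Python structure; a real module would compare
        # canonicalized policies.
        if current == policy:
            return
        time.sleep(delay)
    raise RuntimeError('bucket policy was not visible within the timeout')
```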
SUMMARY
Following #36941, and in order to be able to implement #35110, this PR migrates the s3_bucket module to boto3. The set of implemented features has not changed. Indirectly fixes #37419 & #37383.
ISSUE TYPE
COMPONENT NAME
s3_bucket
ANSIBLE VERSION
ADDITIONAL INFORMATION
Some portions of the code (the part that creates a boto connection for Ceph, FakeS3, or AWS) were taken from the aws_s3 module. I have not been able to test the module with anything other than the real AWS S3.