
S3 lifecycle fixes #1831

Merged

Conversation

@scotwk (Contributor) commented Nov 22, 2017

@kapilt - @jtroberts83 helped me fix two issues with the S3 lifecycle code. One question inline.

  1. I was using 'items' instead of 'properties' for four of the objects in the schema. This PR fixes that.
  2. When the user did not have permission to get or put a lifecycle for a bucket, it would crash the entire run. I hardened it to log a warning and skip those buckets.

Issue #1830

@scotwk scotwk changed the title Fix for S3 lifecycle schema WIP - Fix for S3 lifecycle schema Nov 22, 2017
Scot Kronenfeld added 2 commits November 22, 2017 16:01

…le so that a single bucket with access problems doesn't kill the entire run.
@scotwk scotwk changed the title WIP - Fix for S3 lifecycle schema WIP - S3 lifecycle fixes Nov 22, 2017
@jtroberts83 (Contributor) commented:

@kapilt - I have tested the latest changes and they work! Thanks Scot

if e.response['Error']['Code'] == 'AccessDenied':
    log.warning("Access Denied Bucket:%s while applying lifecycle" % bucket['Name'])
else:
    raise e
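
For context, a minimal self-contained sketch of how that except branch might sit around the per-bucket put call; the function name, logger setup, and the surrounding structure are illustrative, not the exact Custodian code:

import logging

from botocore.exceptions import ClientError

log = logging.getLogger('custodian.s3')


def apply_lifecycle(s3, bucket, lifecycle_config):
    # Illustrative helper, not the actual Custodian action method.
    try:
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket['Name'],
            LifecycleConfiguration=lifecycle_config)
    except ClientError as e:
        if e.response['Error']['Code'] == 'AccessDenied':
            # Log a warning and skip this bucket so one access problem
            # doesn't kill the entire run.
            log.warning(
                "Access Denied Bucket:%s while applying lifecycle",
                bucket['Name'])
            return bucket['Name']  # report the denied bucket to the caller
        raise
    return None
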
@scotwk (Contributor, Author) commented Nov 23, 2017

@kapilt - Is it appropriate to log and skip any buckets that cannot be accessed? If so, the set-bucket-encryption action probably needs similar logic.

@kapilt (Collaborator) replied:

Yes, that's appropriate. Ideally we can return the set of denied bucket names, e.g. {'denied': [bucket_names]}; action results are serialized as a JSON output file.
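
A hedged sketch of what that return shape could look like in the action's process method, assuming a per-bucket helper (like the one sketched above) that returns the bucket name on AccessDenied and None otherwise:

def process(self, buckets):
    # Sketch only: process_bucket is assumed to return the bucket name
    # when access was denied and None on success.
    denied = []
    for bucket in buckets:
        name = self.process_bucket(bucket)
        if name is not None:
            denied.append(name)
    if denied:
        # Action results are serialized to a JSON output file, so the
        # denied bucket names become part of the run record.
        return {'denied': denied}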

@jhnlsn (Contributor) commented Nov 27, 2017

The error catching is handled in process. The reason it isn't working is that .exception is called before .result. process needs to be changed to handle exceptions properly so that it doesn't error out when a single bucket has an exception. I don't completely understand why the behavior is that way, though.

@jhnlsn (Contributor) commented Nov 27, 2017

OK, I see: exception and result can't be used together. exception() will return the exception if one was raised; however, that does not stop result() from also raising the exception if result() is called later.
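
To illustrate the point: concurrent.futures.Future.exception() returns the raised exception without re-raising it, but a later Future.result() call on the same future will raise it again, so the loop has to branch on exception() and skip result() for failed futures. A small standalone example:

from concurrent.futures import ThreadPoolExecutor


def work(n):
    if n == 2:
        raise ValueError("boom")
    return n * n


with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(work, n) for n in range(4)]
    results, errors = [], []
    for f in futures:
        err = f.exception()  # returns the exception, does not raise it
        if err is not None:
            errors.append(err)
            continue  # skip f.result(), which would re-raise the error
        results.append(f.result())

print(results)  # [0, 1, 9]
print(errors)   # [ValueError('boom')]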

@scotwk (Contributor, Author) replied:

@kapilt - Is it appropriate to change the lifecycle action to be a subclass of ScanBucket? That would work with denied_buckets. Otherwise I can move the denied_buckets logic up a level into BucketActionBase.

@kapilt (Collaborator) replied:

ScanBucket is for actions that are looking at objects, so it wouldn't be appropriate. Simply returning a dictionary from the action's process method would suffice.

@@ -2582,7 +2582,7 @@ def process_bucket(self, bucket):
         s3 = bucket_client(local_session(self.manager.session_factory), bucket)

         # Adjust the existing lifecycle by adding/deleting/overwriting rules as necessary
-        config = (bucket['Lifecycle'] or {}).get('Rules', [])
+        config = (bucket.get('Lifecycle') or {}).get('Rules', [])
@scotwk (Contributor, Author) commented Nov 23, 2017

@kapilt - This is necessary because in the S3 augment step, if the permission fails for a given method, it doesn't set the key. I wonder if it should set the default instead. This is around line 438.
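
A minimal illustration of why the .get form matters here, assuming the augment simply leaves the 'Lifecycle' key unset when the get call is denied:

# A bucket whose get-lifecycle call was denied: the augment step never
# set the 'Lifecycle' key at all.
bucket = {'Name': 'example-bucket'}

# Old form: raises KeyError because the key is missing.
# config = (bucket['Lifecycle'] or {}).get('Rules', [])

# New form: a missing key behaves like an empty lifecycle.
config = (bucket.get('Lifecycle') or {}).get('Rules', [])
assert config == []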

@kapilt (Collaborator) replied:

We should probably filter out buckets before processing if we don't have access to get the lifecycle configuration; it will be listed in an annotation (deniedmethods, sp?) per the bucket augment.
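
A hedged sketch of that kind of pre-filter; the method name, the annotation key (written here as 'c7n:DeniedMethods'), and the denied-method string are assumptions, not the project's exact identifiers:

def filter_accessible_buckets(self, buckets):
    # Sketch: drop buckets whose lifecycle configuration we could not read,
    # since processing them would blindly overwrite or delete existing rules.
    processable = []
    for b in buckets:
        denied = b.get('c7n:DeniedMethods', [])  # annotation key assumed
        if 'get_bucket_lifecycle_configuration' in denied:
            self.log.warning(
                "Skipping bucket:%s lifecycle config not readable", b['Name'])
            continue
        processable.append(b)
    return processable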

@scotwk (Contributor, Author) replied:

Yes - good point. Fixed.

It would be odd to have permissions allowing a person to put the lifecycle but not get it. But if that does happen, this would end up deleting the previous lifecycle.

@scotwk scotwk changed the title WIP - S3 lifecycle fixes S3 lifecycle fixes Nov 23, 2017
@@ -2498,15 +2498,15 @@ class Lifecycle(BucketActionBase):
             'And': {
                 'type': 'object',
                 'additionalProperties': False,
-                'items': {
+                'properties': {
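
For reference, a small standalone check of why this matters: in JSON Schema, 'properties' constrains the keys of an object while 'items' only applies to arrays, so with 'additionalProperties': False the old form rejected every key. The sub-keys below are illustrative, not the project's exact schema:

from jsonschema import validate

and_schema = {
    'type': 'object',
    'additionalProperties': False,
    'properties': {
        'Prefix': {'type': 'string'},
        'Tags': {'type': 'array', 'items': {'type': 'object'}},
    },
}

# Passes: both keys are declared under 'properties'.
validate({'Prefix': 'logs/', 'Tags': [{'Key': 'k', 'Value': 'v'}]}, and_schema)

# With the old 'items' form there were no declared properties, so
# 'additionalProperties': False rejected every key of the 'And' object.
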
@kapilt (Collaborator) commented:

So I'm a bit surprised that this got through unit test runs; CI runs with schema validation on for policies executed in unit tests.

@kapilt (Collaborator) commented:

Looks like it passed because the lifecycle options with problematic schemas weren't being tested.

@scotwk (Contributor, Author) replied:

@kapilt - There are a ton of options to the lifecycle, and they can't all be specified together in a valid policy, so I am not exercising every option. I could add one test just for validating the schema that tries to use every option and only validates the policy instead of running it. Would that be useful?
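
A sketch of the kind of validate-only test being proposed; the load_policy helper, the action name, and the rule fields shown are assumptions rather than the project's exact test API:

def test_lifecycle_schema_validates(self):
    # Hypothetical validate-only test: action name, rule fields, and the
    # load_policy(validate=True) helper are assumptions.
    policy = {
        'name': 's3-lifecycle-schema-check',
        'resource': 's3',
        'actions': [{
            'type': 'configure-lifecycle-rule',  # action name assumed
            'rules': [{
                'ID': 'schema-check',
                'Status': 'Enabled',
                'Filter': {
                    'And': {
                        'Prefix': 'logs/',
                        'Tags': [{'Key': 'team', 'Value': 'infra'}],
                    },
                },
                'Expiration': {'Days': 30},
            }],
        }],
    }
    # Validation only; the policy never runs against real buckets.
    self.load_policy(policy, validate=True)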

@kapilt (Collaborator) left a comment:

lgtm, thanks

@kapilt kapilt merged commit 4146cbc into cloud-custodian:master Dec 2, 2017
lamyanba pushed a commit to lamyanba/cloud-custodian that referenced this pull request Apr 23, 2019