PUT Versioning Incorrectly Succeeds #1919

Closed
mike-bailey opened this issue Nov 16, 2018 · 25 comments
Labels
feature-request A feature should be added or improved. service-api General API label for AWS Services.

Comments

@mike-bailey

mike-bailey commented Nov 16, 2018

Please fill out the sections below to help us address your issue

Issue description

Despite the SDK reporting that versioning was enabled without issue, the next call I send fails, claiming that versioning is not enabled.

[screenshot of the error output]

I can understand if this call is asynchronous and the response to the PUT doesn't imply the operation has completed, but if that's the case there should be a callback.

aws-sdk-s3 1.16.0

ruby 2.3.1

require 'aws-sdk-s3'

# s3_client is an Aws::S3::Client instance configured earlier in the app
puts s3_client.put_bucket_versioning(
  bucket: 'datahere',
  versioning_configuration: {
    status: 'Enabled'
  }
).inspect

# Initialize cross-region replication
puts s3_client.put_bucket_replication(
  bucket: 'datahere',
  replication_configuration: { # required
    role: 'arn:aws:iam::xxxxxx:role/xxxxx', # required
    rules: [ # required
      {
        prefix: '*', # required
        status: 'Enabled', # required, accepts Enabled, Disabled
        destination: { # required
          bucket: 'arn:aws:s3:::namehere', # required
          storage_class: 'STANDARD' # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA
        }
      }
    ]
  },
  use_accelerate_endpoint: false
).inspect
@srchase srchase added the closing-soon This issue will automatically close in 4 days unless further comments are made. label Nov 17, 2018
@srchase
Contributor

srchase commented Nov 17, 2018

Thanks for opening this issue.

Just to be clear, versioning is successfully enabled on the origin bucket, but just not in time for the put_bucket_replication call to succeed?

@mike-bailey
Author

Seems like it, yes.

@mike-bailey
Author

Given they're run synchronously, I'd expect one to pass after the other.

@srchase srchase removed the closing-soon This issue will automatically close in 4 days unless further comments are made. label Nov 19, 2018
@srchase
Contributor

srchase commented Nov 19, 2018

Do you see this error every time? Or does put_bucket_replication sometimes succeed?

Also, are these newly created or existing buckets?

@srchase srchase added the closing-soon This issue will automatically close in 4 days unless further comments are made. label Nov 19, 2018
@mike-bailey
Author

It’s occasional. It’s an app that provisions a distinct stack so the bucket create comes a few calls earlier.

@srchase
Contributor

srchase commented Nov 19, 2018

@mike-bailey

Thanks for following up.

I have not been able to reproduce this issue, but the evidence indicates that the SDK is executing put_bucket_versioning and put_bucket_replication as expected. S3's underlying consistency model is the likely culprit for what's happening on the service side. Unfortunately, I don't have documentation from S3 that spells out the impact of enabling versioning on subsequent calls to add bucket replication.

If you capture the Request IDs, the S3 service team may be able to assist further on the AWS Developer Forums. Optionally, you could retry setting up replication when you encounter this error, or make a call to confirm that versioning is enabled before doing the put_bucket_replication call.
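A minimal sketch of those two suggestions combined, reusing the bucket name, role ARN, and s3_client from the snippet above; the rescued error class and retry count are assumptions, since the exact error isn't shown in the issue:

# Sketch: confirm versioning reports Enabled, then retry replication a few times.
# Aws::S3::Errors::ServiceError is a broad assumption about what gets raised here.
resp = s3_client.get_bucket_versioning(bucket: 'datahere')
warn "versioning not yet Enabled (status: #{resp.status.inspect})" unless resp.status == 'Enabled'

attempts = 0
begin
  s3_client.put_bucket_replication(
    bucket: 'datahere',
    replication_configuration: {
      role: 'arn:aws:iam::xxxxxx:role/xxxxx',
      rules: [{ prefix: '*', status: 'Enabled',
                destination: { bucket: 'arn:aws:s3:::namehere' } }]
    }
  )
rescue Aws::S3::Errors::ServiceError
  attempts += 1
  retry if attempts < 3
  raise
end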

@mike-bailey
Author

I did capture request IDs, so I'll post those.

Optionally, you could retry setting up replication when you encounter this error

Wouldn't this require each of my calls having a fallback for the subsequent calls? We make a number of S3 calls that are dependent on the prior one passing, so is the logic that it should go back a step if it fails? We'd risk infinite loops in that case so unfortunately that's a bit of a non-solution.

@mike-bailey
Author

mike-bailey commented Nov 19, 2018

Thank you for replying.

To clarify:

the evidence indicates that the SDK is executing put_bucket_versioning and put_bucket_replication as expected

I think it's a little strange to say this is working as expected: the call returns no errors, but the change didn't actually take effect in a timely fashion, so I'm not sure how that's useful to a developer. I filed it here and not on the support forums because, if this is expected behavior, there should be some sort of callback or block or something to indicate request status.

@mike-bailey
Author

mike-bailey commented Nov 19, 2018

Not a callback, but something to be able to determine if a call was actually completed.

@srchase srchase removed the closing-soon This issue will automatically close in 4 days unless further comments are made. label Nov 19, 2018
@srchase
Contributor

srchase commented Nov 19, 2018

Have you tried inserting a get_bucket_versioning call before the put_bucket_replication?

In my testing, that consistently returned "Enabled" (and the subsequent put_bucket_replication calls succeeded). I'm curious whether you see that returning "Disabled" or not.

Depending on the specifics of your app, you could implement doing a specific number of calls with get_bucket_versioning until "Enabled" is returned, before proceeding to put_bucket_replication. That would be akin to the waiters implemented in the SDK and might provide a workaround while following up with the S3 Service Team.
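A minimal sketch of that polling approach, reusing the bucket name from the original snippet; the attempt count and delay are arbitrary placeholder values:

# Poll get_bucket_versioning until it reports Enabled, akin to an SDK waiter.
# 10 attempts with a 2-second delay are arbitrary values for illustration.
enabled = false
10.times do
  if s3_client.get_bucket_versioning(bucket: 'datahere').status == 'Enabled'
    enabled = true
    break
  end
  sleep 2
end
raise 'bucket versioning never reported Enabled' unless enabled
# safe to call put_bucket_replication at this point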

I can pass along the Request IDs for S3 to take a look to offer further insight on what's happening on the service side.

@mike-bailey
Author

mike-bailey commented Nov 19, 2018

Thanks. I can get the request IDs in the coming hours, and I understand I can implement my own waiters, but this isn't the only AWS service we use, and not the only S3 interaction in our app, so would we be implementing waiters for all of it? We only spotted this issue a week or two ago, and we've been using this codebase for probably 6+ months.

@mike-bailey
Author

I'm curious if you see that returning "Disabled" or not.

I can’t think of how to reproduce this without polluting our account (we have a lot of SNS stuff): since it only happens on occasion, it would take a number of calls to replicate.

@srchase srchase added the service-api General API label for AWS Services. label Nov 19, 2018
@mike-bailey
Author

[screenshot, less redacted to show the request IDs]

@srchase
Contributor

srchase commented Nov 20, 2018

@mike-bailey

Thanks for replying. The S3 Service Team will need the bucket name, and the following headers from the responses:

x-amz-id-2
x-amz-request-id

Your screenshot only gives the partial request ID.

@srchase srchase added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Nov 20, 2018
@mike-bailey
Author

Is there anywhere I can put them out-of-band? I'm relatively low on the food chain at a forensic company so I have to limit public commentary on particulars like bucket names.

As for the request IDs, none of those are redacted, but I can pull whatever else I have in the logs, sure.

@srchase
Contributor

srchase commented Nov 21, 2018

@mike-bailey

Does your organization have an AWS Support plan?

You could open a case with AWS Premium Support.

@mike-bailey
Author

We have one planned, but haven't gotten it yet. Have to wait for a budget to hit that's way above my pay grade. 😄

@mike-bailey
Author

Granted, this made it easier to justify

@srchase
Contributor

srchase commented Nov 27, 2018

@mike-bailey

I'm going to close this issue for now. Happy to re-open if there's any followup once you've engaged Support and the Service Team.

@srchase srchase closed this as completed Nov 27, 2018
@mike-bailey
Author

I’m still confused as to how this isn’t an SDK issue. So the argument is that if there’s a PUT versioning request and then a PUT lifecycle request via the SDK, the SDK doesn’t guarantee they’ll take effect in order?

@mike-bailey
Author

And if the point is “yes, but implement a waiter”, shouldn’t there be a waiter available in the SDK, given this is a pretty common scenario in S3?

@srchase srchase added feature-request A feature should be added or improved. and removed response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. labels Nov 27, 2018
@srchase srchase reopened this Nov 27, 2018
@srchase
Contributor

srchase commented Nov 27, 2018

@mike-bailey

We can track this as a feature request to implement a waiter.
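For illustration only, usage could mirror the SDK's existing wait_until interface; the :bucket_versioning_enabled waiter name below is hypothetical and does not exist in the SDK today:

# Hypothetical waiter name, shown only to illustrate the feature request.
# Existing waiters such as :bucket_exists already use this wait_until interface.
s3_client.wait_until(:bucket_versioning_enabled, bucket: 'datahere') do |w|
  w.max_attempts = 10
  w.delay = 2
end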

@mullermp
Contributor

mullermp commented Jul 9, 2020

I went to implement this waiter, but I'm noticing that S3's get/put bucket replication is instant. Perhaps at the time there was some delay? There's nothing to wait for.

@mullermp mullermp closed this as completed Jul 9, 2020
@mike-bailey
Author

mike-bailey commented Dec 2, 2020

It didn't have strong consistency. With the new re:Invent announcements, I think this is actually done now. :)

@mullermp
Contributor

mullermp commented Dec 2, 2020

I'm glad to hear it. I'm sorry for the delay on this. When this issue was opened, I wasn't working on the Ruby SDK. I don't remember the details, but I looked at this on July 9 (two years later) and it looked instantaneous to me; I couldn't reproduce it. But I'm glad it's finally addressed. :D
