
s3 key.storage_class always 'STANDARD' #3008

Open

lajarre opened this issue Mar 5, 2015 · 4 comments

Comments
@lajarre commented Mar 5, 2015

Using boto 2.36.0

Given a key whose name corresponds to a Glacier-archived file (which was in STANDARD storage before), the following:

import boto.s3
conn = boto.s3.connect_to_region('eu-west-1', ...)
bu = conn.get_bucket('...')
k = bu.get_key(...)

print(k.storage_class)  # Gives 'STANDARD', even though the object is in Glacier
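
A possible workaround (a sketch, untested against this boto version; the bucket and key names below are placeholders): the bucket LIST response includes each key's storage class, so listing with the key name as a prefix and matching the name exactly can return the right value even when get_key() reports 'STANDARD'.

import boto.s3

conn = boto.s3.connect_to_region('eu-west-1')
bucket = conn.get_bucket('my-bucket')  # placeholder name

# get_key() issues a HEAD request, which boto does not use to update
# storage_class; the LIST response does carry <StorageClass>, so list
# with the key name as a prefix and match the name exactly.
for listed in bucket.list(prefix='path/to/archived-file'):  # placeholder key
    if listed.name == 'path/to/archived-file':
        print(listed.storage_class)  # 'GLACIER' for an archived object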
@jonathanwcrane (Contributor)

That's funny; there's also a bug where a restored object keeps a storage_class of "GLACIER" even though it is copyable and downloadable.

@jonathanwcrane (Contributor)

BTW this is a duplicate of #2280 and/or #1173

@jacksonofalltrades

This is extremely bad since our business logic depends heavily on storage_class being accurate. Is there an ETA on fixing this?

@jacksonofalltrades

@jonathanwcrane That's not a bug: restored objects are still managed by Glacier once they are in Glacier, so the storage class stays "GLACIER". There is a separate property you can examine on restored objects, called "ongoing_restore", to determine whether a restore is in progress, has completed, or has not yet been requested.
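
For reference, a minimal sketch of checking that property in boto 2 (bucket and key names are placeholders): after a get_key() HEAD request, ongoing_restore is None when no restore has been requested, True while one is in progress, and False once the temporary copy is available, at which point expiry_date is also set.

import boto.s3

conn = boto.s3.connect_to_region('eu-west-1')
bucket = conn.get_bucket('my-bucket')          # placeholder name
key = bucket.get_key('path/to/archived-file')  # placeholder key

if key.ongoing_restore is None:
    print('No restore requested (or object is not in Glacier)')
elif key.ongoing_restore:
    print('Restore in progress')
else:
    # Restore finished; a temporary copy exists until expiry_date.
    print('Restored, temporary copy expires: %s' % key.expiry_date)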
