Amazon S3 Glacier Deep Archive storage class #10681
Not only is Glacier Deep Archive supported, you can now PUT directly to Glacier and Glacier Deep Archive within the PUT request. Could we get an update to support this functionality? Thanks :-) It would also be nice to make sure any objects required by Cryptomator to browse a vault are not pushed to this storage class unless a user explicitly requests it. It's understandable that I would need to wait for an object to be restored before I can get it from the vault. https://aws.amazon.com/about-aws/whats-new/2018/11/s3-glacier-api-simplification/
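For reference, a minimal sketch of the direct-PUT behaviour described in the AWS announcement above, using boto3 (not Cyberduck's own code; the bucket and key names are placeholders):

```python
# Sketch only: upload straight into Glacier Deep Archive by setting the
# storage class on the PUT request itself, no lifecycle transition needed.
import boto3

s3 = boto3.client("s3")

with open("archive.bin", "rb") as data:
    s3.put_object(
        Bucket="example-bucket",      # placeholder bucket name
        Key="backups/archive.bin",    # placeholder object key
        Body=data,
        StorageClass="DEEP_ARCHIVE",  # or "GLACIER"
    )

# Before the object can be downloaded again it has to be restored;
# Deep Archive restores take roughly 12 hours (Standard) or up to
# 48 hours (Bulk).
s3.restore_object(
    Bucket="example-bucket",
    Key="backups/archive.bin",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)
```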
I think it would be great to at least be able to push to both Glacier and Glacier Deep Archive directly, since we understand there is a long restore time from Glacier.
This is not working fully for me on 7.8.5 on Mac: if I drag from a "localhost" pane in Cyberduck to an S3 pane, it gets saved to normal S3, not Glacier. I've set Preferences > S3 > Default Storage Class to "Glacier Deep Archive". If I drag a file or folder from a Cyberduck local pane to my Cyberduck S3 bucket pane, it gets saved as the regular Amazon S3 storage class (confirmed in the S3 web console). If I drag a file or folder from my Mac's Finder directly into Cyberduck's S3 window, the file gets saved correctly to Glacier Deep Archive.
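As an aside, and purely as an illustrative sketch (bucket and key names are placeholders), the storage class an object actually landed in can also be checked with boto3 rather than the web console:

```python
# Sketch: confirm which storage class an uploaded object ended up in.
import boto3

s3 = boto3.client("s3")
head = s3.head_object(Bucket="example-bucket", Key="backups/archive.bin")

# STANDARD objects omit the StorageClass field, so default accordingly.
print(head.get("StorageClass", "STANDARD"))
```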
Replying to [comment:6 claudes]:
When dragging from a local disk browser window we do not properly set all attributes for the copy transfer. We will fix this in a separate issue.
Replying to [comment:7 dkocher]:
In b20cc2f.
With AWS GDA now out and usable, it would be lovely to be able to directly access it.