Large file upload #4
Hello. I am currently on leave, so I can be slow in responding. Are you sure you're using the latest version of the software? (tag v0.7beta https://github.com/vsespb/mt-aws-glacier/zipball/v0.7beta == current master branch). I will try to reproduce with 20 Gb when back from vacation (next week). Thanks.
Hi, thanks for getting back to me, and for creating such a useful program. I am using the current version. As I said, when I upload the 20 GB file I get that error. When I try to upload a 23 GB file I get a "400 Bad Request", but the 10 GB file uploads fine. Both errors happen towards the end of the upload, after the offset is near completion. Let me know if you have any questions. Hope you have a great vacation. Thanks.
Hello. I was able to reproduce it. (Below is the stack trace for the latest v0.7beta version, just for clarity.) DIE outside EVAL block [0] I enabled verbose HTTP logging in the code, and it looks to me like an Amazon problem (at least there are HTTP 500s in the responses, which means a problem on their side), so I posted a question on the Amazon forums: https://forums.aws.amazon.com/thread.jspa?threadID=106284&tstart=0 Before they answer, I'll try to find a workaround (such as using fewer concurrent workers or doing more retries of failed requests).
Amazon is investigating this issue, but it seems I found that with a 20 Gb file, the number of parts to upload is ~9500, while the maximum limit is 10,000 (I use a 2 MB part size). So I added an option "--partsize" - you can specify 1, 2, 4, 8, 16 ... any power-of-two number. Try a number higher than 2. I was able to upload that file using --partsize=4 (internally the script received four HTTP 500s before the 5th request succeeded), so --partsize=8 or 16 might work even better.
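The part-count arithmetic above can be sketched as follows. This is a hypothetical Python helper for illustration only (mtglacier itself is Perl, and the names `parts_needed` / `min_power_of_two_partsize` are invented here); it assumes a 20 Gb file means 20 * 10^9 bytes, which matches the ~9500-part figure.

```python
import math

MAX_PARTS = 10_000  # Amazon Glacier's per-upload part limit

def parts_needed(file_bytes: int, partsize_mb: int) -> int:
    """How many multipart-upload parts a file needs at a given part size."""
    return math.ceil(file_bytes / (partsize_mb * 1024 * 1024))

def min_power_of_two_partsize(file_bytes: int) -> int:
    """Smallest power-of-two part size (in MB) keeping the count within MAX_PARTS."""
    partsize_mb = 1
    while parts_needed(file_bytes, partsize_mb) > MAX_PARTS:
        partsize_mb *= 2
    return partsize_mb

twenty_gb = 20 * 10**9  # the ~20 Gb file from the report
print(parts_needed(twenty_gb, 2))  # 9537 parts - uncomfortably close to 10,000
print(parts_needed(twenty_gb, 4))  # 4769 parts with --partsize=4
```

With 2 MB parts the file sits just under the limit, which is why bumping --partsize to 4 or more gives comfortable headroom.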
Amazon answered: https://forums.aws.amazon.com/thread.jspa?messageID=390950
so I will change my code to perform an (almost) unlimited number of retries when finishing the multipart upload (currently there are 5, with a 1-second delay).
Perfect. I will test further when you do your next release with the retry-count update. Thanks again for patching this so quickly and keeping me up to date.
Done: 100 retries with a progressive delay. I would still suggest using a large "--partsize" (Amazon charges for each HTTP request, so a bigger part size means fewer requests and lower charges).
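The "retries with progressive delay" idea might be sketched like this. This is a Python illustration of the pattern, not mtglacier's actual Perl code; `send_request` is a hypothetical stand-in for the HTTP call that finishes the multipart upload, and the 30-second cap is an assumption, not the tool's real value.

```python
import time

def finish_with_retries(send_request, max_retries=100, sleep=time.sleep):
    """Retry a flaky HTTP call, sleeping a little longer after each failure."""
    for attempt in range(1, max_retries + 1):
        status = send_request()
        if status < 500:             # success or a client error: stop retrying
            return status
        sleep(min(attempt, 30))      # progressive delay, capped at 30 seconds
    raise RuntimeError("still failing after %d retries" % max_retries)
```

Passing `sleep` in as a parameter keeps the backoff testable; with the default of `time.sleep`, four HTTP 500s followed by a success (as in the 20 Gb reproduction above) would cost about 10 seconds of waiting.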
I'm trying the newer version (0.74beta) to download 2 files and get this error; the older version works fine with the same command line. MT-AWS-Glacier, part of MT-AWS suite, Copyright (c) 2012 Victor Efimov http://mt-aws.com/ Version 0.74beta { {"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'POST\n/-/vaults/TA/jobs\n\ncontent-type:application/x-www-form-urlencoded; charset=utf-8\nhost:glacier.us-east-1.amazonaws.com\nx-amz-date:20121116T201753Z\nx-amz-glacier-version:2012-06-01\n\ncontent-type;host;x-amz-date;x-amz-glacier-version\n361acdfa41b281d2... PARENT Exit
Fixed; see issue #9.
I know this is an old thread but I hope it's ok to bring this up here. I have some files that are in the 2 - 3 TB range that I'm looking to upload. If this is supported, what partsize and concurrency would you recommend? Let's say the system has 96GB of RAM and 24 cores to work with. |
It should work with mtglacier, but I have not tested with such huge files. Amazon's limitations: a multipart upload can consist of at most 10,000 parts, and each part can be at most 4 GB.
Thus you need to choose a "--partsize" large enough that your file fits within 10,000 parts (for a 3 TB file that means at least 512 MB per part).
If you're getting errors, you should try decreasing concurrency. If mtglacier crashes during upload (say, because of a new, unknown bug), you'll have to begin from scratch - it does not remember uploaded parts in persistent storage.
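A rough sizing check for archives in this range, under the 10,000-part limit discussed earlier in the thread (a hypothetical helper in Python for illustration; not part of mtglacier, which is Perl):

```python
import math

def min_partsize_mb(file_bytes: int, max_parts: int = 10_000) -> int:
    """Smallest power-of-two part size (MB) keeping the part count <= max_parts."""
    partsize_mb = 1
    while math.ceil(file_bytes / (partsize_mb * 1024**2)) > max_parts:
        partsize_mb *= 2
    return partsize_mb

print(min_partsize_mb(3 * 10**12))  # 512 -> --partsize=512 (or larger) for a 3 TB file
```

So for 2-3 TB archives, --partsize=512 is the minimum that fits; going larger also means fewer requests and therefore lower per-request charges.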
I get the following error when uploading a 20 GB file. It works when I upload a 10 GB file.
DIE outside EVAL block [0]
Call stack:
main::process_forks(./mtglacier.pl:83)
ChildWorker::process(./mtglacier.pl:175)
GlacierRequest::finish_multipart_upload(ChildWorker.pm:54)
main::ANON(GlacierRequest.pm:284)
Fatal Error: 0 Can't call method "header" on an undefined value at GlacierRequest.pm line 284, line 961.
DIE outside EVAL block [0]
Call stack:
main::process_forks(./mtglacier.pl:83)
ParentWorker::process_task(./mtglacier.pl:213)
ParentWorker::wait_worker(ParentWorker.pm:29)
main::ANON(ParentWorker.pm:53)
Fatal Error: 0 Unexpeced EOF in Pipe at ParentWorker.pm line 53, line 960.