This is a clone of #1916 (comment) since the issue is still occurring.
I commented with #1916 (comment), but since that didn't reopen the issue, I think the best thing to do is to open a new bug report.
Here are the exact commands I use:
$ aws s3 cp s3://bucket-name/dirname dirname --recursive
download failed: s3://bucket-name/dir-name/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.gz to dir-name/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.gz [Errno 36] File name too long: '/home/nav/dir-name/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.gz.8C2e65Fa'
If I'm understanding this correctly, this happens because I'm using eCryptfs, which requires filenames to be shorter than 140 characters (based on https://unix.stackexchange.com/a/32834/215614). So the fix would be to check the actual filesystem's filename limit and build the temp file name based on that, or simply to lower the 255-character limit to 140.
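For illustration, here is a minimal sketch (in Python, since the AWS CLI is) of what checking the filesystem's limit could look like. The names `TEMP_SUFFIX_LEN`, `max_name_length`, and `safe_temp_name` are hypothetical, not actual CLI internals:

```python
import os

# Hypothetical: "." plus an 8-character random suffix, matching the
# ".8C2e65Fa" seen in the error above.
TEMP_SUFFIX_LEN = 9

def max_name_length(directory):
    """Ask the filesystem for its per-name limit instead of assuming 255
    (eCryptfs, for example, caps encrypted filenames well below that)."""
    try:
        # Unix-only; fall back to 255 where pathconf is unavailable.
        return os.pathconf(directory, 'PC_NAME_MAX')
    except (AttributeError, OSError, ValueError):
        return 255

def safe_temp_name(directory, filename, suffix):
    """Truncate the base name so that name + temp suffix fits the limit."""
    limit = max_name_length(directory)
    if len(filename) + len(suffix) > limit:
        filename = filename[:limit - len(suffix)]
    return filename + suffix
```

With something like this, the download above would get a truncated temp name on eCryptfs instead of failing with [Errno 36].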
@TheNavigat - Thank you for your post. Currently we use a max file length of 255. To make this work with every file system, we would need to check the system's filename limit before creating the temp file. Marking this as a bug.
From the article you linked, it looks like if filenames are not encrypted, you can safely write filenames of up to 255 characters and encrypt their contents, since the filenames written to the lower filesystem will simply match. In the meantime, you can change your code to use 140 as the max file length to make it work.
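In case it helps, here is a rough user-side stopgap (a sketch using boto3; the bucket name and prefix are placeholders, and 128 simply leaves headroom below the ~140-character cap for the temp suffix) that downloads each object under a truncated local name:

```python
import os
import boto3

LIMIT = 128  # headroom below eCryptfs's ~140-character name cap

s3 = boto3.resource('s3')
bucket = s3.Bucket('bucket-name')  # placeholder bucket name

os.makedirs('dir-name', exist_ok=True)
for obj in bucket.objects.filter(Prefix='dir-name/'):
    name = obj.key.rsplit('/', 1)[-1]
    if not name:  # skip the "directory" placeholder key itself
        continue
    # Truncate so the local name (plus any temp suffix) stays legal.
    bucket.download_file(obj.key, os.path.join('dir-name', name[:LIMIT]))
```

Note that truncation can collide for keys that share a long common prefix, so this is only a stopgap until the CLI checks the limit itself.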
Checking in - I saw a related comment here: aws/aws-cli#3514 (comment). 255 characters does seem to be the system default, but I believe S3 itself allows keys of up to 1024 characters. Regardless, this specific issue seems related to a third-party filesystem, and a workaround was found, so I think this issue can be closed. Please let us know if you have any follow-up questions.