vfs: improve VFS cache to accept very long file/directory names #1907
Comments
|
The path components in the reported error are longer than the local filesystem allows. I guess this is normally stored on drive, which allows very large path names, so this hasn't been a problem before. What do you think @remusb? |
|
or maybe hash the file names to 128 bytes? |
|
It's the first time this has been seen in cache because the files are now stored on disk too. It wouldn't work with crypt + local either. Sadly, mapping filenames would add complexity to cache which I don't think would benefit it right now. I would rather fix this by allowing crypt to be wrapped by cache, which would address this along with the performance and cosmetic issues it causes. |
|
By that last comment, do you mean allowing remote->crypt->cache to work well? And if so, is there a way to encrypt the cached contents? (Is that what the password is for when setting up a cache remote?) |
|
will it be fixed? |
|
The easiest way is to make crypt work behind cache which is something I intended to do anyway. |
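For reference, "crypt behind cache" would mean the cache remote points at the crypt remote, so the on-disk cache only ever sees the short decrypted names and encryption happens below it. A minimal rclone.conf sketch of that ordering is below; the remote names (gdrive, gcrypt, gcache) and the Drive backend are placeholders, and credentials/passwords are omitted.

```ini
# Hypothetical sketch: cache wraps crypt, which wraps the cloud remote.
[gdrive]
type = drive
# ... Drive credentials ...

[gcrypt]
type = crypt
remote = gdrive:encrypted
# ... crypt passwords ...

[gcache]
type = cache
remote = gcrypt:
# mounting gcache: would then cache plaintext (short) names on disk
```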
|
Is there any chance for a fix soon? |
|
So this particular issue can't be fixed in a traditional way. It's a limitation of the OS, combined with the fact that cache writes to the disk rather than directly to the cloud provider in order to provide persistence across rclone restarts. There are multiple options to overcome it:
Note that 1 isn't really a fix for this, it's just a workaround. |
|
Noted with Thanks. |
|
Yes, I do agree that one shouldn't need to rename their files, but there's not much we can do about an OS limitation either. If you already have file names longer than 143 characters, it won't work anyway. |
|
Experienced this too. Wow, long names! 282 characters! What about introducing a second level for crypt? |
|
Running into the same thing too, and I was thinking we could side-step it by using a filesystem with more generous limits, but it seems the 255-byte limit is pretty much the norm even for filesystems designed (for a change) to store far more data than we'll ever get to have (barring some major singularity). Really?! BRB, checking the calendar... yep, it's the end of 2019, I wasn't imagining it. Anyway, the only one that seems to allow more (4032 bytes) is Reiser4. I'll do some tests to see if this is really the case and that they don't mean the total path length or something. |
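One way to run that test (and to check whether the limit applies per name component rather than to the whole path) is simply to try creating a file with a 256-byte name. Here is a minimal Go sketch, assuming a Unix-like system where the failure surfaces as ENAMETOOLONG; the probe length and target directory are arbitrary choices for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"syscall"
)

func main() {
	dir := os.TempDir()              // probe whatever filesystem backs the temp dir
	name := strings.Repeat("a", 256) // one byte over the usual 255-byte component limit

	f, err := os.Create(filepath.Join(dir, name))
	if errors.Is(err, syscall.ENAMETOOLONG) {
		fmt.Println("filesystem rejects a 256-byte name component (ENAMETOOLONG)")
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	f.Close()
	os.Remove(f.Name())
	fmt.Println("filesystem accepted a 256-byte name component")
}
```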
|
Same here with cached crypt: presumably if the ordering 'cloud remote -> crypt -> cache' could be used without the known issue, it would be less of a problem. |
|
@ncw @remusb However, the current limits for a typical OS (max path length 255) in a typical setup (cache in a user directory like |
|
@ivandeex [1] https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation#enable-long-paths-in-windows-10-version-1607-and-later |
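As a sketch of what the opt-in described in [1] looks like: it is a registry switch (plus a longPathAware manifest entry on the application side), and it lifts the 260-character total-path limit on Windows 10 1607+, not the 255-character per-component limit discussed elsewhere in this thread.

```reg
Windows Registry Editor Version 5.00

; Enables Win32 long path support (Windows 10 1607+); applications must also
; declare longPathAware in their manifest for this to take effect.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```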
|
Correct, still a problem.
… On Jan 25, 2021, at 7:32 AM, Nick Craig-Wood wrote:
This is fixed with the VFS cache in 4f8ee73
@jarfil - those are the total lengths. I think this original issue was concerned with a path element being > 256 bytes, which I think is still an issue on Windows/macOS/Linux.
|
|
I propose the following enhancements to the VFS layer
Does this proposal sound reasonable? |
Some thoughts
|
I weighed the instability/system dependency of such a check on one side against a way to keep existing cache entries unchanged on the other. Introducing a hard threshold on path segment length will make the algorithm stable. However, it will make a few existing cache entries with 250-256 character segments "jump" into a "long" cache subtree. We would have to rearrange the control flow in vfscache a little to read metadata first and forcibly move such entries to a new place, which adds an extra disk access. Nevertheless, I totally agree with your approach.
This will not work: 250 chars below the threshold + 64 chars of the hash makes 314, which will not be accepted by the local FS. We would have to replace every long path segment completely with its hash (whether it's a directory name in the middle or a file name at the end). |
Good point. The process that runs through the cache initially could fix these. I'd rather keep them in the same
I meant 250-32 if using a 32 byte hash here, so files would become exactly 250 bytes long. This helps users when trying to recover lost files in the VFS cache: the truncated name would then become exactly 250 bytes long with the hash on the end. |
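As a rough illustration of the scheme being discussed (truncate any over-long path segment and append a hash so the result is exactly 250 bytes), here is a minimal Go sketch. The 250-byte threshold matches the discussion above; the choice of MD5 (32 hex characters) and the function names are assumptions for illustration only, not rclone's actual vfscache code.

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"strings"
)

// Assumed values for illustration; the real implementation may differ.
const (
	maxSegment = 250 // target length for shortened path segments
	hashChars  = 32  // hex-encoded MD5 is 32 characters
)

// shortenSegment returns the segment unchanged if it fits, otherwise it
// truncates it and appends a hash of the full original name so the result
// is exactly maxSegment bytes long and still effectively unique.
// Note: the truncation is byte-based, so a multi-byte UTF-8 rune could be cut.
func shortenSegment(seg string) string {
	if len(seg) <= maxSegment {
		return seg
	}
	sum := md5.Sum([]byte(seg))
	return seg[:maxSegment-hashChars] + hex.EncodeToString(sum[:])
}

// shortenPath applies the transformation to every segment, whether it is a
// directory name in the middle or a file name at the end.
func shortenPath(p string) string {
	parts := strings.Split(p, "/")
	for i, part := range parts {
		parts[i] = shortenSegment(part)
	}
	return strings.Join(parts, "/")
}

func main() {
	long := strings.Repeat("x", 282) // e.g. the 282-character name mentioned above
	short := shortenPath("dir/" + long + "/0")
	fmt.Println(len(strings.Split(short, "/")[1])) // prints 250
}
```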
Unable to write cache chunk due to filename being too long
2017/12/11 07:05:37 ERROR : worker-1 <ur76lghmk0lgu8no3i3i9iphqn8d0ri3ad7rhs2mhdn74igln7od06pnglvrkj77313471lo18omdv6526tit9h46t6k6r9llocpj69lb15a838i41tlekoth970eqvju2ct6pnsjpsqrnj0bm4soahn34q68turb7udlirm9ql16j216d5684fa3udr1fcapervhrknhpc7dggt3qvo5t8ooihp656ulthofgd4l9l68psbi388elk7dq3unkhk>: failed caching chunk in storage 0: open /tmp/rclone-cache-streamer2/streamer2_cache/s5dri5l6lhg70tqicfopng89h4/pkk7v920jgq9jbo3v1lg6goo80/p2kfesuq32ru2q93qh1otf7j2k/pau1o64bl8uiertgbck2vcpmt0pte6vqvfs9t321l4maac6gm1rigf9s7v62pm17f5ds07ter3if5r64p7iubj4kqskb3l1oofac4que2lcn97ab5eor2uqpvsu235bl/ur76lghmk0lgu8no3i3i9iphqn8d0ri3ad7rhs2mhdn74igln7od06pnglvrkj77313471lo18omdv6526tit9h46t6k6r9llocpj69lb15a838i41tlekoth970eqvju2ct6pnsjpsqrnj0bm4soahn34q68turb7udlirm9ql16j216d5684fa3udr1fcapervhrknhpc7dggt3qvo5t8ooihp656ulthofgd4l9l68psbi388elk7dq3unkhk/0: file name too long
rclone v1.38-223-g7c972d37β