
File System Output Cache can generate file paths that are too long - can't access the page at all #6115

Closed
joshberry opened this issue Dec 1, 2015 · 7 comments

Comments

@joshberry (Contributor)

Depending on the directory structure on the host and the parameters of a particular request, the file system output cache feature can generate file paths that are too long for the file system to handle. When this happens, an exception is thrown and the page cannot be accessed at all.

[Image: screenshot of the generic error page shown in place of the requested page]

The actual exception is below, but it gets caught and rethrown as what you see above:

```
Path.GetFullPath(mappedPath).StartsWith(Path.GetFullPath(basePath), StringComparison.OrdinalIgnoreCase)
'Path.GetFullPath(mappedPath).StartsWith(Path.GetFullPath(basePath), StringComparison.OrdinalIgnoreCase)'
threw an exception of type 'System.IO.PathTooLongException'  bool {System.IO.PathTooLongException}
```

I understand that the technique used for this feature relies on embedding the request parameters in the file name. But it would be much better if a check were added to prevent this failure, so that even when the page can't be cached, the non-cached page can still be served.

@sebastienros (Member)

I think we need to hash the URL to prevent that. The issue then is that the admin would not be able to show the correct URL that is cached, unless there is a separate file associated with the cache file that contains the original URL. Something like myfile+HASH.html and myfile+HASH.origin, with the .origin file containing the full URL it was created for. We could keep a prefix of the original name, truncated to a predefined size, so it's still possible to tell visually what the file is.
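A minimal sketch of that naming scheme, assuming a helper of my own invention (CacheFileNaming and GetCacheFileName are hypothetical names, not Orchard's API): the full URL is hashed so the file name length stays bounded, while a truncated, sanitized prefix of the URL keeps the file visually identifiable.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class CacheFileNaming
{
    // Length of the human-readable prefix kept from the original URL so
    // the admin UI can still hint at which page a cache file belongs to.
    private const int PrefixLength = 32;

    public static string GetCacheFileName(string url)
    {
        // Hash the full URL so the file name length stays bounded no
        // matter how long the path and query string are.
        string hash;
        using (var sha = SHA256.Create())
        {
            var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(url));
            hash = BitConverter.ToString(bytes).Replace("-", string.Empty).ToLowerInvariant();
        }

        // Sanitize the prefix: swap out characters that are invalid in
        // file names, then truncate to the predefined size.
        var invalid = Path.GetInvalidFileNameChars();
        var safe = new StringBuilder();
        foreach (var c in url)
        {
            safe.Append(Array.IndexOf(invalid, c) >= 0 ? '_' : c);
        }

        var prefix = safe.ToString();
        if (prefix.Length > PrefixLength)
        {
            prefix = prefix.Substring(0, PrefixLength);
        }

        return prefix + "+" + hash;
    }
}
```

The caller would then write the cached markup to GetCacheFileName(url) + ".html" and the original URL to the matching ".origin" file, as proposed above.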

@sebastienros (Member)

BTW, how do you find the perf with this provider?

@joshberry (Contributor, Author)

I'm just starting to use it as we're upgrading sites to 1.9.2, but the performance has been great so far. To my eye, it appears just as fast as the in-memory output cache.

Using a URL hash for the file name seems like a good solution. I have a temporary workaround in place that checks for file paths over 259 characters and bypasses caching for those requests, along the lines of the sketch below. Do you think that would be a useful temporary fix for 1.9.x? If so, I can submit it. I'd also be happy to help with the solution you proposed.
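For reference, a sketch of that kind of workaround; the class and method names are hypothetical. 259 is the usable portion of the classic Windows MAX_PATH limit of 260 characters, since one character is reserved for the terminating null.

```csharp
public static class OutputCacheGuard
{
    // Usable portion of the classic Windows MAX_PATH limit (260 chars
    // including the terminating null), matching the 259-character check
    // described above.
    private const int MaxPath = 259;

    // Returns false when the mapped cache path would be too long, so the
    // caller can serve the page uncached instead of letting
    // System.IO.PathTooLongException bubble up as an error page.
    public static bool CanCacheToFileSystem(string mappedPath)
    {
        return mappedPath.Length <= MaxPath;
    }
}
```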

@sebastienros (Member)

We can't bypass it; we need to support all URLs, so hashing is the way to go.
If you want to try it, please take a look at Media Profiles, which does the same thing.

@gvkries commented May 9, 2017

I just hit this bug (in 1.10.2) and found a closed pull request, #6570, which I think fixes this problem.
But as far as I can see, it was never merged into 1.10.x. Is there a reason the fix is not available in 1.10.x?

@sebastienros sebastienros modified the milestones: Orchard 1.10.x, 1.10.3 May 9, 2017
@Jetski5822 (Member)

Just hit this - darn!

sebastienros added a commit that referenced this issue Mar 29, 2018
Fixes #8004 #6115 
- Caching keys for filenames to prevent too long paths
- Separating metadata from content storage to optimize some scenarios
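
As an illustration of the second point in that commit, here is a minimal sketch of what separating metadata from content storage could look like; the CacheEntryMetadata type and the .meta extension are assumptions for illustration, not the actual implementation. Keeping a small metadata file next to each cached page means listing or validating the cache never requires reading the (potentially large) content file.

```csharp
using System;
using System.Globalization;
using System.IO;

public sealed class CacheEntryMetadata
{
    public string Url { get; set; }
    public DateTime CachedUtc { get; set; }

    public void Save(string basePath)
    {
        // myfile+HASH.html holds the content; myfile+HASH.meta holds the
        // original URL and the timestamp, one value per line.
        File.WriteAllLines(basePath + ".meta", new[]
        {
            Url,
            CachedUtc.ToString("o")
        });
    }

    public static CacheEntryMetadata Load(string basePath)
    {
        var lines = File.ReadAllLines(basePath + ".meta");
        return new CacheEntryMetadata
        {
            Url = lines[0],
            CachedUtc = DateTime.Parse(lines[1], null, DateTimeStyles.RoundtripKind)
        };
    }
}
```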
@jayharris (Contributor)

Because #7913 has been merged into 1.10.x, should this be closed?
