
many subsequent calls to disk_free_space cause large directories to load very slowly #28344

Closed
geluk opened this issue Aug 7, 2021 · 2 comments

geluk commented Aug 7, 2021

When navigating to a directory with many subdirectories in the web UI, the PROPFIND request to fetch the directory contents takes a long time to complete (12 seconds in my case). After some digging around I found out that this is almost entirely due to the disk_free_space function being called once for every subdirectory, with each call taking around 150ms. To verify that this is indeed the problem, I temporarily replaced the function call with a dummy value, which reduced the execution time of the PROPFIND request to under a second.

  • When navigating to a directory in the web UI, a PROPFIND request is made, which includes <d:quota-available-bytes />.
  • apps/dav/lib/Connector/Sabre/Directory.php:getQuotaInfo() gets called once for every subdirectory. getQuotaInfo() in turn calls:
  • lib/private/legacy/OC_Helper.php:getStorageInfo() which calls $sourceStorage->free_space(), which goes through a few layers of abstraction and ends up in:
  • lib/private/Files/Storage/Local.php:free_space(). Here the PHP function disk_free_space() gets called, which takes, in my case, about 150ms to return.
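The cost pattern described by the call chain above can be reproduced with a short Python sketch (an analogue, not Nextcloud code: `shutil.disk_usage()` stands in for PHP's `disk_free_space()`, and the directory used is illustrative). On a network-backed mount, each iteration would pay the full statfs round trip:

```python
import os
import shutil
import time

def time_free_space_calls(data_dir: str) -> float:
    """Call a free-space check once per subdirectory, as the PROPFIND
    handler effectively does, and return the cumulative time spent."""
    total = 0.0
    for entry in os.scandir(data_dir):
        if not entry.is_dir():
            continue
        start = time.perf_counter()
        shutil.disk_usage(entry.path)  # analogue of PHP's disk_free_space()
        total += time.perf_counter() - start
    return total

print(f"time in free-space calls: {time_free_space_calls('.'):.3f} s")
```

With ~80 subdirectories at ~150ms per call, this loop alone accounts for the 12-second PROPFIND observed above.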

My data directory is a mounted SMB share, which I assume is the reason why this function call takes a little longer. Running df -h on the share likewise takes about 150ms.

For now, I've moved the data directory to NFS (which for some reason doesn't seem to suffer from this slowdown), which has dramatically improved performance - the PROPFIND now takes 250ms to complete. While this fixed the problem for me, I'm still reporting the issue here since it's probably still worth fixing (perhaps the amount of free space can be cached for a short while per filesystem, for instance?). Also, I've seen a few other reports of slow performance (#23930, #24641, #25386), which could possibly be related to this, so perhaps my findings can be of use in resolving those issues. If not, feel free to close this.
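The short-lived per-filesystem cache suggested above could look roughly like this (a sketch only, not Nextcloud code; the one-second TTL and the mount-point cache key are assumptions):

```python
import shutil
import time

_FREE_SPACE_TTL = 1.0  # seconds; assumed trade-off between staleness and speed
_cache: dict[str, tuple[float, int]] = {}  # mount point -> (timestamp, free bytes)

def cached_free_space(path: str, mount_point: str) -> int:
    """Return free bytes for the filesystem containing `path`, reusing a
    recent answer for the same mount point instead of asking the kernel
    once per subdirectory."""
    now = time.monotonic()
    hit = _cache.get(mount_point)
    if hit is not None and now - hit[0] < _FREE_SPACE_TTL:
        return hit[1]
    free = shutil.disk_usage(path).free  # the expensive call on network mounts
    _cache[mount_point] = (now, free)
    return free
```

Since every subdirectory of a single data directory lives on the same mount, one PROPFIND would then trigger only one real free-space query instead of one per subdirectory.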

@geluk added the labels "0. Needs triage: Pending check for reproducibility or if it fits our roadmap" and "bug" on Aug 7, 2021
@szaimen added the labels "1. to develop: Accepted and waiting to be taken care of", "enhancement", "feature: filesystem", and "performance 🚀", and removed "bug" and "0. Needs triage" on Aug 8, 2021
szaimen (Contributor) commented Aug 8, 2021

cc @nextcloud/server

juliushaertl (Member) commented

Caching within the Local storage class seems a bit risky to me, since the free space might be queried before and after a file operation; the cache would then return outdated information and could have quite some unexpected side effects. Maybe there is a mount parameter for the SMB mount that improves this; otherwise, NFS would definitely be the recommended way for such a scenario. For SMB there is the external storage integration.
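The staleness hazard described here can be made concrete with a self-contained sketch (illustrative only, not Nextcloud code): a TTL cache keeps serving the pre-write value until it either expires or is explicitly invalidated, which is exactly the "queried before and after a file operation" case.

```python
import time

class FreeSpaceCache:
    """Sketch of the hazard: a TTL cache returns a stale free-space number
    if a write happens between two queries inside the TTL window."""
    def __init__(self, ttl: float = 1.0):
        self.ttl = ttl
        self._entry = None  # (timestamp, free bytes)

    def get(self, query_fs):
        now = time.monotonic()
        if self._entry and now - self._entry[0] < self.ttl:
            return self._entry[1]  # possibly stale
        free = query_fs()
        self._entry = (now, free)
        return free

    def invalidate(self):
        """Would have to be called after every file operation to avoid
        reporting outdated free space."""
        self._entry = None

free = 1000
cache = FreeSpaceCache()
assert cache.get(lambda: free) == 1000
free -= 100                             # a write shrinks the real free space...
assert cache.get(lambda: free) == 1000  # ...but the cache still reports 1000
cache.invalidate()
assert cache.get(lambda: free) == 900   # correct again after invalidation
```

So any cache would need invalidation hooks on every write path, which is the side-effect surface the comment is warning about.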
