Extremely high memory usage when running an incremental scan on 200k+ files #480
Comments
woah that's crazy. was this always the case? if not, maybe you could try git bisect to find the commit which introduced this?
No, it looks like this is fairly recent, I'd say the last 2-3 months. Unfortunately I don't have much time these days, but I'll do a bisect when I can get to it, thanks for the tip. It might take me a while though.
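For anyone following along, the bisect workflow suggested above looks roughly like this. The tag name and test script here are hypothetical placeholders, not part of gonic:

```shell
# Start a bisect between the current (leaky) commit and an older,
# known-good revision. "v0.15.0" and "check-memory.sh" are placeholders;
# substitute a commit/tag and a test that fits your setup.
git bisect start
git bisect bad HEAD
git bisect good v0.15.0

# Automate the search: git checks out each candidate commit and runs the
# script; exit 0 marks the commit good, exit 1-124 or 126-127 marks it bad.
git bisect run ./check-memory.sh

# Once the first bad commit is reported, return to the original branch.
git bisect reset
```

`git bisect run` does a binary search over the commit range, so even a few months of history only takes a handful of builds to narrow down.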
thanks! also I just pushed a commit which may help
Huge thanks for this. With 7cd1bee, memory usage stays around 100 MB during a scan and everything works as expected.
thanks for reporting the issue 👌 |
gonic version: from source, 88e58c0
if from docker, docker tag:
if from source, git tag/branch: master
When running an incremental scan over 200,000 files, memory usage ramps up to 5 GB or more.
I am running Arch Linux on a Raspberry Pi 4, using an SQLite database which is about 60 MB in size when the scan starts.
My music directory has 6 symlinks, which all point to various categories/genres.
The scan goes well until memory usage hits about 3-3.5 GB, then it starts increasing by roughly 100 MB per second.
At that point, the scanned directory entries are logged significantly more slowly.