High memory consumption with more than 8M files #28
Comments
You're probably right. I have not done memory optimization yet.
I use '-depth 1', so in this case the program needs to keep in memory only the first level while scanning. I think this should not require too much memory. If memory were freed while scanning (e.g. based on the -depth level), it would definitely help to keep consumption low. I believe it can be solved easily. For information, here is the call stack after exhausting all free 4 GB of RAM while scanning: [call stack omitted]
All the best.
That isn't so. Depth defines only which results will be printed, but a full scan must still be done to know the size of the root directory.
Yes, a full scan must be done; however, when the scan of one branch is finished, all nodes below '-depth x' could be forgotten (freed), because they are not considered or used later in the printing phase. This can save a large amount of memory while scanning. For example, when printing with '-depth 2', only the top two levels of the tree (such as directories b and c in my example) need to be kept; everything deeper can be freed as soon as its size has been added to its parent.
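A minimal sketch of this idea, assuming a Go implementation (the names `Node` and `scanDir` are illustrative, not the tool's actual code): sizes are still accumulated for the full tree, but subtrees deeper than the depth limit are dropped as soon as they have been summed into their parent, so the garbage collector can reclaim them.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Node is a hypothetical in-memory representation of a directory.
type Node struct {
	Name     string
	Size     int64
	Children []*Node
}

// scanDir walks the tree rooted at path. The full scan is unavoidable
// (the root size needs every file), but child Nodes are only kept down
// to maxDepth; deeper subtrees are summed into the parent's Size and
// then dropped, so the GC can reclaim them.
func scanDir(path string, depth, maxDepth int) (*Node, error) {
	n := &Node{Name: filepath.Base(path)}
	entries, err := os.ReadDir(path)
	if err != nil {
		return n, err
	}
	for _, e := range entries {
		child := filepath.Join(path, e.Name())
		if e.IsDir() {
			c, err := scanDir(child, depth+1, maxDepth)
			if err != nil {
				continue // skip unreadable directories
			}
			n.Size += c.Size
			if depth < maxDepth {
				// Keep: this child is still needed for printing.
				n.Children = append(n.Children, c)
			}
			// Otherwise c goes out of scope here and can be freed.
		} else if info, err := e.Info(); err == nil {
			n.Size += info.Size()
		}
	}
	return n, nil
}

func main() {
	root, _ := scanDir(".", 0, 2)
	fmt.Printf("%s: %d bytes, %d direct children kept\n",
		root.Name, root.Size, len(root.Children))
}
```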
You are right! But that is only a memory optimization; it would not decrease the time.
A nice approach to using multiple threads for scanning the disk tree is WinDirStat (https://windirstat.net/), which scans multithreaded with a limit on the number of threads.
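A WinDirStat-style cap on concurrent scanning could be sketched in Go with a semaphore channel; everything here (function names, the cap of 8 workers) is hypothetical, just to illustrate bounding concurrent directory reads:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"sync/atomic"
)

// scanParallel sums file sizes under root, reading directories
// concurrently but with at most maxWorkers directory reads in
// flight at once (a semaphore-style cap, similar in spirit to
// WinDirStat's thread limit).
func scanParallel(root string, maxWorkers int) int64 {
	var total int64
	sem := make(chan struct{}, maxWorkers)
	var wg sync.WaitGroup

	var walk func(dir string)
	walk = func(dir string) {
		defer wg.Done()
		sem <- struct{}{} // acquire a worker slot
		entries, err := os.ReadDir(dir)
		<-sem // release the slot before recursing (avoids deadlock)
		if err != nil {
			return
		}
		for _, e := range entries {
			p := filepath.Join(dir, e.Name())
			if e.IsDir() {
				wg.Add(1)
				go walk(p)
			} else if info, err := e.Info(); err == nil {
				atomic.AddInt64(&total, info.Size())
			}
		}
	}

	wg.Add(1)
	walk(root)
	wg.Wait()
	return atomic.LoadInt64(&total)
}

func main() {
	fmt.Println("total bytes:", scanParallel(".", 8))
}
```

A production scanner would more likely use a fixed worker pool with a work queue rather than one goroutine per directory, since on a tree like the one reported here (~500,000 directories) even parked goroutines cost memory.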
@jurajazz, hi, can you test a new version before I publish it? I made it so that file metrics are not kept in memory if the real depth of a file is greater than the "depth" parameter. I also added a system-memory-allocated metric to the results. I did not specifically optimize the time; I think execution time is related more to disk speed than to parallel computation.
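If the tool is written in Go (an assumption), the added memory metric could plausibly be read from the runtime's own counters; a minimal sketch, not the tool's actual code:

```go
package main

import (
	"fmt"
	"runtime"
)

// reportMemory prints how much memory the Go runtime has obtained
// from the OS (Sys) and how much is currently allocated on the heap
// (Alloc). A "system memory allocated" metric like the one mentioned
// above could be derived from these counters.
func reportMemory() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap alloc: %.1f MB, obtained from OS: %.1f MB\n",
		float64(m.Alloc)/(1<<20), float64(m.Sys)/(1<<20))
}

func main() {
	reportMemory()
}
```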
@jurajazz please see above |
Hi Alexander, I have started to use the Linux command du compiled for Windows (part of the Git package), which does something similar with minimal memory consumption. I can also test your new version with optimized memory; however, instead of a compiled executable, I would prefer to compile it myself for security reasons. Could you please commit the source changes, e.g. into a special branch, or just zip the sources? It would also save me some time if you could describe how you suggest compiling it on Windows. Juraj
Oh, OK, the sources are updated. In my case it was 203 MB without the optimization and 28 MB with it (for a 101.62 GB disk, depth=2, limit=20).
Hi,
I have been using diskusage.exe for more than a year to monitor the most space-consuming directories. It is a very simple and handy tool.
However, while scanning one of my disks (containing logs), which reports:
Overall info:
Total time: 4h35m55.7915268s
Total dirs: 499520
Total files: 8887953
Total links: 0
Total size: 10.45 Tb
diskusage.exe (on Windows 64-bit) consumes more than 3 GB of RAM. It also took more than 4 hours.
Is there some way to optimize at least the memory consumption, or also the time needed?
Jurajazz