Feature Request: Add multithreading / parallelization #1
Comments
Hi, thanks Rafi.
Parallelization would help a lot in CPU-bound situations. For example, I often grep very large source code repositories that end up fully cached in the file system cache after the first search; then grepWin pegs 100% of a single CPU core, and if it could use 8 cores it would be roughly 4-8 times faster. Some regex-based searches also tend to be CPU-heavy even when the IO is uncached. AMD is now releasing more high-core-count CPUs, which would increase the advantage further.
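For illustration only (this is not grepWin's actual code), here is a minimal C++ sketch of the idea described above: a shared atomic index hands out files from the search list, and one worker thread per hardware core runs the regex over the files it claims. The file names and the pattern are placeholders.

```cpp
// Minimal sketch of per-file parallel search: a shared atomic counter doles
// out file indices, and one worker per hardware thread greps its files.
#include <algorithm>
#include <atomic>
#include <fstream>
#include <iostream>
#include <mutex>
#include <regex>
#include <string>
#include <thread>
#include <vector>

int main()
{
    // Hypothetical input: the list of files to search and the pattern.
    std::vector<std::string> files = { "a.cpp", "b.cpp", "c.h" };
    std::regex pattern("TODO");

    std::atomic<size_t> next{0};   // next unclaimed file index
    std::mutex          outMutex;  // serialize result output

    auto worker = [&]() {
        // Claim file indices until the list is exhausted.
        for (size_t i = next++; i < files.size(); i = next++) {
            std::ifstream in(files[i]);
            std::string line;
            size_t lineNo = 0;
            while (std::getline(in, line)) {
                ++lineNo;
                if (std::regex_search(line, pattern)) {
                    std::lock_guard<std::mutex> lock(outMutex);
                    std::cout << files[i] << ":" << lineNo << ": " << line << "\n";
                }
            }
        }
    };

    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back(worker);
    for (auto& t : pool)
        t.join();
}
```

Handing out whole files (rather than splitting a single file across threads) keeps the results per file contiguous and avoids splitting matches across chunk boundaries, which is why the fully cached, many-file case benefits the most.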
I agree that parallelization would be very useful. I am new to the program and am performing a search over 44,000 files as I write this. The search is taking a long time and is probably CPU-bound, although I don't have the metrics to rule out its being memory-bound. I have an 18-core (36-thread) machine, so I suspect things could be several times faster.
32-core Ryzen user here, working on a UE4 game with a massive number of files. I have been using grepWin with UE4 for years and would love this feature :)
First of all, great program! I've been using AstroGrep, which is also a fantastic program, but I'm currently performing searches on massive amounts of data (~2,000 text files, 40 GB total), and while I've found grepWin to be 50% faster than AstroGrep, it could still be much, much faster if it supported multithreading/parallelization. I have an 8-core CPU, so there are plenty of threads available, yet the CPU is barely utilized at all. The same goes for my SSD, which sits at ~5% utilization during a search. Unsurprisingly, there's no difference with a RAM disk (it actually ran a hair faster on the SSD, though that could just be margin of error). Searching all the files for a string takes 8 minutes, but it should take a tenth of that time or less if the hardware were fully used.
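As another hedged sketch, assuming a C++17 toolchain with parallel algorithm support, the same per-file fan-out can be written with far less plumbing using std::for_each and a parallel execution policy. The file list and pattern are placeholders, and this is not a claim about how grepWin would actually implement the feature.

```cpp
// Same idea with C++17 parallel algorithms: let the runtime spread the
// per-file work across cores instead of managing threads by hand.
#include <algorithm>
#include <execution>
#include <fstream>
#include <iostream>
#include <mutex>
#include <regex>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> files = { "log1.txt", "log2.txt", "log3.txt" };
    std::regex pattern("ERROR");
    std::mutex outMutex;  // serialize result output across workers

    std::for_each(std::execution::par, files.begin(), files.end(),
                  [&](const std::string& path) {
                      std::ifstream in(path);
                      std::string line;
                      while (std::getline(in, line)) {
                          if (std::regex_search(line, pattern)) {
                              std::lock_guard<std::mutex> lock(outMutex);
                              std::cout << path << ": " << line << "\n";
                          }
                      }
                  });
}
```

If the workload really is CPU-bound, as the ~5% SSD utilization suggests, spreading the regex work over 8 cores is what would close most of the gap between the observed 8 minutes and the hoped-for fraction of that time.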