
Performance: two_percent is faster and more efficient than fzf #561

kimono-koans opened this issue Mar 4, 2024 · 4 comments

kimono-koans commented Mar 4, 2024

If the reason you're not using skim is raw performance, my branch, two_percent, is faster, more efficient, and uses less memory than fzf for largish inputs (~40MB):

```
> hyperfine -i -w 3 "sk --algo=simple --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt" "fzf --algo=v1 --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt" "sk --algo=skim_v2 --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt"
Benchmark 1: sk --algo=simple --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt
  Time (mean ± σ):      81.2 ms ±   2.0 ms    [User: 205.2 ms, System: 18.6 ms]
  Range (min … max):    78.7 ms …  86.2 ms    35 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: fzf --algo=v1 --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt
  Time (mean ± σ):     125.5 ms ±   2.8 ms    [User: 229.7 ms, System: 72.7 ms]
  Range (min … max):   116.7 ms … 130.8 ms    22 runs

  Warning: Ignoring non-zero exit code.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark 3: sk --algo=skim_v2 --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt
  Time (mean ± σ):     135.2 ms ±   2.0 ms    [User: 797.5 ms, System: 18.3 ms]
  Range (min … max):   131.6 ms … 140.4 ms    21 runs

  Warning: Ignoring non-zero exit code.

Summary
  sk --algo=simple --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt ran
    1.55 ± 0.05 times faster than fzf --algo=v1 --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt
    1.66 ± 0.05 times faster than sk --algo=skim_v2 --no-sort --query=hopscotchbubble --exit-0 < ../countwords/kjvbible_x10.txt
```

If anyone else is interested in performance improvements, and in refactoring to reduce long-term memory usage, I've been building on skim actively, and I am eager to work with others who are similarly inclined.

Re: @lotabout's wonderful comment on performance:

> The overhead of skim's item is 88B, while fzf's item is 56B.

For ordinary text inputs, I'm using `Arc<Box<str>>`, which I think is 32B? If there were some way to use `Arc<str>`, that would be even better, but I can't seem to make it work because of trait bounds. For my 10x-duplicated KJV Bible corpus (~43MB), memory usage is about 90MB on my Mac.
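For reference, a sketch of the size difference in plain std Rust (this is not two_percent's actual code): `Arc<str>` can be built directly from a `&str` via `Arc::from`, and it trades a fatter handle for one fewer heap allocation per line.

```rust
use std::mem::size_of;
use std::sync::Arc;

fn main() {
    // Arc<Box<str>> is a thin 8 B pointer on 64-bit, but it costs two
    // allocations: one holding the refcounts plus the Box fat pointer,
    // and a second holding the string bytes themselves.
    println!("Arc<Box<str>>: {} B", size_of::<Arc<Box<str>>>()); // 8

    // Arc<str> is a fat 16 B pointer (ptr + len) with a single
    // allocation: refcounts followed directly by the string bytes.
    println!("Arc<str>: {} B", size_of::<Arc<str>>()); // 16

    // Constructing one from a borrowed line is just Arc::from:
    let line: Arc<str> = Arc::from("some input line");
    assert_eq!(&*line, "some input line");
}
```

The trait friction mentioned above may come from APIs that require a concrete `Sized` item type; `Arc::from(&str)` itself works on stable Rust.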

> In my experience, skim's memory usage is around 1.5x ~ 2x of fzf's.
> A fuzzy finder is an excellent use case for shared memory, but Rust has limited support for it.

On ingest, I have a method for deduplicating lines. This is a significant performance win for inputs with lots of empty or identical lines.
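A minimal sketch of that idea, with a hypothetical `intern_lines` of my own (not the fork's actual implementation): intern each incoming line so duplicates share one `Arc<str>` allocation instead of each carrying its own copy.

```rust
use std::collections::HashSet;
use std::sync::Arc;

// Intern each incoming line: a duplicate costs only a refcount bump on
// the existing Arc<str> rather than a fresh heap allocation.
fn intern_lines(lines: impl IntoIterator<Item = String>) -> Vec<Arc<str>> {
    let mut seen: HashSet<Arc<str>> = HashSet::new();
    lines
        .into_iter()
        .map(|line| match seen.get(line.as_str()) {
            // Seen before (Arc<str>: Borrow<str> lets us probe with &str).
            Some(shared) => Arc::clone(shared),
            None => {
                let shared: Arc<str> = Arc::from(line.as_str());
                seen.insert(Arc::clone(&shared));
                shared
            }
        })
        .collect()
}

fn main() {
    let items = intern_lines(["a", "", "a", ""].map(String::from));
    // Both "a" entries (and both empty entries) share one allocation.
    assert!(Arc::ptr_eq(&items[0], &items[2]));
    assert!(Arc::ptr_eq(&items[1], &items[3]));
}
```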

> Both skim's and fzf's matching algorithms are O(n).

Algorithm switching is broken in the latest version; this is fixed in mine. I have a simple algorithm, much closer to the fzf v1 algo, used for large inputs. See above. You, too, can now write your own super simple algo, or improve mine!
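To give a flavor of what "simple" can mean here, a hypothetical fzf-v1-style scan of my own devising (not the fork's actual `simple` algo, and omitting v1's backward pass that shrinks the match): a single forward O(n) pass that checks whether the query characters appear in order.

```rust
// Hypothetical fzf-v1-flavored scan: one forward pass over the line,
// returning the (start, end) byte span of the in-order match, if any.
fn forward_match(line: &str, query: &str) -> Option<(usize, usize)> {
    if query.is_empty() {
        return Some((0, 0));
    }
    let mut needle = query.chars();
    let mut current = needle.next()?;
    let mut start = None;
    for (idx, ch) in line.char_indices() {
        if ch.eq_ignore_ascii_case(&current) {
            if start.is_none() {
                start = Some(idx); // first query char found here
            }
            match needle.next() {
                Some(next) => current = next,
                // Every query char matched: span ends after this char.
                None => return Some((start?, idx + ch.len_utf8())),
            }
        }
    }
    None // ran out of line before the query was exhausted
}

fn main() {
    assert_eq!(forward_match("hopscotch", "hps"), Some((0, 4)));
    assert_eq!(forward_match("bible", "xyz"), None);
}
```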

> But the size of the structure used to store matching results is different (skim is bad at it).

I've made a few changes which may help.

> Performance of crossbeam's channel seems to be slower than Go's (not sure)? It's claimed to be fast, but it's still one of the bottlenecks in skim's use case.

My experience has been that ingest was not reading in enough bytes at once, and other threads were spin-waiting on the lock.
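One common remedy for both observations is to hand lines across the channel in batches, so per-message overhead is amortized and the matcher side rarely finds the queue empty. A sketch under my own assumptions (`std::sync::mpsc` stands in for crossbeam; `ingest` and `BATCH` are names I made up):

```rust
use std::io::BufRead;
use std::sync::mpsc;

const BATCH: usize = 1024;

// Read lines and send them in batches of up to BATCH, amortizing the
// per-message cost of the channel.
fn ingest(src: impl BufRead, tx: mpsc::Sender<Vec<String>>) {
    let mut batch = Vec::with_capacity(BATCH);
    for line in src.lines().map_while(Result::ok) {
        batch.push(line);
        if batch.len() == BATCH {
            // Hand off the full batch and start a fresh one.
            let full = std::mem::replace(&mut batch, Vec::with_capacity(BATCH));
            if tx.send(full).is_err() {
                return; // receiver hung up
            }
        }
    }
    if !batch.is_empty() {
        let _ = tx.send(batch); // flush the final partial batch
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    ingest(std::io::Cursor::new("one\ntwo\nthree\n"), tx);
    let total: usize = rx.iter().map(|b| b.len()).sum();
    assert_eq!(total, 3);
}
```

Pairing this with a larger read buffer (e.g. `BufReader::with_capacity`) addresses the "not reading enough bytes at once" half of the problem.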

nrdxp commented Apr 1, 2024

Just curious. Is there a reason why you can't/won't merge these changes back into skim?

kimono-koans (Author) commented:

> Just curious. Is there a reason why you can't/won't merge these changes back into skim?

As I stated in my post:

> If anyone else is interested in performance improvements, and in refactoring to reduce long-term memory usage, I've been building on skim actively, and I am eager to work with others who are similarly inclined.

My experience has been that this project is not actively maintained. I think I still have PRs outstanding: https://github.com/lotabout/skim/pulls/kimono-koans.

If anyone wants to assist and contribute here, I'd help.

If no one else is so inclined right now, I'll just keep developing in my own fork/branch.

d3-X-t3r commented:

Just curious, what's the reason for the name "two_percent"? It's kinda hard to remember (as in, what it does) and also to recommend to folks...


kimono-koans commented Apr 30, 2024

In the US, at least, we sell milk with different percentages of milk fat. skim has 0% milk fat. Whole milk has something like 4% milk fat. A popular option is called 2% milk or 2%.

If you install it via `cargo install two_percent` or `cargo deb --install`, the binary/command will be installed as `sk`.
