Upgrade to Lucene 9.9 #2288
Definitely! I'm in the middle of running our regressions, and then planning on merging #2275, which is a big code dump. But let's queue up after that? BTW, are there any new codecs introduced that we've gotta upgrade? The HNSW indexer currently hard codes |
Sure, no hurry.
Indeed, you'll need to replace |
Nice. Is there int16 or float16 as an intermediate step? When we're ready for that, can you and @tteofili work on that together?
Not at this point; we're missing native support for float16 in the JVM.
Sure, I can work with @jpountz on the upgrade (and perhaps on config options for enabling quantization in HNSW in Anserini).
And do you have numbers on the speed/effectiveness tradeoffs vs. full float32? If not, I guess we should rerun https://arxiv.org/abs/2308.14963?
Mileage varies; the main benefit is that you only need one byte per dimension in RAM to get decent performance, vs. 4 bytes per dimension without scalar quantization. So this allows addressing more data with the same amount of RAM. It turns out that we accidentally turned on quantization in Lucene's nightly benchmarks between Nov 13th and yesterday; there was a noticeable ~30% speedup, even though all vectors already fit in memory at 4 bytes per dimension: http://people.apache.org/~mikemccand/lucenebench/VectorSearch.html. @benwtrent might have more info than I do.
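To make the memory argument above concrete, here is a back-of-the-envelope sketch of the 4x RAM savings from int8 scalar quantization. The corpus size and dimensionality are hypothetical examples, not numbers from the thread:

```java
// Rough RAM estimate for holding dense vectors in memory:
// float32 needs 4 bytes per dimension, int8 scalar quantization needs 1.
// numVectors and dims below are hypothetical illustration values.
public class VectorRamEstimate {
    public static void main(String[] args) {
        long numVectors = 10_000_000L; // hypothetical corpus size
        int dims = 768;                // typical dense-retrieval dimensionality

        long float32Bytes = numVectors * dims * Float.BYTES; // 4 bytes/dim
        long int8Bytes = numVectors * dims;                  // 1 byte/dim

        System.out.printf("float32: ~%d GiB%n", float32Bytes >> 30);
        System.out.printf("int8:    ~%d GiB%n", int8Bytes >> 30);
    }
}
```

With the same RAM budget, the quantized index can hold roughly four times as many vectors, which is why the speedup shows up even when the full-precision vectors already fit in memory (less page-cache pressure, better locality).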
For reference, there have been lots of performance improvements in 9.8 and 9.9 for sparse retrieval too; see e.g. http://people.apache.org/~mikemccand/lucenebench/OrHighHigh.html over recent months. One optimization in particular, apache/lucene#12444 (annotation FK on the nightly charts, and a blog post that describes the optimization), should help significantly with cases that are hard for dynamic pruning, such as learned sparse representations. So I would expect much better numbers for Lucene if you were to rerun the benchmarks from https://arxiv.org/abs/2110.11540.
The PR that made the change has a few more numbers on speed and effectiveness: apache/lucene#12582 (comment)
re: HNSW - yup, I suppose faster is a given... my question is more about how much you give up in terms of effectiveness...
Thanks, this is good info. But as I always say... you need a real search task like MS MARCO, BEIR, etc.
The JVM just doesn't support f16. Reading from disk, doing fast vector operations, etc. - it's just bad, even in JDK 21. There have been steps to fix this (finally adding an intrinsic for de/encoding f16), but it's not there yet. We cannot add f16 until there is something in Panama Vector that handles it.
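For context on the intrinsic mentioned above: JDK 20 added scalar conversion methods `Float.floatToFloat16` and `Float.float16ToFloat` for encoding/decoding IEEE 754 binary16 values, but the Panama Vector API has no half-float species, so bulk SIMD math still requires widening to float32 first. A minimal sketch of the scalar round trip (requires JDK 20+):

```java
// JDK 20+ scalar float16 de/encoding; there is no vectorized
// (Panama Vector) counterpart yet, which is the gap described above.
public class Float16Demo {
    public static void main(String[] args) {
        float original = 0.1f;
        short bits = Float.floatToFloat16(original);    // encode to binary16 (10 mantissa bits)
        float roundTripped = Float.float16ToFloat(bits); // decode back, with precision loss
        System.out.println(original + " -> " + roundTripped);
    }
}
```

Note the lossy round trip: binary16 keeps only 10 mantissa bits, so most float32 values do not survive exactly.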
Upgrade completed #2302 |
Lucene 9.9 was just released, let's upgrade Anserini? https://lucene.apache.org/core/corenews.html#apache-lucenetm-990-available