zhongshanhao and others added 30 commits May 10, 2024 18:27
…een set (apache#13343)

Co-authored-by: zhongshanhao <zhongshanhao@bilibili.com>
…#13355)

This commit fixes an issue in the default flat vector scorer supplier whereby subsequent scorers created by the supplier can affect previously created scorers.

The issue is that we're sharing the backing array from the vector values, and overwriting it in subsequent scorers. We just need to use the ordinal to protect the scorer instance from mutation.
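A minimal sketch of the fixed pattern, with illustrative class and field names rather than the actual Lucene scorer classes: the scorer keeps the target ordinal and reads the vector through it, so later scorers created by the same supplier cannot mutate state it depends on.

```java
// Illustrative sketch only, not the actual Lucene scorer classes.
// Buggy pattern: every scorer reuses one shared scratch array for its target
// vector, so creating a second scorer overwrites the first scorer's target.
// Fixed pattern: keep the target ordinal and read the vector through it.
final class OrdinalVectorScorer {
  private final float[][] vectors; // vector values, addressed by ordinal
  private final int targetOrd;     // ordinal of the vector to compare against

  OrdinalVectorScorer(float[][] vectors, int targetOrd) {
    this.vectors = vectors;
    this.targetOrd = targetOrd;
  }

  float score(int candidateOrd) {
    float[] a = vectors[targetOrd];
    float[] b = vectors[candidateOrd];
    float dot = 0f;
    for (int i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
    }
    return dot;
  }
}
```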
…rer (apache#13356)

Depending on how we quantize and then scale, we can edge down below 0 for dot-product scores.

This is exceptionally rare; I have only seen it in extreme circumstances in tests (with random data and low dimensionality).
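A hedged illustration of the usual guard for this (the names and the exact correction term are illustrative, not the Lucene code): the scaled dot-product score is clamped at zero so that rare rounding below zero cannot surface as a negative score.

```java
// Illustrative only: clamp the scaled dot-product score at zero so that rare
// rounding in the quantization correction cannot produce a negative score.
static float scaledDotProductScore(int quantizedDotProduct, float scale, float correction) {
  float score = quantizedDotProduct * scale + correction; // may dip just below 0
  return Math.max(0f, score);
}
```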
Follow up to: apache#13181

I noticed the quantized interface had a slightly different name.

Additionally, testing showed we are inconsistent when there aren't any vectors to score. This makes the response consistent (e.g. null when there aren't any vectors).
apache#12966 (apache#13358)

Reduce duplication in taxonomy facets; always do counts (apache#12966)

This is a large change, refactoring most of the taxonomy facets code and changing internal behaviour, without changing the API. There are specific API changes this sets us up to do later, e.g. retrieving counts from aggregation facets.

1. Move most of the responsibility from TaxonomyFacets implementations to TaxonomyFacets itself. This reduces code duplication and enables future development. Addresses genericity issue mentioned in apache#12553.
2. As a consequence, introduce sparse values to FloatTaxonomyFacets, which previously used dense values always. This issue is part of apache#12576.
3. Compute counts for all taxonomy facets always, which enables us to add an API to retrieve counts for association facets in the future. Addresses apache#11282.
4. As a consequence of having counts, we can check whether we encountered a label while faceting (count > 0), while previously we relied on the aggregation value to be positive. Closes apache#12585.
5. Introduce the idea of doing multiple aggregations in one go, with association facets doing the aggregation they were already doing, plus a count. We can extend to an arbitrary number of aggregations, as suggested in apache#12546.
6. Don't change the API. The only change in behaviour users should notice is the fix for non-positive aggregation values, which were previously discarded.
7. Add tests which were missing for sparse/dense values and non-positive aggregations.
…e#13306)

Elasticsearch (which is based on Lucene) can automatically infer field types for users with its dynamic mapping feature. When users index low-cardinality fields such as gender, age, or status, they often use numbers to represent the values, so ES infers these fields as long and uses BKD as the index for long fields.

Just as apache#541 said, when the data volume grows, building the result set for low-cardinality fields makes CPU usage and load very high, even if we use a boolean query with filter clauses for the low-cardinality fields.

One reason is that a ReentrantLock limits access to LRUQueryCache. The QPS and the cost of these queries are often high, which frequently causes failed lock acquisitions when obtaining the cache, resulting in low concurrency when accessing it.

So I replace the ReentrantLock with a ReentrantReadWriteLock, and only take the read lock when getting the cached results for a query.
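A minimal sketch of that locking change, with illustrative names rather than the actual LRUQueryCache code: lookups take the shared read lock, while inserts and evictions take the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative cache wrapper, not the actual LRUQueryCache implementation.
final class CachedResults<K, V> {
  private final Map<K, V> cache = new HashMap<>();
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  V get(K query) {
    lock.readLock().lock(); // many readers may look up concurrently
    try {
      return cache.get(query);
    } finally {
      lock.readLock().unlock();
    }
  }

  void put(K query, V results) {
    lock.writeLock().lock(); // writers (inserts/evictions) are exclusive
    try {
      cache.put(query, results);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```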
The issue outlines the problem. When we have point value dimensions, segment core readers assume that there will be point files.

However, when allowing soft deletes and a document fails indexing before a point field could be written, this assumption fails. Consequently, the NRT fails to open. I settled on always flushing a point file if the field info says there are point fields, even if there aren't any docs in the buffer.

closes apache#13353
This commit updates the writer to handle the case where there are no values.

Previously (before apache#13369), there was a check that there were some point values before trying to write; this is no longer the case. The code in writeFieldNDims assumes that values is not empty: empty values will result in calculating a negative number of splits, and a negative array size to hold the splits.

The fix is trivial: return null when values is empty; null is an allowable return value from this method. Note: writeField1Dim is able to handle empty values.
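A minimal sketch of the shape of that fix (method signature and types simplified, not the actual BKDWriter code):

```java
// Minimal sketch of the fix, not the actual BKDWriter code: when there are no
// values, return null (an allowed return value) instead of computing splits,
// which would otherwise produce a negative split count and array size.
static Long writeFieldNDims(long pointCount) {
  if (pointCount == 0) {
    return null;
  }
  // ... normal multi-dimensional write path for the non-empty case ...
  return -1L; // placeholder file pointer for this sketch
}
```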
…13374)

This commit fixes a corner case in the ScalarQuantizer when just a single vector is present. I ran into this when updating a test that previously passed successfully with Lucene 9.10 but fails in 9.x.

The score error correction is calculated to be NaN, as there are no score docs or variance.
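A hedged sketch of the kind of guard involved, with hypothetical names and a stand-in formula rather than the actual ScalarQuantizer code: with fewer than two samples there is no variance, so the correction must fall back to zero instead of propagating NaN.

```java
// Illustrative only (hypothetical names, stand-in formula): with a single
// vector there are no score pairs, so variance is undefined and the error
// correction would be NaN unless we fall back to zero.
static double scoreErrorCorrection(double variance, int sampleCount) {
  if (sampleCount < 2 || Double.isNaN(variance)) {
    return 0.0; // not enough data to estimate a correction
  }
  return Math.sqrt(variance / sampleCount); // stand-in formula for this sketch
}
```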
…pache#13376)

The exception happens because the tail postings list block, which is encoded with GroupVInt, had a docID delta that was >= 1<<30 when the postings also store freqs.
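To illustrate why 1<<30 is the boundary, here is a hedged reconstruction of the arithmetic (not the exact encoder code): with freqs, the tail-block doc delta is packed together with a one-bit flag, so any delta of 1<<30 or more overflows the positive range of a 32-bit int.

```java
// Illustrative arithmetic only, a hedged reconstruction of the failure mode.
static void illustrateOverflow() {
  int delta = 1 << 30;           // 1073741824
  int packed = (delta << 1) | 1; // wraps to a negative int: -2147483647
  System.out.println(packed);
}
```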
…ults (apache#13361)

Considering that the graphs of the 2 indices are organized differently, we need to explore a lot of candidates to ensure that both searchers find the same docs. Increasing beamWidth (the number of nearest-neighbor candidates to track while searching the graph for each newly inserted node) from 5 to 10 fixes the test.
This is generally useful for clients building their own Intervals
implementations.
Parsers may sometimes want to create an IntervalsSource that returns no
intervals.  This adds a new factory method to `Intervals` that will create one,
and changes `IntervalBuilder` to use it in place of its custom empty intervals
source.
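A usage sketch, assuming the new factory is named along the lines of `Intervals.noIntervals(String reason)`; treat the exact signature as an assumption.

```java
import org.apache.lucene.queries.intervals.Intervals;
import org.apache.lucene.queries.intervals.IntervalsSource;

final class EmptyIntervalsExample {
  // Usage sketch: a parser that ends up with an empty clause can ask the
  // Intervals factory for a source that matches nothing, instead of rolling its own.
  static IntervalsSource emptyClause() {
    return Intervals.noIntervals("no terms remained after analysis");
  }
}
```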
…3397)

It sums up max scores in a float when it should sum them up in a double like we
do for `Scorer#score()`. Otherwise, max scores may be returned that are less
than actual scores.

This bug was introduced in apache#13343, so it is not released yet.

Closes apache#13371
Closes apache#13396
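A minimal illustration of the precision issue, with illustrative names: summing per-clause maximum scores in a float can round the bound below the true sum, so the bound should be accumulated in a double and converted once at the end.

```java
// Illustrative only: accumulating the per-clause maximum scores in a float can
// round the bound below the true sum, violating "max score >= actual score".
static float maxScoreBound(float[] clauseMaxScores) {
  double sum = 0d; // accumulate in double, as is done for Scorer#score()
  for (float clauseMax : clauseMaxScores) {
    sum += clauseMax;
  }
  return (float) sum; // round once at the end instead of at every addition
}
```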
… on-heap (apache#13402)

Add a MemorySegment Vector scorer - for scoring without copying on-heap.

The vector scorer loads values directly from the backing memory segment when available. Otherwise, if the vector data spans across segments, the scorer copies the vector data on-heap.

A benchmark shows ~2x performance improvement of this scorer over the default copy-on-heap scorer.

The scorer currently only operates on vectors with an element size of byte. We can evaluate if and how to support floats separately.
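A hedged sketch of the two paths described, using the Java 21 `java.lang.foreign` API; the class and method names and structure are illustrative, not the actual Lucene scorer.

```java
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Illustrative sketch only (Java 21+ java.lang.foreign), not the actual Lucene scorer.
final class ByteVectorScoring {

  // Fast path: read byte vectors directly from the backing memory segment.
  static int dotProduct(MemorySegment data, long aOffset, long bOffset, int dims) {
    int sum = 0;
    for (int i = 0; i < dims; i++) {
      sum += data.get(ValueLayout.JAVA_BYTE, aOffset + i)
           * data.get(ValueLayout.JAVA_BYTE, bOffset + i);
    }
    return sum;
  }

  // Fallback: copy both vectors on-heap first, e.g. when the data spans segments.
  static int dotProductOnHeap(MemorySegment data, long aOffset, long bOffset, int dims) {
    byte[] a = data.asSlice(aOffset, dims).toArray(ValueLayout.JAVA_BYTE);
    byte[] b = data.asSlice(bOffset, dims).toArray(ValueLayout.JAVA_BYTE);
    int sum = 0;
    for (int i = 0; i < dims; i++) {
      sum += a[i] * b[i];
    }
    return sum;
  }
}
```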
Add LongObjectHashMap and replace Map<Long, Object>.
Add LongIntHashMap and replace Map<Long, Integer>.
Add HPPC dependency to join and spatial modules for primitive values float and double.
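A small hedged example of the boxing this avoids (HPPC-style primitive maps; the import shown is an assumption, since Lucene also carries its own internal fork of these classes):

```java
import java.util.HashMap;
import java.util.Map;
import com.carrotsearch.hppc.LongIntHashMap; // package is an assumption for this sketch

final class PrimitiveMapExample {
  static void example() {
    // A boxed map allocates a Long (and Integer) object per entry:
    Map<Long, Integer> boxed = new HashMap<>();
    boxed.put(42L, 7);

    // The primitive map stores longs and ints directly, avoiding per-entry objects:
    LongIntHashMap primitive = new LongIntHashMap();
    primitive.put(42L, 7);
    int count = primitive.get(42L); // 7
  }
}
```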
…he#13400)

Also rename lucene.document.LongHashSet to DocValuesLongHashSet.
new 'passageSortComparator' option to allow sorting other than offset order

(cherry picked from commit 8773725)
…3405)

While doing an unrelated refactoring, I got hit by this unchecked cast, which
is incorrect when the presearcher query produces some specialized `BulkScorer`.
(cherry picked from commit 40f674c)

Resolved Conflicts:
	lucene/monitor/src/java/org/apache/lucene/monitor/ForceNoBulkScoringQuery.java
* avoid WrapperDownloader if we have the JAR
* don't specify --source
It is more specific than needed, and some JDK/configs may complain about an incompatibility with --release.

From apache/solr#2419
ChrisHegarty and others added 14 commits December 9, 2024 15:18
…e#13911)

We updated TestGenerateBwcIndices to create int7 HNSW indices instead of int8 with apache#13874.
The corresponding Python code that is part of the release wizard needs to be updated accordingly.
) (apache#14736)

This query assumed that the missing value is always of type long.
This modifies it to also allow type int.
A test that fails without this change is added.

Backport for apache#14732
Add ValueSource.FromDoubleValuesSource.getSortField
This reduces indirection and supports needsScores correctly (fixes a bug).

(cherry picked from commit b0f9923)
* use FloatArrayList/IntArrayList to replace float[]/int[]

* use getScores(int i) to replace scores()

* add more tests

* add change log and change the init value

* update OnHeapHnswGraph ramBytesUsed method

* improve

* add MaxSizedIntArrayList

* add MaxSizedFloatArrayList

* add MaxSizedFloatArrayList

* fixed tests

* revert
* Disable HNSW connectedComponents (apache#14214)

* Adjusting OnHeapHnswGraph RAM estimation to be incremental during build

* changes

* addressing pr comments

* removing extra logging
* Disable HNSW connectedComponents (apache#14214)

* add changes