
Commit c1aadf3

Update for redisvl 0.10.0 (#2259)
Co-authored-by: redisdocsapp[bot] <177626021+redisdocsapp[bot]@users.noreply.github.com>
1 parent 1c88bfb commit c1aadf3

File tree

3 files changed: +274 −0 lines changed


content/develop/ai/redisvl/api/_index.md

Lines changed: 3 additions & 0 deletions
@@ -19,13 +19,16 @@ Reference documentation for the RedisVL API.
  * [Search Index Classes](searchindex/)
  * [SearchIndex](searchindex/#searchindex)
  * [AsyncSearchIndex](searchindex/#asyncsearchindex)
+ * [Vector](vector/)
+ * [Vector](vector/#id1)
  * [Query](query/)
  * [VectorQuery](query/#vectorquery)
  * [VectorRangeQuery](query/#vectorrangequery)
  * [HybridQuery](query/#hybridquery)
  * [TextQuery](query/#textquery)
  * [FilterQuery](query/#filterquery)
  * [CountQuery](query/#countquery)
+ * [MultiVectorQuery](query/#multivectorquery)
  * [Filter](filter/)
  * [FilterExpression](filter/#filterexpression)
  * [Tag](filter/#tag)

content/develop/ai/redisvl/api/query.md

Lines changed: 239 additions & 0 deletions
@@ -1790,3 +1790,242 @@ Return the query parameters.
#### `property query: BaseQuery`

Return self as the query object.

## MultiVectorQuery

### `class MultiVectorQuery(vectors, return_fields=None, filter_expression=None, num_results=10, dialect=2)`

Bases: `AggregationQuery`

MultiVectorQuery allows for search over multiple vector fields in a document simultaneously.
The final score will be a weighted combination of the individual vector similarity scores
following the formula:

score = (w_1 \* score_1 + w_2 \* score_2 + w_3 \* score_3 + …)

Vectors may be of different size and datatype, but must be indexed using the 'cosine' distance_metric.

```python
from redisvl.query import MultiVectorQuery, Vector
from redisvl.index import SearchIndex

index = SearchIndex.from_yaml("path/to/index.yaml")

vector_1 = Vector(
    vector=[0.1, 0.2, 0.3],
    field_name="text_vector",
    dtype="float32",
    weight=0.7,
)
vector_2 = Vector(
    vector=[0.5, 0.5],
    field_name="image_vector",
    dtype="bfloat16",
    weight=0.2,
)
vector_3 = Vector(
    vector=[0.1, 0.2, 0.3],
    field_name="text_vector",
    dtype="float64",
    weight=0.5,
)

query = MultiVectorQuery(
    vectors=[vector_1, vector_2, vector_3],
    filter_expression=None,
    num_results=10,
    return_fields=["field1", "field2"],
    dialect=2,
)

results = index.query(query)
```

Instantiates a MultiVectorQuery object.

* **Parameters:**
  * **vectors** (*Union* *[*[*Vector*]({{< relref "vector/#vector" >}}) *,* *List* *[*[*Vector*]({{< relref "vector/#vector" >}}) *]* *]*) – The Vector objects to use for vector similarity search.
  * **return_fields** (*Optional* *[* *List* *[* *str* *]* *]* *,* *optional*) – The fields to return. Defaults to None.
  * **filter_expression** (*Optional* *[* *Union* *[* *str* *,* [*FilterExpression*]({{< relref "filter/#filterexpression" >}}) *]* *]*) – The filter expression to use. Defaults to None.
  * **num_results** (*int* *,* *optional*) – The number of results to return. Defaults to 10.
  * **dialect** (*int* *,* *optional*) – The Redis dialect version. Defaults to 2.
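
For intuition, a minimal sketch of how the weighted combination in the formula above plays out for the three vectors in the example. The per-field similarity scores below are made-up illustrative numbers, not actual query output:

```python
# Illustrative per-field cosine similarity scores (made-up values).
score_text_f32 = 0.91   # text_vector (float32), weight 0.7
score_image    = 0.40   # image_vector, weight 0.2
score_text_f64 = 0.85   # text_vector (float64), weight 0.5

# Combined score per the formula: w_1 * score_1 + w_2 * score_2 + w_3 * score_3
combined = 0.7 * score_text_f32 + 0.2 * score_image + 0.5 * score_text_f64
print(combined)  # ~1.142
```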

#### `add_scores()`

If set, includes the score as an ordinary field of the row.

* **Return type:**
  *AggregateRequest*

#### `apply(**kwexpr)`

Specify one or more projection expressions to add to each result.

### `Parameters`

- **kwexpr**: One or more key-value pairs for a projection. The key is
  the alias for the projection, and the value is the projection
  expression itself, for example `apply(square_root="sqrt(@foo)")`.

* **Return type:**
  *AggregateRequest*
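
A minimal sketch of chaining `apply()` onto the query from the example above. The `@price` field and the `price_with_tax` alias are hypothetical and not part of the example schema:

```python
# Hypothetical sketch: add a computed projection to each result row.
# `query` and `index` come from the MultiVectorQuery example above;
# "@price" is an assumed numeric field available in the pipeline.
query.apply(price_with_tax="@price * 1.08")
results = index.query(query)  # each row also carries "price_with_tax"
```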

#### `dialect(dialect)`

Add a dialect field to the aggregate command.

- **dialect** - dialect version to execute the query under

* **Parameters:**
  **dialect** (*int*)
* **Return type:**
  *AggregateRequest*

#### `filter(expressions)`

Specify a filter for post-query results using predicates relating to
values in the result set.

### `Parameters`

- **expressions**: The filter predicates to apply. This can either be a single
  string or a list of strings.

* **Parameters:**
  **expressions** (*str* *|* *List* *[* *str* *]*)
* **Return type:**
  *AggregateRequest*
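
A short sketch of post-query filtering on the example query. The `@price` field here is hypothetical:

```python
# Hypothetical sketch: keep only rows whose price is below 100.
# The predicate is evaluated against values already in the result set.
query.filter("@price < 100")
results = index.query(query)
```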

#### `group_by(fields, *reducers)`

Specify by which fields to group the aggregation.

### `Parameters`

- **fields**: Fields to group by. This can either be a single string
  or a list of strings. In both cases, the field should be specified as
  @field.
- **reducers**: One or more reducers. Reducers may be found in the
  aggregation module.

* **Parameters:**
  * **fields** (*List* *[* *str* *]*)
  * **reducers** (*Reducer* *|* *List* *[* *Reducer* *]*)
* **Return type:**
  *AggregateRequest*
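
A sketch of grouping the combined results. The `@category` field is hypothetical, and the reducer helpers are assumed to come from redis-py's `reducers` module:

```python
# Hypothetical sketch: count matching documents per category.
from redis.commands.search import reducers  # redis-py reducer helpers

query.group_by("@category", reducers.count().alias("num_docs"))
results = index.query(query)  # one row per category with a "num_docs" count
```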

#### `limit(offset, num)`

Sets the limit for the most recent group or query.

If no group has been defined yet (via group_by()) then this sets
the limit for the initial pool of results from the query. Otherwise,
this limits the number of items operated on from the previous group.

Setting a limit on the initial search results may be useful when
attempting to execute an aggregation on a sample of a large data set.

### `Parameters`

- **offset**: Result offset from which to begin paging
- **num**: Number of results to return

Example of limiting the initial results:

``AggregateRequest("@sale_amount:[10000, inf]").limit(0, 10).group_by("@state", r.count())``

will only group by the states found in the first 10 results of the
query @sale_amount:[10000, inf]. On the other hand,

``AggregateRequest("@sale_amount:[10000, inf]").limit(0, 1000).group_by("@state", r.count()).limit(0, 10)``

will group all the results matching the query, but only return the
first 10 groups.

If you only wish to return a *top-N* style query, consider using
sort_by() instead.

* **Parameters:**
  * **offset** (*int*)
  * **num** (*int*)
* **Return type:**
  *AggregateRequest*

#### `load(*fields)`

Indicate the fields to be returned in the response. These fields are
returned in addition to any others implicitly specified.

### `Parameters`

- **fields**: If no fields are specified, all fields will be loaded.
  Otherwise, fields should be given in the format of @field.

* **Parameters:**
  **fields** (*str*)
* **Return type:**
  *AggregateRequest*
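
A one-line sketch of loading specific fields on the example query, reusing the illustrative `field1`/`field2` names from the example above:

```python
# Load the two illustrative fields, given in "@field" format.
query.load("@field1", "@field2")
```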

#### `scorer(scorer)`

Use a different scoring function to evaluate document relevance.
Default is TFIDF.

* **Parameters:**
  **scorer** (*str*) – The scoring function to use
  (e.g. TFIDF.DOCNORM or BM25)
* **Return type:**
  *AggregateRequest*

#### `sort_by(*fields, **kwargs)`

Indicate how the results should be sorted. This can also be used for
*top-N* style queries.

### `Parameters`

- **fields**: The fields by which to sort. This can be either a single
  field or a list of fields. If you wish to specify order, you can
  use the Asc or Desc wrapper classes.
- **max**: Maximum number of results to return. This can be
  used instead of LIMIT and is also faster.

Example of sorting by foo ascending and bar descending:

``sort_by(Asc("@foo"), Desc("@bar"))``

Return the top 10 customers:

``AggregateRequest().group_by("@customer", r.sum("@paid").alias(FIELDNAME)).sort_by(Desc("@paid"), max=10)``

* **Parameters:**
  **fields** (*str*)
* **Return type:**
  *AggregateRequest*
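
A short sketch of sorting the example query's combined results. The `@price` field is hypothetical, and the `Desc` wrapper is assumed to be the one redis-py provides for aggregations:

```python
# Hypothetical sketch: top-5 style query, descending by an assumed numeric field.
from redis.commands.search.aggregation import Desc  # order wrapper used by sort_by()

query.sort_by(Desc("@price"), max=5)
results = index.query(query)
```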

#### `with_schema()`

If set, the schema property will contain a list of [field, type]
entries in the result object.

* **Return type:**
  *AggregateRequest*

#### `property params: Dict[str, Any]`

Return the parameters for the aggregation.

* **Returns:**
  The parameters for the aggregation.
* **Return type:**
  Dict[str, Any]
Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
---
linkTitle: Vector
title: Vector
aliases:
- /integrate/redisvl/api/vector
---

The Vector class in RedisVL is a container that encapsulates a numerical vector, its datatype, the corresponding index field name, and an optional importance weight. It is used when constructing multi-vector queries with the MultiVectorQuery class.

## Vector

### `class Vector(*, vector, field_name, dtype='float32', weight=1.0)`

Simple object containing the necessary arguments to perform a multi-vector query.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be
validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

* **Parameters:**
  * **vector** (*List* *[* *float* *]* *|* *bytes*)
  * **field_name** (*str*)
  * **dtype** (*str*)
  * **weight** (*float*)

#### `model_config: ClassVar[ConfigDict] = {}`

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
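
A minimal usage sketch of constructing Vector objects and passing them to MultiVectorQuery. The field names, dimensions, and weights below are illustrative only:

```python
from redisvl.query import MultiVectorQuery, Vector

# Hypothetical fields: weight the text vector more heavily than the image vector
# when the similarity scores are combined.
text_vec = Vector(
    vector=[0.12, 0.34, 0.56],
    field_name="text_vector",
    dtype="float32",
    weight=0.8,
)
image_vec = Vector(
    vector=[0.9, 0.1],
    field_name="image_vector",
    weight=0.2,  # dtype defaults to "float32"
)

query = MultiVectorQuery(vectors=[text_vec, image_vec], num_results=5)
```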
