README.md: 4 additions & 2 deletions
@@ -42,11 +42,12 @@ RedisVL has a host of powerful features designed to streamline your vector datab
1. **Index Management**: RedisVL allows indices to be created, updated, and deleted with ease. A schema for each index can be defined in YAML or directly in Python code and used throughout the lifetime of the index (see the sketch after this list).
- 2. **Vector Creation**: RedisVL integrates with OpenAI and other embedding providers to make the process of creating vectors straightforward.
+ 2. **Embedding Creation**: RedisVL integrates with OpenAI and other text embedding providers to simplify the process of vectorizing unstructured data. *Image support coming soon.*
3. **Vector Search**: RedisVL provides robust search capabilities that enable you to query vectors synchronously and asynchronously. Hybrid queries that utilize tag, geographic, numeric, and other filters like full-text search are also supported (illustrated in the sketch after this list).
- 4. **Semantic Caching**: ``LLMCache`` is a semantic caching interface built directly into RedisVL. It allows for the caching of generated output from LLM models like GPT-3 and others. As semantic search is used to check the cache, a threshold can be set to determine if the cached result is relevant enough to be returned. If not, the model is called and the result is cached for future use. This can increase the QPS and reduce the cost of using LLM models.
+ 4. **Powerful Abstractions**
+    - **Semantic Caching**: `LLMCache` is a semantic caching interface built directly into RedisVL. It allows for the caching of generated output from LLMs like GPT-3 and others. As semantic search is used to check the cache, a threshold can be set to determine if the cached result is relevant enough to be returned. If not, the model is called and the result is cached for future use. This can increase the QPS and reduce the cost of using LLM models in production.
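As a rough illustration of the index management and vector search features above, a minimal sketch follows. It assumes a local Redis instance and a hypothetical `schema.yaml` file; the module paths and method names (`SearchIndex.from_yaml`, `connect`, `create`, `VectorQuery`, `index.query`) are assumptions about the RedisVL API, not code taken from this changeset.

```python
# Minimal sketch (assumed API): create an index from a YAML schema, then run a
# vector query against it.
from redisvl.index import SearchIndex    # assumed module path
from redisvl.query import VectorQuery    # assumed module path

# The schema could also be defined directly in Python as a dict (feature 1).
index = SearchIndex.from_yaml("schema.yaml")   # hypothetical schema file
index.connect("redis://localhost:6379")        # assumed local Redis instance
index.create(overwrite=True)

# Vector search (feature 3): retrieve the 3 nearest neighbors of a query vector.
query = VectorQuery(
    vector=[0.1, 0.2, 0.3],         # an embedding produced by a vectorizer
    vector_field_name="embedding",  # hypothetical field name from the schema
    num_results=3,
)
results = index.query(query)
```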
## 😊 Quick Start
@@ -125,6 +126,7 @@ The ``LLMCache`` Interface in RedisVL can be used as follows.
```python
from redisvl.llmcache.semantic import SemanticCache
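
# A hedged sketch of usage, not taken from this changeset: the constructor
# arguments (redis_url, threshold) and the store()/check() methods are
# assumptions about the SemanticCache interface.
cache = SemanticCache(
    redis_url="redis://localhost:6379",  # assumed local Redis instance
    threshold=0.9,                       # semantic similarity threshold
)

# Store an LLM response, then check the cache with a semantically similar prompt.
cache.store("What is the capital of France?", "Paris")
cache.check("What is the capital city of France?")  # returns the cached answer if above threshold
```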
docs/examples/openai_qna.ipynb: 3 additions & 3 deletions
@@ -504,7 +504,7 @@
"source": [
"### Embedding Creation\n",
"\n",
- "With the text broken up into chunks, we can create embedding with the RedisVL OpenAIProvider. This provider uses the OpenAI API to create embeddings for the text. The code below shows how to create embeddings for the text chunks."
+ "With the text broken up into chunks, we can create embeddings with the RedisVL `OpenAITextVectorizer`. This provider uses the OpenAI API to create embeddings for the text. The code below shows how to create embeddings for the text chunks."