58 changes: 29 additions & 29 deletions README.md
@@ -42,15 +42,15 @@ Indices can be defined through yaml specification that corresponds directly to t

```yaml
index:
    name: users
    name: user_index
    storage_type: hash
    prefix: "user:"
    key_field: "id"
    prefix: users
    key_field: user

fields:
    # define tag fields
    tag:
        - name: users
        - name: user
        - name: job
        - name: credit_score
    # define numeric fields
@@ -65,7 +65,7 @@ fields:

This would correspond to a dataset that looks something like:

| users | age | job | credit_score | user_embedding |
| user | age | job | credit_score | user_embedding |
|-------|-----|------------|--------------|-----------------------------------|
| john | 1 | engineer | high | \x3f\x8c\xcc\x3f\x8c\xcc?@ |
| mary | 2 | doctor | low | \x3f\x8c\xcc\x3f\x8c\xcc?@ |
@@ -74,6 +74,8 @@ This would correspond to a dataset that looked something like

With the schema, the RedisVL library can be used to create an index, load vectors, and perform vector searches:
```python
import numpy as np
import pandas as pd

from redisvl.index import SearchIndex
from redisvl.query import create_vector_query

Expand All @@ -82,47 +84,45 @@ index = SearchIndex.from_yaml("./users_schema.yml"))
index.connect("redis://localhost:6379")
index.create()

index.load(pd.read_csv("./users.csv").to_records())
index.load(pd.read_csv("./users.csv").to_dict("records"))

query = create_vector_query(
    ["users", "age", "job", "credit_score"],
    ["user", "age", "job", "credit_score"],
    number_of_results=2,
    vector_field_name="user_embedding",
)

query_vector = np.array([0.1, 0.1, 0.5]).tobytes()
results = index.search(query, query_params={"vector": query_vector})


```
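For readers trying the snippet, the shape of ``results`` isn't shown above; assuming ``index.search`` returns the underlying redis-py search ``Result``, the two nearest users can be read off its ``docs`` attribute. A minimal sketch:

```python
# illustrative only: assumes `index.search` returns the redis-py search Result,
# where each matching document exposes the stored fields as attributes
for doc in results.docs:
    print(doc.user, doc.age, doc.job, doc.credit_score)
```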

### Semantic cache

The ``LLMCache`` interface in RedisVL can be used as follows.

```python
# init open ai client
import openai
openai.api_key = "sk-xxx"

from redisvl.llmcache.semantic import SemanticCache
cache = SemanticCache(redis_host="localhost", redis_port=6379, redis_password=None)

def ask_gpt3(question):
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=question,
        max_tokens=100
    )
    return response.choices[0].text.strip()

def answer_question(question: str):
    results = cache.check(question)
    if results:
        return results[0]
    else:
        answer = ask_gpt3(question)
        cache.store(question, answer)
        return answer
cache = SemanticCache(
    redis_url="redis://localhost:6379",
    threshold=0.9,  # semantic similarity threshold
)

# check if the cache has a result for a given query
cache.check("What is the capital of France?")
[ ]

# store a result for a given query
cache.store("What is the capital of France?", "Paris")

# Cache will now have the query
cache.check("What is the capital of France?")
["Paris"]

# Cache will return the result if the query is similar enough
cache.check("What really is the capital of France?")
["Paris"]
```
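The removed snippet above wrapped the cache around an OpenAI call; the same wrapper pattern still fits the new constructor. Below is a minimal sketch using only the ``check``/``store`` calls shown above (``ask_llm`` is a stand-in for any LLM call and is not part of RedisVL):

```python
from redisvl.llmcache.semantic import SemanticCache

cache = SemanticCache(
    redis_url="redis://localhost:6379",
    threshold=0.9,  # semantic similarity threshold
)

def answer_question(question: str) -> str:
    # return a semantically similar cached answer when one exists
    cached = cache.check(question)
    if cached:
        return cached[0]
    # otherwise call the LLM (placeholder) and cache the new answer
    answer = ask_llm(question)
    cache.store(question, answer)
    return answer
```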


11 changes: 11 additions & 0 deletions conftest.py
@@ -1,5 +1,6 @@
import os
import pytest
import asyncio

from redisvl.utils.connection import (
    get_async_redis_connection,
@@ -23,3 +24,13 @@ def client():
@pytest.fixture
def openai_key():
    return os.getenv("OPENAI_KEY")


@pytest.fixture(scope="session")
def event_loop():
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
    yield loop
    loop.close()
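A brief aside on the fixture above: pytest-asyncio looks for an ``event_loop`` fixture and runs every async test on the loop it yields, so scoping it to the session keeps a single loop alive for the whole run. A minimal sketch of a test that would execute on that loop, assuming pytest-asyncio is installed and the asyncio marker is enabled:

```python
import asyncio

import pytest


@pytest.mark.asyncio
async def test_uses_session_loop(event_loop):
    # the loop running this coroutine should be the one yielded by the
    # session-scoped `event_loop` fixture in conftest.py
    assert asyncio.get_running_loop() is event_loop
```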
3 changes: 2 additions & 1 deletion docs/conf.py
@@ -102,6 +102,7 @@
add_module_names = False

nbsphinx_execute = 'never'
jupyter_execute_notebooks = "off"

# -- Options for autosummary/autodoc output ------------------------------------
autosummary_generate = True
@@ -129,4 +130,4 @@
"android-chrome-192x192.png",
# apple icons
{"rel": "apple-touch-icon", "href": "apple-touch-icon.png"},
]
]
2 changes: 1 addition & 1 deletion docs/user_guide/getting_started_01.ipynb
@@ -16,7 +16,7 @@
"4. Performing queries\n",
"\n",
"Before running this notebook, be sure to\n",
"1. Gave installed ``rvl`` and have that environment active for this notebook.\n",
"1. Have installed ``redisvl`` and have that environment active for this notebook.\n",
"2. Have a running Redis instance with RediSearch > 2.4 running."
]
},
3 changes: 1 addition & 2 deletions docs/user_guide/index.md
@@ -26,8 +26,7 @@ embedding_creation
```

```{toctree}
:maxdepth: 2
:caption: LLMCache

llm_cache
llmcache_03
```
10 changes: 0 additions & 10 deletions docs/user_guide/llm_cache.rst

This file was deleted.
