
Releases: zilliztech/GPTCache

v0.1.24

15 May 14:10
dd0f16f

🎉 Introduction to new functions of GPTCache

  1. Support the LangChain embedding
from gptcache.embedding import LangChain
from langchain.embeddings.openai import OpenAIEmbeddings

test_sentence = 'Hello, world.'
embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name")
encoder = LangChain(embeddings=embeddings)
embed = encoder.to_embeddings(test_sentence)
  2. Add the GPTCache client
from gptcache import Client

client = Client()
client.put("Hi", "Hi back")
ans = client.get("Hi")
  3. Support pgvector as a vector store
from gptcache.manager import manager_factory

data_manager = manager_factory("sqlite,pgvector", vector_params={"dimension": 10})
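
For context, the resulting data manager plugs into cache.init the same way as in the other releases below; a minimal sketch, assuming the ONNX embedding used elsewhere in these notes (the vector dimension must match the embedding model):

from gptcache import cache
from gptcache.embedding import Onnx
from gptcache.manager import manager_factory

# a sketch, not the canonical setup: pgvector storage via manager_factory,
# with the vector dimension taken from the embedding model
onnx = Onnx()
data_manager = manager_factory("sqlite,pgvector",
                               vector_params={"dimension": onnx.dimension})
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
)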
  4. Add the GPTCache server doc

reference: https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md#Build-GPTCache-server

Full Changelog: 0.1.23...0.1.24

v0.1.23

11 May 14:30

🎉 Introduction to new functions of GPTCache

  1. Support sessions for LangChainLLMs
from langchain import OpenAI
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.session import Session

session = Session(name="sqlite-example")
llm = LangChainLLMs(llm=OpenAI(temperature=0), session=session)
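
The wrapped LLM is then invoked like any other LangChain LLM; the question below is just an illustrative prompt:

question = "what do you think about chatgpt"
answer = llm(question)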
  2. Optimize the summarization context process
from gptcache import cache
from gptcache.processor.context.summarization_context import SummarizationContextProcess

context_process = SummarizationContextProcess()
cache.init(
    pre_embedding_func=context_process.pre_process,
)
  3. Add the BabyAGI bootcamp

details: https://github.com/zilliztech/GPTCache/blob/main/docs/bootcamp/langchain/baby_agi.ipynb

Full Changelog: 0.1.22...0.1.23

v0.1.22

07 May 06:43

🎉 Introduction to new functions of GPTCache

  1. Process the dialog context through the context-processing interface, which currently supports two methods: summarization and selective context
import transformers
from gptcache.processor.context.summarization_context import SummarizationContextProcess
from gptcache.processor.context.selective_context import SelectiveContextProcess
from gptcache import cache

summarizer = transformers.pipeline("summarization", model="facebook/bart-large-cnn")
context_process = SummarizationContextProcess(summarizer, None, 512)  # summarizer, tokenizer, target context length
cache.init(
    pre_embedding_func=context_process.pre_process,
)

context_process = SelectiveContextProcess()
cache.init(
    pre_embedding_func=context_process.pre_process,
)

Full Changelog: 0.1.21...0.1.22

v0.1.21

29 Apr 06:24
6a1e2e8

🎉 Introduction to new functions of GPTCache

  1. Support the temperature parameter
from gptcache.adapter import openai

question = "what do you think about chatgpt"

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=1.0,  # change the temperature here
    messages=[{
        "role": "user",
        "content": question
    }],
)
  2. Add the session layer
from gptcache.adapter import openai
from gptcache.session import Session

session = Session(name="my-session")
question = "what do you think about chatgpt"
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": question}
    ],
    session=session
)

details: https://github.com/zilliztech/GPTCache/tree/main/examples#How-to-run-with-session

  3. Support configuring the cache with a YAML file for the server
from gptcache.adapter.api import init_similar_cache_from_config

init_similar_cache_from_config(config_dir="cache_config_template.yml")

config file template: https://github.com/zilliztech/GPTCache/blob/main/cache_config_template.yml

  4. Adapt the Dolly model
from gptcache.adapter.dolly import Dolly

question = "what do you think about chatgpt"
llm = Dolly.from_model(model="databricks/dolly-v2-3b")
answer = llm(question)

Full Changelog: 0.1.20...0.1.21

v0.1.20

26 Apr 15:11
268e32c

🎉 Introduction to new functions of GPTCache

  1. Support the temperature parameter, as in OpenAI

A non-negative sampling temperature, defaulting to 0. A higher temperature makes the output more random; a lower temperature makes the output more deterministic and confident.
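
As a sketch of how the parameter is passed, mirroring the adapter call shown in the v0.1.21 notes above (the prompt is illustrative):

from gptcache.adapter import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.0,  # 0 keeps the output deterministic; higher values add randomness
    messages=[{"role": "user", "content": "what do you think about chatgpt"}],
)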

  2. Add the llama.cpp adapter
from gptcache.adapter.llama_cpp import Llama

llm = Llama('./models/7B/ggml-model.bin')
answer = llm(prompt="what do you think about chatgpt")

Full Changelog: 0.1.19...0.1.20

v0.1.19

24 Apr 14:50
f3406ee

🎉 Introduction to new functions of GPTCache

  1. Add the Stability SDK adapter (text-to-image)
import os
import time

from gptcache import cache
from gptcache.processor.pre import get_prompt
from gptcache.adapter.stability_sdk import StabilityInference, generation
from gptcache.embedding import Onnx
from gptcache.manager.factory import manager_factory
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# init gptcache
onnx = Onnx()
data_manager = manager_factory('sqlite,faiss,local',
                               data_dir='./',
                               vector_params={'dimension': onnx.dimension},
                               object_params={'path': './images'})
cache.init(
    pre_embedding_func=get_prompt,
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)

api_key = os.getenv('STABILITY_KEY', 'key-goes-here')

stability_api = StabilityInference(
    key=api_key,  # API key reference
    verbose=False,  # print debug messages
    engine='stable-diffusion-xl-beta-v2-2-2',  # engine to use for generation
)

start = time.time()
answers = stability_api.generate(
    prompt='a cat sitting besides a dog',
    width=256,
    height=256,
)
print(f'generation took {time.time() - start:.2f}s')
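
The imported generation module comes into play when unpacking the results; a minimal sketch of saving the returned image, assuming the usual stability_sdk artifact layout:

import io
from PIL import Image

for resp in answers:
    for artifact in resp.artifacts:
        if artifact.type == generation.ARTIFACT_IMAGE:  # skip non-image artifacts
            img = Image.open(io.BytesIO(artifact.binary))
            img.save('cat_and_dog.png')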

stability reference: https://platform.stability.ai/docs/features/text-to-image

  2. Add the MiniGPT-4 adapter

Note: it cannot be used directly; it needs to work together with the MiniGPT-4 source code. See Vision-CAIR/MiniGPT-4#136.

What's Changed

  • Unify the format of manager variable names in manager_factory method by @SimFG in #276
  • Adapt stability_sdk by @jaelgu in #277
  • Add minigpt4 adapter by @shiyu22 in #274
  • Update docs by @jaelgu in #278
  • Make np evaluation positively correlated with the similarity. by @wxywb in #280
  • Add temperature_softmax in post processor by @jaelgu in #282
  • Update the version to 0.1.19 by @SimFG in #283

Full Changelog: 0.1.18...0.1.19

v0.1.18

23 Apr 14:27
745aa6e

🎉 Introduction to new functions of GPTCache

  1. Add the VQA bootcamp

reference: https://github.com/zilliztech/GPTCache/tree/main/docs/bootcamp/replicate

  2. Add two Streamlit multimodal demos

reference: https://github.com/zilliztech/GPTCache/tree/main/docs/bootcamp/streamlit

  3. Add the ViT image embedding function
import requests
from PIL import Image
from gptcache.embedding import ViT

url = 'https://raw.githubusercontent.com/zilliztech/GPTCache/main/docs/GPTCache.png'
image = Image.open(requests.get(url, stream=True).raw)  # read the image URL as a PIL.Image
encoder = ViT(model="google/vit-base-patch16-384")
embed = encoder.to_embeddings(image)
  4. Add the init_similar_cache function to the GPTCache API module
from gptcache.adapter.api import init_similar_cache

init_similar_cache("cache_data")
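
Once initialized, the cache can be exercised with the put/get helpers from the same API module (introduced in the v0.1.15 notes below); with a similarity cache, lookups match semantically rather than by exact text:

from gptcache.adapter.api import put, get

put("hello", "foo")
print(get("hello"))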
  5. The simple GPTCache server now provides a similarity cache
  • clone the GPTCache repo: git clone https://github.com/zilliztech/GPTCache.git
  • install the gptcache package: pip install gptcache
  • run the GPTCache server: cd gptcache_server && python server.py
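
Once the server is running, it can be exercised over HTTP; a minimal sketch in Python, assuming the default port 8000 and the endpoints described in the v0.1.16 notes below:

import requests

# write an answer into the cache for the prompt "hello", then read it back
requests.put('http://localhost:8000', params={'prompt': 'hello'}, data='receive a hello message')
answer = requests.get('http://localhost:8000', params={'prompt': 'hello'})
print(answer.text)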

Full Changelog: 0.1.17...0.1.18

v0.1.17

20 Apr 15:41
a366e43

🎉 Introduction to new functions of GPTCache

  1. Add the Timm image embedding
import requests
from PIL import Image
from gptcache.embedding import Timm

url = 'https://raw.githubusercontent.com/zilliztech/GPTCache/main/docs/GPTCache.png'
image = Image.open(requests.get(url, stream=True).raw)  # read the image URL as a PIL.Image
encoder = Timm(model='resnet18')
image_tensor = encoder.preprocess(image)
embed = encoder.to_embeddings(image_tensor)
  2. Add the Replicate adapter for VQA (visual question answering) (experimental)
from gptcache.adapter import replicate

question = "what is in the image?"
image_path = './GPTCache.png'  # path to a local image file

replicate.run(
    "andreasjansson/blip-2:xxx",
    input={
        "image": open(image_path, 'rb'),
        "question": question
    }
)
  3. Support flushing data to prevent accidental loss of in-memory data
from gptcache import cache

cache.flush()
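
To avoid forgetting the flush, one option is to register it as a process exit hook; this is plain Python, not a GPTCache-specific API:

import atexit

from gptcache import cache

atexit.register(cache.flush)  # persist in-memory cache data when the process exits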

Full Changelog: 0.1.16...0.1.17

v0.1.16

19 Apr 16:08
252de14

🎉 Introduction to new functions of GPTCache

  1. Add StableDiffusion adapter (experimental)
import torch

from gptcache.adapter.diffusers import StableDiffusionPipeline
from gptcache.processor.pre import get_prompt
from gptcache import cache

cache.init(
    pre_embedding_func=get_prompt,
)
model_id = "stabilityai/stable-diffusion-2-1"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "a photo of an astronaut riding a horse on mars"
pipe(prompt=prompt).images[0]
  2. Add the speech-to-text bootcamp

  3. More convenient management of cache files

from gptcache.manager.factory import manager_factory

data_manager = manager_factory('sqlite,faiss', data_dir="test_cache", vector_params={"dimension": 5})
  4. Add a simple GPTCache server (experimental)

After starting this server, you can:

  • put data into the cache, e.g.: curl -X PUT -d "receive a hello message" "http://localhost:8000?prompt=hello"
  • get data from the cache, e.g.: curl -X GET "http://localhost:8000?prompt=hello"

Currently the server is just an exact-match (map) cache; more features are under development.

Full Changelog: 0.1.15...0.1.16

v0.1.15

18 Apr 16:11
6c635a6

🎉 Introduction to new functions of GPTCache

  1. Add the GPTCache API, which makes it easier to access different LLM models and applications
from gptcache.adapter.api import put, get
from gptcache.processor.pre import get_prompt
from gptcache import cache

cache.init(pre_embedding_func=get_prompt)
put("hello", "foo")
print(get("hello"))
  2. Add the image generation bootcamp: https://github.com/zilliztech/GPTCache/blob/main/docs/bootcamp/openai/image_generation.ipynb

What's Changed

  • Update kreciprocal docstring for updated data store interface. by @wxywb in #225
  • Add docstring for openai by @shiyu22 in #229
  • Add GPTCache api, makes it easier to access other different llm mod… by @SimFG in #227
  • Avoid Pillow installation for openai chat by @jaelgu in #230
  • Add image generation bootcamp by @shiyu22 in #231
  • Update docstring for similarity evaluation. by @wxywb in #232
  • Reorganized the __init__ file in the gptcache dir by @SimFG in #233
  • Update the version to 0.1.15 by @SimFG in #236

Full Changelog: 0.1.14...0.1.15