Releases: zilliztech/GPTCache

v0.1.14

17 Apr 22:54
eb8ac29

What's Changed

  • Fix failure to save data to the cache by @SimFG in #224

Full Changelog: 0.1.13...0.1.14

v0.1.13

17 Apr 16:45
4dddcf0

🎉 Introduction to new functions of GPTCache

  1. Add openai audio adapter (experimental)

```python
from gptcache.adapter import openai
from gptcache.processor.pre import get_file_bytes

cache.init(pre_embedding_func=get_file_bytes)

audio_file = open("speech.mp3", "rb")  # any supported audio file
openai.Audio.transcribe(
    model="whisper-1",
    file=audio_file,
)
```
  2. Improve data eviction implementation

In the future, users will have greater flexibility to customize eviction, for example by backing the cache with Redis or Memcached. Currently the default caching library is cachetools, which provides an in-memory cache; other backends are not yet supported but may be added later.
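For intuition, the LRU-style eviction that an in-memory backend like cachetools provides can be sketched with only the standard library (the class and cache contents below are illustrative, not GPTCache's actual eviction code):

```python
from collections import OrderedDict


class LRUEviction:
    """Minimal LRU cache sketching the eviction policy an
    in-memory backend such as cachetools provides."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, key):
        # Accessing a key marks it as most recently used.
        value = self.data[key]
        self.data.move_to_end(key)
        return value

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxsize:
            # Evict the least recently used entry.
            self.data.popitem(last=False)


cache_store = LRUEviction(maxsize=2)
cache_store.put("q1", "answer 1")
cache_store.put("q2", "answer 2")
cache_store.get("q1")              # "q1" is now most recently used
cache_store.put("q3", "answer 3")  # evicts "q2", the LRU entry
print(list(cache_store.data))      # ['q1', 'q3']
```

A pluggable eviction interface would let this policy be swapped for a Redis- or Memcached-backed one without touching the rest of the cache.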

What's Changed

Full Changelog: 0.1.12...0.1.13

v0.1.12

17 Apr 08:08

What's Changed

🎉 Introduction to new functions of GPTCache

  1. LLM requests can customize the top-k search parameter

```python
from gptcache.adapter import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": question},
    ],
    top_k=10,
)
```

Full Changelog: 0.1.11...0.1.12

v0.1.11

14 Apr 16:46

What's Changed

New Contributors

🎉 Introduction to new functions of GPTCache

  1. Add openai completion adapter

```python
from gptcache.adapter import openai
from gptcache.processor.pre import get_prompt

cache.init(pre_embedding_func=get_prompt)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=question,
)
```
  2. Add langchain and openai bootcamp

  3. Add openai image adapter (experimental)

```python
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()

prompt1 = 'a cat sitting besides a dog'
size1 = '256x256'

openai.Image.create(
    prompt=prompt1,
    size=size1,
    response_format='b64_json',
)
```

  4. Refine storage interface

Full Changelog: 0.1.10...0.1.11

v0.1.10

13 Apr 15:42
44cd553

What's Changed

Full Changelog: 0.1.9...0.1.10

v0.1.9

12 Apr 15:06
8ca703e

What's Changed

Full Changelog: 0.1.8...0.1.9

v0.1.8

11 Apr 15:25
0d16566

What's Changed

Full Changelog: 0.1.7...0.1.8

v0.1.7

10 Apr 15:00
4f74e5f

What's Changed

  • Fix the cache_skip param not taking effect by @SimFG in #168

Full Changelog: 0.1.5...0.1.7

v0.1.6

10 Apr 10:19

What's Changed

Full Changelog: 0.1.5...0.1.6

v0.1.5

06 Apr 15:01
e940a2e

🎉 GPTCache has officially released its first version.

Introduction

GPTCache is a library for creating a semantic cache that stores responses to LLM queries.
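For intuition, the core lookup idea of a semantic cache can be sketched in a few lines of plain Python: embed the query, find the nearest stored entry, and treat it as a hit if it is similar enough. The class, threshold, and embeddings below are illustrative, not GPTCache's actual API:

```python
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


class ToySemanticCache:
    """Store (embedding, response) pairs; a lookup hits when the
    nearest stored embedding is similar enough to the query's."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response)

    def put(self, embedding, response):
        self.entries.append((embedding, response))

    def get(self, embedding):
        best = max(
            self.entries,
            key=lambda e: cosine_similarity(e[0], embedding),
            default=None,
        )
        if best and cosine_similarity(best[0], embedding) >= self.threshold:
            return best[1]  # cache hit: skip the LLM call
        return None         # cache miss: call the LLM, then put()


toy_cache = ToySemanticCache(threshold=0.9)
toy_cache.put([1.0, 0.0], "cached answer")
print(toy_cache.get([0.99, 0.05]))  # similar query -> "cached answer"
print(toy_cache.get([0.0, 1.0]))    # dissimilar query -> None
```

In the real library, the embedding function, vector store, and similarity evaluator listed below each replace one piece of this sketch.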

What's Supported

  • LLM Adapter
    • Support OpenAI ChatGPT API
    • Support langchain
  • Embedding
    • Disable embedding. This will turn GPTCache into a keyword-matching cache
    • Support OpenAI embedding API
    • Support ONNX with the GPTCache/paraphrase-albert-onnx model
    • Support Hugging Face embedding API
    • Support Cohere embedding API
    • Support fastText embedding API
    • Support SentenceTransformers embedding API
  • Cache Storage
    • Support SQLite
    • Support PostgreSQL
    • Support MySQL
    • Support MariaDB
    • Support SQL Server
    • Support Oracle
  • Vector Store
    • Support Milvus
    • Support Zilliz Cloud
    • Support FAISS
  • Similarity Evaluator
    • The distance we obtain from the Vector Store
    • A model-based similarity determined using the GPTCache/albert-duplicate-onnx model from ONNX
    • Exact matches between the input request and the requests obtained from the Vector Store
    • Distance represented by applying linalg.norm from numpy to the embeddings
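The last evaluator is straightforward to sketch with numpy (the embeddings below are made up):

```python
import numpy as np


def embedding_distance(query_emb, cached_emb):
    # L2 distance between the two embeddings; a smaller value means
    # the cached request is a closer match to the incoming query.
    return float(np.linalg.norm(np.asarray(query_emb) - np.asarray(cached_emb)))


query = [0.1, 0.2, 0.3]
close = [0.1, 0.2, 0.31]
far = [0.9, 0.8, 0.7]

print(embedding_distance(query, close) < embedding_distance(query, far))  # True
```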

Full Changelog: https://github.com/zilliztech/GPTCache/commits/0.1.5