
Releases: zilliztech/GPTCache

v0.1.15

18 Apr 16:11
6c635a6

🎉 Introduction to new functions of GPTCache

  1. Added a GPTCache API that makes it easier to access different LLM models and applications:

```python
from gptcache.adapter.api import put, get
from gptcache.processor.pre import get_prompt
from gptcache import cache

cache.init(pre_embedding_func=get_prompt)
put("hello", "foo")
print(get("hello"))  # prints "foo"
```

  2. Added an image generation bootcamp, link: https://github.com/zilliztech/GPTCache/blob/main/docs/bootcamp/openai/image_generation.ipynb

What's Changed

  • Update kreciprocal docstring for updated data store interface. by @wxywb in #225
  • Add docstring for openai by @shiyu22 in #229
  • Add GPTCache api, makes it easier to access other different llm mod… by @SimFG in #227
  • Avoid Pillow installation for openai chat by @jaelgu in #230
  • Add image generation bootcamp by @shiyu22 in #231
  • Update docstring for similarity evaluation. by @wxywb in #232
  • Reorganized the __init__ file in the gptcache dir by @SimFG in #233
  • Update the version to 0.1.15 by @SimFG in #236

Full Changelog: 0.1.14...0.1.15

v0.1.14

17 Apr 22:54
eb8ac29

What's Changed

  • Fix to fail to save the data to cache by @SimFG in #224

Full Changelog: 0.1.13...0.1.14

v0.1.13

17 Apr 16:45
4dddcf0

🎉 Introduction to new functions of GPTCache

  1. Added an OpenAI audio adapter (experimental):

```python
from gptcache import cache
from gptcache.adapter import openai
from gptcache.processor.pre import get_file_bytes

cache.init(pre_embedding_func=get_file_bytes)

# "audio.wav" is an illustrative path
with open("audio.wav", "rb") as audio_file:
    openai.Audio.transcribe(
        model="whisper-1",
        file=audio_file,
    )
```

  2. Improved the data eviction implementation.

In the future, users will have greater flexibility to customize eviction methods, such as by using Redis or Memcached. Currently, the default caching library is cachetools, which provides an in-memory cache. Other libraries are not currently supported, but may be added in the future.
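To picture what a cachetools-style in-memory LRU eviction does, here is a minimal standard-library sketch; this is an illustration only, not GPTCache's actual implementation, and the class and variable names are hypothetical:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal in-memory LRU cache, illustrating the kind of
    eviction policy that an in-memory caching library provides."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

cache_store = LRUCache(maxsize=2)
cache_store.put("a", 1)
cache_store.put("b", 2)
cache_store.get("a")         # "a" becomes most recently used
cache_store.put("c", 3)      # capacity exceeded, evicts "b"
print(cache_store.get("b"))  # None
print(cache_store.get("a"))  # 1
```

A pluggable eviction interface would let a Redis or Memcached backend replace this in-memory store without changing the cache's read/write path.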

What's Changed

Full Changelog: 0.1.12...0.1.13

v0.1.12

17 Apr 08:08

What's Changed

🎉 Introduction to new functions of GPTCache

  1. The LLM request can customize the top-k search parameter:

```python
from gptcache.adapter import openai

# question is the user's query string
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": question},
    ],
    top_k=10,
)
```
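Conceptually, top_k bounds how many nearest cached entries the vector search returns for similarity evaluation. A toy standard-library sketch of top-k selection (the function and data here are illustrative, not GPTCache's API):

```python
import heapq

def top_k_similar(query_vec, candidates, k):
    """Return the k candidate vectors closest to query_vec
    (smallest Euclidean distance), as (distance, index) pairs."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(query_vec, v)) ** 0.5
    scored = [(dist(vec), i) for i, vec in enumerate(candidates)]
    return heapq.nsmallest(k, scored)

candidates = [[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]]
print(top_k_similar([0.0, 0.0], candidates, k=2))
# → [(0.0, 0), (1.0, 1)]
```

A larger top_k gives the similarity evaluator more candidates to rank, at the cost of a slightly more expensive search.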

Full Changelog: 0.1.11...0.1.12

v0.1.11

14 Apr 16:46

What's Changed

New Contributors

🎉 Introduction to new functions of GPTCache

  1. Added an OpenAI completion adapter:

```python
from gptcache import cache
from gptcache.adapter import openai
from gptcache.processor.pre import get_prompt

cache.init(pre_embedding_func=get_prompt)

# question is the user's query string
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=question,
)
```

  2. Added a LangChain and OpenAI bootcamp.

  3. Added an OpenAI image adapter (experimental):

```python
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()

prompt1 = 'a cat sitting beside a dog'
size1 = '256x256'

openai.Image.create(
    prompt=prompt1,
    size=size1,
    response_format='b64_json',
)
```

  4. Refined the storage interface.

Full Changelog: 0.1.10...0.1.11

v0.1.10

13 Apr 15:42
44cd553

What's Changed

Full Changelog: 0.1.9...0.1.10

v0.1.9

12 Apr 15:06
8ca703e

What's Changed

Full Changelog: 0.1.8...0.1.9

v0.1.8

11 Apr 15:25
0d16566

What's Changed

Full Changelog: 0.1.7...0.1.8

v0.1.7

10 Apr 15:00
4f74e5f

What's Changed

  • Fix the cache_skip param not taking effect by @SimFG in #168

Full Changelog: 0.1.5...0.1.7

v0.1.6

10 Apr 10:19

What's Changed

Full Changelog: 0.1.5...0.1.6