Releases · zilliztech/GPTCache
v0.1.15
🎉 Introduction to new functions of GPTCache
- Add the GPTCache API, which makes it easier to access different LLM models and applications
```python
from gptcache.adapter.api import put, get
from gptcache.processor.pre import get_prompt
from gptcache import cache

cache.init(pre_embedding_func=get_prompt)
put("hello", "foo")
print(get("hello"))  # prints the cached answer: "foo"
```
- Add image generation bootcamp, link: https://github.com/zilliztech/GPTCache/blob/main/docs/bootcamp/openai/image_generation.ipynb
What's Changed
- Update kreciprocal docstring for updated data store interface. by @wxywb in #225
- Add docstring for openai by @shiyu22 in #229
- Add `GPTCache api`, makes it easier to access other different llm mod… by @SimFG in #227
- Avoid Pillow installation for openai chat by @jaelgu in #230
- Add image generation bootcamp by @shiyu22 in #231
- Update docstring for similarity evaluation. by @wxywb in #232
- Reorganized the `__init__` file in the gptcache dir by @SimFG in #233
- Update the version to `0.1.15` by @SimFG in #236
Full Changelog: 0.1.14...0.1.15
v0.1.14
v0.1.13
🎉 Introduction to new functions of GPTCache
- Add openai audio adapter (experimental)
```python
from gptcache import cache
from gptcache.adapter import openai
from gptcache.processor.pre import get_file_bytes

cache.init(pre_embedding_func=get_file_bytes)
audio_file = open("audio.mp3", "rb")  # path to the audio file to transcribe
openai.Audio.transcribe(
    model="whisper-1",
    file=audio_file
)
```
- Improve data eviction implementation
In the future, users will have greater flexibility to customize eviction methods, for example by using Redis or Memcached. Currently the default caching library is cachetools, which provides an in-memory cache; other backends are not yet supported but may be added later.
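As a rough sketch (not taken from the release notes), the eviction behaviour could be tuned when building the data manager. The `max_size`, `clean_size`, and `eviction` arguments to `get_data_manager` below are assumptions about the factory's parameters, and the sqlite/faiss/Onnx combination is only the usual example configuration:

```python
from gptcache import cache
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

onnx = Onnx()
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
    max_size=1000,    # assumed: start evicting once the cache holds 1000 entries
    clean_size=200,   # assumed: number of entries removed per eviction pass
    eviction="LRU",   # assumed: cachetools-backed in-memory LRU policy
)
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
```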
What's Changed
- Add openai audio adapter by @shiyu22 in #220
- Improve data eviction implementation by @SimFG in #221
- Update the version to `0.1.13` by @SimFG in #222
Full Changelog: 0.1.12...0.1.13
v0.1.12
What's Changed
- Add object storage by @junjiejiangjjj in #213
- Only when the eviction policy is LRU, the data access_time is updated by @SimFG in #214
- Add logger for gptcache by @shiyu22 in #216
- The llm request can customize topk search parameters by @SimFG in #217
- Scalar return emb in ndarray by @junjiejiangjjj in #218
🎉 Introduction to new functions of GPTCache
- The LLM request can customize the top_k search parameter
```python
from gptcache import cache
from gptcache.adapter import openai

cache.init()
question = "what is github?"  # example question
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": question},
    ],
    top_k=10,  # number of candidate answers fetched from the cache search
)
```
Full Changelog: 0.1.11...0.1.12
v0.1.11
What's Changed
- Add langchain examples by @shiyu22 in #196
- Enable Python syntax highlighting by @Pouyanpi in #199
- Add openai Completion adapter by @shiyu22 in #202
- Add openai bootcamp by @shiyu22 in #207
- Add openai.Image.create in adapter by @jaelgu in #208
- Refine scalar storage by @junjiejiangjjj in #205
- Improve the langchain example by @SimFG in #211
New Contributors
- @Pouyanpi made their first contribution in #199
🎉 Introduction to new functions of GPTCache
- Add openai Completion adapter
```python
from gptcache import cache
from gptcache.adapter import openai
from gptcache.processor.pre import get_prompt

cache.init(pre_embedding_func=get_prompt)
question = "what is github?"  # example prompt
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=question
)
```
- Add langchain and openai bootcamp (see the LangChain sketch after this list)
- Add openai image adapter (experimental)
```python
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
prompt1 = 'a cat sitting besides a dog'
size1 = '256x256'
openai.Image.create(
    prompt=prompt1,
    size=size1,
    response_format='b64_json'
)
```
- Refine storage interface
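As mentioned in the bootcamp item above, GPTCache can also sit in front of a LangChain LLM. The following is a minimal sketch of that pattern; the `LangChainLLMs` wrapper from `gptcache.adapter.langchain_models`, its `llm=` constructor argument, and the example question are assumptions based on the LangChain bootcamp and may differ between versions:

```python
from langchain.llms import OpenAI

from gptcache import cache
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.processor.pre import get_prompt

cache.init(pre_embedding_func=get_prompt)
cache.set_openai_key()

# Wrap a LangChain LLM so every call is served from the cache when possible.
llm = LangChainLLMs(llm=OpenAI(temperature=0))
print(llm("what is github?"))  # example question
```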
Full Changelog: 0.1.10...0.1.11
v0.1.10
v0.1.9
What's Changed
- Support to import data set by @SimFG in #182 (see the import sketch after this list)
- Refine vector interface by @junjiejiangjjj in #187
- Add a pre-process function for removing the prompt info by @SimFG in #186
- Update the version to `0.1.9` by @SimFG in #189
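Relating to the data-set import added in #182, here is a rough sketch of a batch import. The `cache.import_data(questions=..., answers=...)` call is an assumption about the API, and the sample question/answer pairs are made up:

```python
from gptcache import cache
from gptcache.processor.pre import get_prompt

cache.init(pre_embedding_func=get_prompt)

# Assumed API: batch-import parallel lists of questions and their answers
# into the configured data manager so they can later be served from the cache.
questions = ["what is GPTCache?", "how do I install it?"]
answers = ["a semantic cache for LLM queries", "pip install gptcache"]
cache.import_data(questions=questions, answers=answers)
```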
Full Changelog: 0.1.8...0.1.9
v0.1.8
What's Changed
- Add hnswlib by @junjiejiangjjj in #167
- Update the contributing doc and remove `nop` post-process function by @SimFG in #172
- Update LangChainLLMs with LLM by @shiyu22 in #170
- Import package with utils by @shiyu22 in #173
- Remove `faiss-cpu` version to fix the python 3.10 download error by @SimFG in #174
- Update the version to `0.1.8` by @SimFG in #176
Full Changelog: 0.1.7...0.1.8
v0.1.7
v0.1.6
What's Changed
- Add more API references by @jaelgu in #128
- Update the benchmark result by @SimFG in #129
- Update docs by @shanghaikid in #130
- Update index.rst for changes in README by @jaelgu in #131
- Add workflow lint & linkcheck by @Bennu-Li in #133
- Make the cached result more similar to openAI response by @SimFG in #134
- Update the contributing doc by @SimFG in #137
- Update the test workflow by @SimFG in #146
- Rebase dev branch - 2023.4.10 by @SimFG in #162
Full Changelog: 0.1.5...0.1.6