
v0.1.26

@SimFG released this 23 May 13:34

🎉 Introduction to new functions of GPTCache

  1. Support the PaddleNLP embedding (@vax521); a sketch of wiring it into a similar cache follows this list
from gptcache.embedding import PaddleNLP

test_sentence = 'Hello, world.'
encoder = PaddleNLP(model='ernie-3.0-medium-zh')
embed = encoder.to_embeddings(test_sentence)
  2. Support the OpenAI Moderation API
from gptcache.adapter import openai
from gptcache.adapter.api import init_similar_cache
from gptcache.processor.pre import get_openai_moderation_input

init_similar_cache(pre_func=get_openai_moderation_input)
openai.Moderation.create(
    input="hello, world",
)
  3. Add the llama_index bootcamp, which shows how GPTCache works with LlamaIndex

details: WebPage QA
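
For item 1, here is a minimal sketch of plugging the new PaddleNLP encoder into a similar cache. It assumes that init_similar_cache accepts an embedding argument and sizes its default vector store from the encoder's dimension, and it uses last_content as the pre-processing function for chat messages; adjust these to your setup.

from gptcache.adapter import openai
from gptcache.adapter.api import init_similar_cache
from gptcache.embedding import PaddleNLP
from gptcache.processor.pre import last_content

# Assumption: init_similar_cache can take the encoder via `embedding`
# and builds its default vector store from encoder.dimension.
encoder = PaddleNLP(model='ernie-3.0-medium-zh')
init_similar_cache(pre_func=last_content, embedding=encoder)

# Later requests with similar content are answered from the cache
# instead of calling the OpenAI API again.
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'Hello, world.'}],
)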

What's Changed

  • Replace summarization test model. by @wxywb in #368
  • Add the llama index bootcamp by @SimFG in #371
  • Update the llama index example url by @SimFG in #372
  • Support the openai moderation adapter by @SimFG in #376
  • Paddlenlp embedding support by @SimFG in #377
  • Update the cache config template file and example directory by @SimFG in #380

Full Changelog: 0.1.25...0.1.26