
Add detailed recipe for RedisVL<>LangCache integration #114

@tylerhutcherson

Description


In version 0.11.0, @abrookins added support for LangCache within RedisVL as another implementation of the LLMCache interface. The benefit is that users already plugged into the RedisVL ecosystem get an easy transition path to the fully managed service, and we can layer value-add features around the Redis product to enhance the user experience.

To support a customer workshop on 11/20/25, we need an end-to-end guide on using LangCache (via the RedisVL wrapper) and semantic caching techniques in general. This guide has also been needed for a long time and will provide value well beyond the current initiative.
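
For orientation, here is a minimal sketch of the LLMCache usage pattern the recipe will build on. It is written against the familiar SemanticCache API; per the note above, the LangCache wrapper implements the same interface, but the exact import path and constructor arguments for the LangCache-backed cache are assumptions to verify against the 0.11.0 release notes.

```python
from redisvl.extensions.llmcache import SemanticCache

# Assumed setup with a local Redis URL; swap in the LangCache-backed
# implementation per the 0.11.0 docs, since it exposes the same LLMCache interface.
cache = SemanticCache(
    name="faq_cache",
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,  # max vector distance that still counts as a hit
)

# Write a prompt/response pair, then check with a semantically similar prompt.
cache.store(
    prompt="How do I reset my password?",
    response="Use the 'Forgot password' link on the login page.",
)
hits = cache.check(prompt="I forgot my password, what should I do?")
if hits:
    print(hits[0]["response"])
```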

Proposed recipe flow:

  • Environment setup (see the environment sketch after this list)
    • Package installation and imports
    • LangCache setup steps (link to the cloud signup, make sure users add the right attributes to the cache, and confirm they copy the right IDs)
    • LangCache and Redis environment variables
    • OpenAI environment variables
  • Load datasets (see the dataset-loading sketch after this list)
    • Knowledge base of PDF data (a single PDF broken into chunks, like a typical RAG setup)
    • Simple FAQ dataset (maybe derived from the above using an LLM, à la the doc2cache technique)
    • Test/eval dataset we can use to tune cache behavior/performance
  • Pre-load the semantic cache with FAQs (use the write APIs and include attribute tags for optional filtering later; see the pre-loading sketch after this list)
  • Test retrieving from the cache using different matching strategies and distance thresholds (see the retrieval sketch after this list)
  • Tune the cache threshold using the test data and show the improved performance (see the tuning sketch after this list)
  • Integrate the langcache-embed cross-encoder model from Hugging Face (see the reranking sketch after this list)
  • Put it in the context of a simple RAG chain and show the impact on performance (see the RAG sketch after this list)
  • Cleanup
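
Environment setup sketch. The LangCache variable names below are illustrative placeholders only; the recipe should use whatever names the LangCache cloud console and the RedisVL docs actually specify.

```python
import os

# Hypothetical variable names for illustration; confirm the real ones against
# the LangCache cloud console and the RedisVL 0.11.0 docs.
LANGCACHE_API_KEY = os.environ["LANGCACHE_API_KEY"]        # key copied from the cloud console
LANGCACHE_CACHE_ID = os.environ["LANGCACHE_CACHE_ID"]      # ID of the cache created at signup
LANGCACHE_SERVER_URL = os.environ["LANGCACHE_SERVER_URL"]  # LangCache service endpoint

REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]              # used by the OpenAI client in the RAG step
```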
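
Dataset-loading sketch: chunk a single PDF RAG-style, then derive a small FAQ set from the chunks with an LLM (doc2cache-style). pypdf and the OpenAI client are assumed to be installed; the file name, prompt, and model choice are illustrative.

```python
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

# 1) Chunk the PDF into fixed-size pieces (typical RAG preprocessing).
reader = PdfReader("knowledge_base.pdf")  # illustrative file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)
chunk_size = 1000
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# 2) doc2cache-style FAQ derivation: ask an LLM for Q/A pairs grounded in each chunk.
def derive_faqs(chunk: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Write two short FAQ question/answer pairs grounded only in the given text."},
            {"role": "user", "content": chunk},
        ],
    )
    return resp.choices[0].message.content

faqs = [derive_faqs(c) for c in chunks[:5]]  # small sample for the recipe
```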
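
Pre-loading sketch: write the FAQs with the cache write API and attach an attribute tag for optional filtering later. The `filterable_fields` and `filters` argument names follow the SemanticCache API and may differ for the LangCache wrapper, so treat them as assumptions.

```python
from redisvl.extensions.llmcache import SemanticCache

cache = SemanticCache(
    name="faq_cache",
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,
    filterable_fields=[{"name": "topic", "type": "tag"}],  # attribute tag for optional filtering
)

faq_pairs = [  # illustrative entries; in the recipe these come from the derived FAQ dataset
    {"question": "How do I rotate my API key?",
     "answer": "Generate a new key in the console, then revoke the old one.",
     "topic": "security"},
    {"question": "Which regions is the service available in?",
     "answer": "All major cloud regions; see the product page for the current list.",
     "topic": "general"},
]

for faq in faq_pairs:
    cache.store(
        prompt=faq["question"],
        response=faq["answer"],
        filters={"topic": faq["topic"]},  # stored alongside the entry for tag filtering
    )
```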
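
Retrieval sketch, continuing from the pre-loading block: check the cache at a few distance thresholds and optionally restrict matches by the attribute tag. `set_threshold` and `filter_expression` mirror the SemanticCache API; confirm they carry over to the LangCache wrapper.

```python
from redisvl.query.filter import Tag

query = "What's the process for rotating an API key?"

# Sweep a few thresholds to see where paraphrases start (or stop) hitting.
for threshold in (0.05, 0.1, 0.2, 0.3):
    cache.set_threshold(threshold)
    hits = cache.check(prompt=query, num_results=1)
    print(threshold, hits[0]["response"] if hits else "cache miss")

# Optional: only match entries tagged with a given topic.
hits = cache.check(prompt=query, filter_expression=Tag("topic") == "security")
```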
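
Threshold-tuning sketch, continuing from the blocks above. The eval-set format (a paraphrased query plus the cached response it should hit, or None for a query that should miss) is an assumption about how the recipe's test data might be shaped; the sweep simply picks the threshold with the best F1.

```python
# Tiny illustrative eval set; the real one comes from the test/eval dataset above.
eval_set = [
    {"query": "How can I swap out my API key?",
     "expected": "Generate a new key in the console, then revoke the old one."},
    {"query": "Can I pay with a purchase order?", "expected": None},
]

def evaluate(cache, eval_set, threshold) -> float:
    """F1 over the eval set at a given distance threshold."""
    cache.set_threshold(threshold)
    tp = fp = fn = 0
    for ex in eval_set:
        hits = cache.check(prompt=ex["query"], num_results=1)
        if hits and ex["expected"] and hits[0]["response"] == ex["expected"]:
            tp += 1
        elif hits:
            fp += 1  # hit the wrong entry, or a query that should have missed
        elif ex["expected"]:
            fn += 1  # missed a query that has a cached answer
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

best_threshold = max((0.05, 0.1, 0.15, 0.2, 0.25, 0.3),
                     key=lambda t: evaluate(cache, eval_set, t))
cache.set_threshold(best_threshold)
```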
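
Reranking sketch: pull several semantic candidates from the cache and let a cross-encoder make the final accept/reject call. The checkpoint below is a well-known stand-in; swap in the actual langcache-embed cross-encoder checkpoint from Hugging Face. The `min_score` cutoff is an illustrative choice.

```python
from sentence_transformers import CrossEncoder

# Stand-in checkpoint; replace with the langcache-embed cross-encoder from Hugging Face.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def check_with_rerank(cache, prompt: str, min_score: float = 0.5):
    # Pull a handful of semantic candidates, then score (query, cached prompt) pairs.
    candidates = cache.check(prompt=prompt, num_results=5, return_fields=["prompt", "response"])
    if not candidates:
        return None
    scores = reranker.predict([(prompt, c["prompt"]) for c in candidates])
    best_idx = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best_idx] if scores[best_idx] >= min_score else None
```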
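
RAG-chain sketch: check the cache first, and only on a miss retrieve context, call the LLM, and write the answer back so the next similar question is served from cache. `retrieve_context` is a stand-in for whatever vector search the knowledge base uses, and the model choice is illustrative. Timing the cached vs. uncached path is an easy way to show the performance impact.

```python
import time
from openai import OpenAI

client = OpenAI()

def retrieve_context(question: str) -> str:
    # Stand-in for the knowledge-base vector search (e.g., a RedisVL SearchIndex query).
    raise NotImplementedError

def answer(question: str) -> str:
    # 1) Cache-first lookup: a semantic hit skips retrieval and generation entirely.
    hits = cache.check(prompt=question, num_results=1)
    if hits:
        return hits[0]["response"]

    # 2) Cache miss: run the normal RAG path.
    context = retrieve_context(question)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    reply = resp.choices[0].message.content

    # 3) Write back so the next semantically similar question is a cache hit.
    cache.store(prompt=question, response=reply)
    return reply

start = time.perf_counter()
answer("How do I rotate my API key?")  # should be a cache hit after pre-loading
print(f"answered in {time.perf_counter() - start:.3f}s")
```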

Labels

documentation (Improvements or additions to documentation), enhancement (New feature or request)
