From cb1880eead9236951e0d45b0fde871f871f622bc Mon Sep 17 00:00:00 2001
From: Brian Sam-Bodden
Date: Wed, 1 Oct 2025 06:52:17 -0700
Subject: [PATCH] fix: fix title/heading on the notebook

---
 docs/user_guide/03_llmcache.ipynb | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/docs/user_guide/03_llmcache.ipynb b/docs/user_guide/03_llmcache.ipynb
index cd60298b..a33c5c33 100644
--- a/docs/user_guide/03_llmcache.ipynb
+++ b/docs/user_guide/03_llmcache.ipynb
@@ -1,5 +1,10 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "source": "# LLM Caching\n\nThis notebook demonstrates how to use RedisVL's `SemanticCache` to cache LLM responses based on semantic similarity. Semantic caching can significantly reduce API costs and latency by retrieving cached responses for semantically similar prompts instead of making redundant API calls.\n\nKey features covered:\n- Basic cache operations (store, check, clear)\n- Customizing semantic similarity thresholds\n- TTL policies for cache expiration\n- Performance benchmarking\n- Access controls with tags and filters for multi-user scenarios\n\nPrerequisites:\n- Ensure `redisvl` is installed in your Python environment\n- Have a running instance of [Redis Stack](https://redis.io/docs/install/install-stack/) or [Redis Cloud](https://redis.io/cloud)\n- OpenAI API key for the examples",
+   "metadata": {}
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -925,4 +930,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}
\ No newline at end of file