From 601e6cf2d21c4cec3acd799adf9aa9ab85eb9822 Mon Sep 17 00:00:00 2001
From: Kyle Banker <11589+banker@users.noreply.github.com>
Date: Fri, 5 Sep 2025 14:06:27 -0600
Subject: [PATCH] Fix capitalization and improve cache management description

---
 content/develop/ai/langcache/_index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/develop/ai/langcache/_index.md b/content/develop/ai/langcache/_index.md
index 3943cd0a5f..b94af528cf 100644
--- a/content/develop/ai/langcache/_index.md
+++ b/content/develop/ai/langcache/_index.md
@@ -33,8 +33,8 @@ Using LangCache as a semantic caching service has the following benefits:
 
 - **Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently-requested responses.
 - **Faster AI app responses**: Get faster AI responses by retrieving previously-stored requests from memory.
-- **Simpler Deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
-- **Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
+- **Simpler deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
+- **Advanced cache management**: Manage data access, privacy, and eviction protocols. Monitor usage and cache hit rates.
 
 LangCache works well for the following use cases: