RedisCache doesn't handle errors from redis. #16866
Comments
🤖 Thank you for bringing this to our attention. It seems the current implementation of RedisCache in LangChain does not handle Redis connection errors gracefully: when the Redis server is unreachable or runs out of memory, it throws an error instead of falling back to direct API calls to the LLM. Your suggestion that LangChain handle Redis failures internally and fall back to direct LLM calls on failure is a good one; it would make the system more robust to failures. However, as a bot, I'm unable to create pull requests or issues. You're welcome to raise a PR for this feature. When you do, please include tests that simulate Redis connection failures to ensure the fallback mechanism works as expected. For reference, you can look at the current implementation of RedisCache in the LangChain codebase; the relevant tests are in the libs/langchain/tests/integration_tests/cache/test_redis_cache.py file. Please let me know if you need any further information or assistance.
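The kind of test the bot suggests could be sketched as below. Note this is a hypothetical, minimal stand-in for the real RedisCache (`TinyRedisCache` and its methods are invented for illustration; the real class lives in LangChain's cache module), shown only to demonstrate the pattern of simulating a connection failure with a mock:

```python
from unittest import mock

# Hypothetical minimal stand-in for RedisCache, just enough to show the
# mocking pattern; the real implementation lives in LangChain's cache module.
class TinyRedisCache:
    def __init__(self, client):
        self.client = client

    def lookup(self, prompt, llm_string):
        # A real lookup would deserialize stored generations; this only
        # forwards to the client so the failure path is visible.
        return self.client.get(f"{prompt}:{llm_string}")

def test_lookup_surfaces_connection_error():
    client = mock.Mock()
    # The builtin ConnectionError stands in for redis-py's
    # redis.exceptions.ConnectionError in this simulation.
    client.get.side_effect = ConnectionError("Connection refused")
    cache = TinyRedisCache(client)
    try:
        cache.lookup("prompt", "llm-config")
        raised = False
    except ConnectionError:
        raised = True
    # Current behavior: the backend error propagates to the caller.
    assert raised

test_lookup_surfaces_connection_error()
```

A fallback implementation would flip the assertion: the lookup should return `None` (a cache miss) instead of raising.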
I don't understand exactly what is failing, because I tried invoking the LLM with the cache multiple times and everything works fine on my end. Can you please post your code snippet? Also, if you want to handle Redis failures, just use your own custom exception handling.
@keenborder786, this issue doesn't happen every time. As mentioned, it happens only when the Redis server fails for some reason. If you want to replicate this, connect to a local Redis server and try running the code. To simulate a Redis server failure, kill the server and then execute the LangChain code.
Okay. @dingusagar I guess you have already created a PR to resolve this issue.
… AsyncRedisCache (#17171)
- **Description:** The existing `RedisCache` implementation lacks proper handling for Redis client failures, such as `ConnectionRefusedError`, leading to subsequent failures in pipeline components like LLM calls. This pull request improves error handling for Redis client issues, ensuring more robust and graceful handling of such errors.
- **Issue:** Fixes #16866
- **Dependencies:** No new dependencies
- **Twitter handle:** N/A

Co-authored-by: snsten <>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Checked other resources
Example Code
Same code as in the docs here
Error Message and Stack Trace (if applicable)
Description
I am using RedisCache as per the docs here
I was testing how robust the system is if the Redis connection fails for some reason or the server runs out of memory.
It looks like if the Redis connection URL is not reachable, the system throws an error.
I want LangChain to internally handle Redis failures and make direct API calls to the LLM on failure. We can, of course, log the error. This would make the system more robust to failures.
Does it make sense to add this feature? If so, I can help with raising a PR.
System Info
langchain==0.0.333