
RedisCache doesn't handle errors from redis. #16866

Closed
4 tasks done
dingusagar opened this issue Feb 1, 2024 · 5 comments · Fixed by #17171
Labels
🤖:improvement Medium size change to existing code to handle new use-cases Ɑ: models Related to LLMs or chat model modules 🔌: redis Primarily related to Redis integrations

Comments

@dingusagar

dingusagar commented Feb 1, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

Same code as in the docs here

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "/Users/dingusagar/inference.py", line 181, in infer
    response = self.chain.invoke(inputs)
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/schema/runnable/base.py", line 1213, in invoke
    input = step.invoke(
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 142, in invoke
    self.generate_prompt(
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 349, in generate
    raise e
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 339, in generate
    self._generate_with_cache(
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 500, in _generate_with_cache
    cache_val = llm_cache.lookup(prompt, llm_string)
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/cache.py", line 392, in lookup
    results = self.redis.hgetall(self._key(prompt, llm_string))
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/commands/core.py", line 4867, in hgetall
    return self.execute_command("HGETALL", name)
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/client.py", line 1255, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 1441, in get_connection
    connection.connect()
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 704, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to 127.0.0.1:6379. Connection refused.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 698, in connect
    sock = self.retry.call_with_retry(
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 699, in <lambda>
    lambda: self._connect(), lambda error: self.disconnect(error)
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 1089, in _connect
    sock = super()._connect()
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 987, in _connect
    raise err
  File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 975, in _connect
    sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused

Description

I am using RedisCache as per the docs here

I was testing how robust the system is if the Redis connection fails for some reason or the server runs out of memory.
It looks like if the Redis connection URL is not reachable, the system throws an error.

I would like LangChain to internally handle a Redis failure and fall back to a direct API call to the LLM; we can of course log the error. This would make the system more robust to failures.

Does it make sense to add this feature? If so, I can help with raising a PR.
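A minimal sketch of what such a fallback could look like, written as a wrapper around any cache object exposing `lookup`/`update` (all names here are hypothetical and not part of the LangChain API; this is one possible design, not the eventual fix):

```python
import logging

logger = logging.getLogger(__name__)


class SafeCache:
    """Hypothetical wrapper: swallow cache-backend errors so the chain
    falls through to a direct LLM call instead of crashing."""

    def __init__(self, inner_cache):
        self.inner_cache = inner_cache

    def lookup(self, prompt, llm_string):
        try:
            return self.inner_cache.lookup(prompt, llm_string)
        except Exception:
            # Treat any backend failure (e.g. a redis ConnectionError)
            # as a cache miss so the LLM call proceeds normally.
            logger.warning("Cache lookup failed; falling back to LLM call",
                           exc_info=True)
            return None

    def update(self, prompt, llm_string, return_val):
        try:
            self.inner_cache.update(prompt, llm_string, return_val)
        except Exception:
            logger.warning("Cache update failed; result not cached",
                           exc_info=True)
```

In usage this would wrap the existing cache, e.g. something along the lines of `set_llm_cache(SafeCache(RedisCache(redis_=Redis())))`.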

System Info

langchain==0.0.333


dosubot bot commented Feb 1, 2024

🤖

Thank you for bringing this to our attention. It seems like the current implementation of RedisCache in LangChain does not handle Redis connection errors gracefully. When the Redis server is not reachable or goes out of memory, it throws an error instead of falling back to direct API calls to the LLM.

Your suggestion to make LangChain internally handle Redis failures and do direct API calls to the LLM on failure is a good one. This would indeed make the system more robust to failures. However, as a bot, I'm unable to create pull requests or issues.

You're welcome to raise a PR for this feature. When you do, please make sure to include tests that simulate Redis connection failures to ensure that the fallback mechanism works as expected.

For reference, you can look at the current implementation of RedisCache in the LangChain codebase. The RedisCache is implemented as a class that provides methods for updating and retrieving data from a Redis database. You can find the relevant code in the libs/langchain/tests/integration_tests/cache/test_redis_cache.py file.

Please let me know if you need any further information or assistance.


@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:improvement Medium size change to existing code to handle new use-cases 🔌: redis Primarily related to Redis integrations labels Feb 1, 2024
@dingusagar
Author

The bot response is good. But would appreciate it if a real human could acknowledge and let me know if this feature request makes sense.
@sbusso @jarib @deepblue

@keenborder786
Contributor

I don't understand exactly what is failing, because I tried invoking the LLM with the cache multiple times and everything works fine on my end. Can you please post your code snippet? Also, if you want to handle a Redis failure, you can wrap the call in your own exception handling.

@dingusagar
Author

@keenborder786, this issue doesn't happen every time. As mentioned, it happens only when the Redis server fails for some reason. To replicate it, connect to a local Redis server and run the code; then, to simulate a server failure, kill the server and execute the LangChain code again.
The code used is the same as in the docs.
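The underlying failure can be reproduced without LangChain at all: once the server is killed, the redis client's connect step is just a TCP connect that gets refused. A minimal illustration, using a freshly freed local port to stand in for the dead Redis server:

```python
import socket


def simulate_dead_redis():
    """Return True if connecting to a just-closed local port is refused."""
    # Grab a free port, then close the listener so nothing accepts on it --
    # this stands in for a Redis server that has just been killed.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    port = listener.getsockname()[1]
    listener.close()
    try:
        socket.create_connection(("127.0.0.1", port), timeout=1)
    except ConnectionRefusedError:
        # Same OS-level error that redis-py wraps into
        # redis.exceptions.ConnectionError in the traceback above.
        return True
    return False
```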

@keenborder786
Contributor

Okay @dingusagar, I have already created a PR to resolve this issue.

eyurtsev added a commit that referenced this issue Feb 21, 2024
… AsyncRedisCache (#17171)

- **Description:**
The existing `RedisCache` implementation lacks proper handling for redis
client failures, such as `ConnectionRefusedError`, leading to subsequent
failures in pipeline components like LLM calls. This pull request aims
to improve error handling for redis client issues, ensuring a more
robust and graceful handling of such errors.

  - **Issue:**  Fixes #16866
  - **Dependencies:** No new dependency
  - **Twitter handle:** N/A

Co-authored-by: snsten <>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
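In outline, the fix catches redis client errors inside the cache's `lookup` and degrades to a cache miss rather than propagating the exception. A simplified sketch of that idea, not the exact code from the PR (`RedisCacheSketch` and its `_key` helper are illustrative stand-ins):

```python
import logging

logger = logging.getLogger(__name__)


class RedisCacheSketch:
    """Simplified sketch of error handling around the redis client.

    `redis_client` is any object with an `hgetall`-style method; in the
    real RedisCache this would be a redis.Redis instance.
    """

    def __init__(self, redis_client):
        self.redis = redis_client

    def _key(self, prompt, llm_string):
        # Illustrative key scheme only.
        return f"cache:{hash((prompt, llm_string))}"

    def lookup(self, prompt, llm_string):
        try:
            results = self.redis.hgetall(self._key(prompt, llm_string))
        except Exception:
            # e.g. redis.exceptions.ConnectionError when the server is down:
            # log it and report a cache miss so the LLM call still happens.
            logger.error("Redis lookup failed", exc_info=True)
            return None
        return results or None
```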
k8si pushed a commit to Mozilla-Ocho/langchain that referenced this issue Feb 22, 2024
al1p pushed a commit to al1p/langchain that referenced this issue Feb 27, 2024
haydeniw pushed a commit to haydeniw/langchain that referenced this issue Feb 27, 2024