Redix.ConnectionError the connection to Redis is closed #44

Open
edds opened this issue Mar 3, 2023 · 5 comments

edds commented Mar 3, 2023

We're seeing quite a few exceptions due to our Redis connection timing out when using this adaptor. Is there a way to configure it to send regular PING commands to Redis to keep connections alive?

cabol (Owner) commented Mar 3, 2023

Hey 👋 !!

Unfortunately, there is no configuration to send a PING automatically; you have to implement it in your app, perhaps with a process (GenServer) that sends PINGs from time to time using MyApp.MyRedisCache.command!("some-rand-key", ["PING"]) (BTW, the random key is to ensure all connections in the pool can send the PING).
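
For instance, something along these lines could work (the module name, interval, and key pattern are just illustrative, and it follows the command!/2 arity quoted above rather than anything the adapter prescribes):

```elixir
defmodule MyApp.RedisKeepalive do
  @moduledoc "Periodically PINGs Redis through the cache to keep pool connections warm."
  use GenServer

  # Illustrative default; pick something shorter than whatever idle timeout
  # is closing the connections on your side.
  @default_interval :timer.minutes(1)

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(opts) do
    interval = Keyword.get(opts, :interval, @default_interval)
    schedule(interval)
    {:ok, %{interval: interval}}
  end

  @impl true
  def handle_info(:ping, %{interval: interval} = state) do
    # Random key so the command is not always routed to the same pool
    # connection (it still does not guarantee every connection is hit).
    try do
      MyApp.MyRedisCache.command!("ping:#{:rand.uniform(1_000_000)}", ["PING"])
    rescue
      # Swallow transient connection errors; the next tick retries anyway.
      _ -> :ok
    end

    schedule(interval)
    {:noreply, state}
  end

  defp schedule(interval), do: Process.send_after(self(), :ping, interval)
end
```

You would then start it under your application's supervision tree after the cache itself, e.g. children = [MyApp.MyRedisCache, MyApp.RedisKeepalive].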

On the other hand, the adapter uses Redix underneath, so you can use the conn_opts option to configure Redix; perhaps you can tweak the connection config (https://hexdocs.pm/redix/reconnections.html).
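
For example, something like this in the config (illustrative values; double-check the option names against the nebulex_redis_adapter and Redix versions you run — socket_opts and backoff_* are standard Redix options, and keepalive is a :gen_tcp socket option):

```elixir
# config/config.exs (illustrative; verify option names against your versions)
import Config

config :my_app, MyApp.MyRedisCache,
  conn_opts: [
    host: "127.0.0.1",
    port: 6379,
    # Ask the OS to send TCP keepalives on the underlying socket.
    socket_opts: [keepalive: true],
    # Redix reconnection backoff, see https://hexdocs.pm/redix/reconnections.html
    backoff_initial: 500,
    backoff_max: 5_000
  ]
```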

Let me know if that helps, I will stay tuned, thanks!!

edds (Author) commented Mar 3, 2023

Unfortunately, I don't think Redix itself will help; from their docs:

If there are pending requests to Redix when a disconnection happens, the Redix functions will return {:error, %Redix.ConnectionError{reason: :disconnected}} to the caller. The caller is responsible to retry the request if interested.

So they are pushing the responsibility onto the consumer (which is this library) to retry the request on disconnection. Redix does hold up its end of the agreement though: we do see it re-establish the connection ready for the next cache request, which makes the timeout errors transient.
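
For now, the only stopgap I can see on our side is a small retry wrapper around cache calls; a rough sketch (the module name and attempt count are made up, not something the adapter provides):

```elixir
defmodule MyApp.CacheRetry do
  @moduledoc "Retries a cache call a couple of times when the Redix connection is closed."

  # Hypothetical usage: MyApp.CacheRetry.with_retry(fn -> MyApp.MyRedisCache.get("key") end)
  def with_retry(fun, attempts \\ 2) do
    fun.()
  rescue
    e in Redix.ConnectionError ->
      if attempts > 1 do
        # Redix re-establishes the connection on its own, so a short pause
        # plus one retry is usually enough for these transient errors.
        Process.sleep(50)
        with_retry(fun, attempts - 1)
      else
        reraise e, __STACKTRACE__
      end
  end
end
```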

BTW, the random key is to ensure all connections in the pool can send the PING

This feels quite wasteful, and it's why I was looking here for a solution, since knowing you've hit all the connections in the pool isn't guaranteed. In a different project I worked on, we wrapped the Redix connections in their own GenServer, which was responsible for passing requests down to Redix and also sending a PING if no commands came in within a timeout (roughly the pattern sketched below). I can't get in between this library and the Redix connections though, and it seems a shame to have to implement a new Nebulex adaptor to support it.
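
That pattern looked something like the following simplified sketch (assumed names and timeout, built directly on Redix rather than on this adapter):

```elixir
defmodule MyApp.RedixProxy do
  @moduledoc "Owns a Redix connection, forwards commands, and PINGs only when idle."
  use GenServer

  @idle_timeout :timer.seconds(30)  # illustrative value

  def start_link(redix_opts), do: GenServer.start_link(__MODULE__, redix_opts, name: __MODULE__)

  def command(cmd), do: GenServer.call(__MODULE__, {:command, cmd})

  @impl true
  def init(redix_opts) do
    {:ok, conn} = Redix.start_link(redix_opts)
    # The third tuple element is a GenServer idle timeout: if no message
    # arrives within @idle_timeout, handle_info/2 receives :timeout.
    {:ok, %{conn: conn}, @idle_timeout}
  end

  @impl true
  def handle_call({:command, cmd}, _from, %{conn: conn} = state) do
    # Any real traffic resets the idle window via the returned timeout.
    {:reply, Redix.command(conn, cmd), state, @idle_timeout}
  end

  @impl true
  def handle_info(:timeout, %{conn: conn} = state) do
    # No commands within the idle window, so keep the connection alive ourselves.
    _ = Redix.command(conn, ["PING"])
    {:noreply, state, @idle_timeout}
  end
end
```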

cabol (Owner) commented Mar 3, 2023

Yeah, I understand that, but unfortunately there is no immediate solution in the adapter itself. The adapter doesn't have a GenServer wrapping the connection process as you describe; it uses the Redix connection process directly. So doing something like "we wrapped the Redix connections in their own GenServer which was responsible for passing down the requests to Redix and also sending the PING if no commands came in within a timeout" would mean a big refactoring in the adapter. I'm not saying it's not worth it, in fact it makes a lot of sense, but that would be more of a mid/long-term feature. That's why I suggested implementing something simpler on top of the adapter.

edds (Author) commented Mar 3, 2023

👍 Thank you. I worried this might mean a big refactor at this end, but thought it was worth raising in case it was easy. I'll look into adding something at our end to mitigate this for the moment.

cabol (Owner) commented Mar 4, 2023

Yeah, this would be a nice feature. Since we are working on Nebulex v3, and there will be several changes and improvements in the Redis adapter regarding Redis Cluster and pool management, I will include it there, so let's keep the issue open. Thanks!!
