Connection to Redis Cluster in Minikube #21

Closed
ConstBur opened this issue Dec 27, 2021 · 7 comments
Labels: good first issue (Good for newcomers)

ConstBur commented Dec 27, 2021

Hello Alec, I have a problem connecting to a local Redis cluster created in Minikube. If the problem isn't on Fred's side, let me know :)

The setup is relatively simple: there's a single LoadBalancer exposed via minikube tunnel that redirects to a cluster node, and the cluster itself isn't accessible from outside (so the nodes can only talk to each other and to the load balancer). Here are the commands I used:

Running redis-cli on the host against the load balancer IP 10.102.101.213 like so:

redis-cli -c -h 10.102.101.213 -a $REDIS_PASSWORD CLUSTER NODES
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
aceb5bf17a706440a297c5be764f4e4b6a60eb61 172.17.0.7:6379@16379 slave 16da11b0423cafe75261b5adbc2eb13a90358cc5 0 1640566876716 2 connected
a67bf5b0d784c9be05fba218bdc456b484f37c86 172.17.0.2:6379@16379 myself,slave 2008db068b8a58a28595b37060d52d3736d04a6d 0 1640566873000 1 connected
a10e1fe9800e7fb2a376f5ec5e1734658b15c6c9 172.17.0.5:6379@16379 master - 0 1640566875000 3 connected 10923-16383
2008db068b8a58a28595b37060d52d3736d04a6d 172.17.0.6:6379@16379 master - 0 1640566874000 1 connected 0-5460
53cf7c4159f5896deb1c42478c82a6404ac5f56e 172.17.0.3:6379@16379 slave a10e1fe9800e7fb2a376f5ec5e1734658b15c6c9 0 1640566876000 3 connected
16da11b0423cafe75261b5adbc2eb13a90358cc5 :0@0 master,noaddr - 1640566845731 1640566845730 2 disconnected 5461-10922

Everything works as expected.

Trying to connect to the same cluster with Fred (the env variable REDIS_URI is the LoadBalancer IP):

let config = RedisConfig {
    server: ServerConfig::new_clustered(vec![(dotenv!("REDIS_URI").to_string(), 6379u16)]),
    fail_fast: true,
    pipeline: true,
    blocking: Blocking::Block,
    username: None,
    password: Some(dotenv!("REDIS_PASSWD").to_string()),
    tls: None,
    tracing: false,
};
println!("Creating client...");
let client = RedisClient::new(config);
println!("Created client!");

let policy = ReconnectPolicy::default();

// log any connection errors
tokio::spawn(client.on_error().for_each(|e| async move {
    println!("Client received connection error: {:?}", e);
}));

println!("Checked errors...");

tokio::spawn(client.on_reconnect().for_each(move |client| async move {
    println!("Client {} reconnected.", client.id());
    // select the database each time we connect or reconnect
    let _ = client.select(REDIS_SESSION_DB).await;
}));

println!("Checked reconnections...");

client.connect(Some(policy));
println!("Trying to connect Redis...");
client.wait_for_connect().await.unwrap();
println!("Redis connected!");

This throws the following error:

Client received connection error: Redis Error - kind: IO, details: Os { code: 113, kind: HostUnreachable, message: "No route to host" }

Given that the info about the cluster's nodes is gathered with CLUSTER NODES (per https://docs.rs/fred/latest/fred/types/enum.ServerConfig.html#variant.Clustered), I assume passing a single IP is fine.
I guess this is some kind of DNS/redirect resolution problem. I see that you're currently working on supporting custom DNS resolvers in the client; could this be related?

Keep up the great work, this is probably the best Redis driver for Rust at the moment!

P.S. When do you plan to release the Streams functionality, in terms of timing?

ConstBur (Author) commented Dec 27, 2021

Here are the commands and the Helm .yaml file I used, if you'd like to replicate my setup:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis-name -f redis.yaml bitnami/redis-cluster

(Helm chart link: https://artifacthub.io/packages/helm/bitnami/redis-cluster)
redis.yaml

aembke (Owner) commented Dec 27, 2021

Hi @ConstBur, thanks for the kind words.

(Editing this comment after learning that the docker driver was being used...)

See the recent comment on minikube and clusters.

In terms of stream support - I'm looking to get that released in the next couple of weeks. I'm almost done with 5.0.0 (currently on the feat/resp3-support branch), and streams are next up right after that. Version 5 is a major rewrite that uses traits instead of my unsustainable attempts to get by with Deref to dedup functionality, and it should set up a nice foundation for both low-level X* command support and a higher-level client that looks more like a Kafka client.

aembke added the good first issue label Dec 27, 2021
aembke (Owner) commented Dec 27, 2021

TLDR: I think the issue is due to the IP addresses from the CLUSTER NODES response only being reachable from inside minikube (where the cluster is running).

In terms of fixing or working around it (in order of complexity):

  1. Switch to a centralized server instead of a cluster inside minikube (this is what we do at work; see the sketch after this list). We then use clusters for the "real" environments where the app layer is completely inside the k8s cluster.
  2. Write a script such that you can work outside minikube but build and run inside minikube on demand. One of my coworkers did this and it was... not easy.
  3. Do something fancy to map those 172.17.0.0/16 addresses back to the minikube/virtualbox network interface as seen by the host (see the route example after this list).
  4. Change the load balancer to rewrite IP addresses. This is likely very complicated though and requires intelligent modifications to payloads in-flight. It also might not even work if ports collide in the CLUSTER NODES response.
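
For option 1, here's a minimal sketch (it reuses the config shape from the snippet above; the host and port are placeholders for wherever the centralized server is exposed):

let config = RedisConfig {
    server: ServerConfig::new_centralized("10.102.101.213", 6379),
    ..RedisConfig::default()
};

For option 3, something like this on the Linux host might be a starting point (untested; the 172.17.0.0/16 pod subnet is an assumption, check your CNI config):

sudo ip route add 172.17.0.0/16 via $(minikube ip)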

In the meantime I'll look into a one-size-fits-all solution that can be added to Fred if possible, since this is probably a common issue. Redis is uniquely ill-suited for running inside a virtual network where the client is outside that network with limited access to the network inside minikube.

At the very least I should probably add a bit to the README about why Redis clusters and minikube can be problematic.

ConstBur (Author) commented

Wow, that's a very detailed response, thank you very much!
I run minikube with the Docker driver on Linux, but I guess that shouldn't matter for network-related problems like this. Looks like your diagnosis is right; there seems to be no other reason why Fred can't connect to the cluster.

I've tried adding RUST_LOG=fred=trace to cargo run, but strangely no tracing logs actually appeared. Maybe I did something wrong; still investigating...

I have a server that I can use for hosting Kubernetes, so I'll try setting that up in the meantime.

Should I leave this issue open until there's a solution found/README note written?

aembke (Owner) commented Dec 28, 2021

Oh, that's good news if you're using the docker driver on Linux. You have much more control over how the networking is set up in that case. You could probably dockerize the build command you use and everything would likely work (assuming you run the docker command on the same network as your minikube cluster).
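
Something like this could be a starting point (untested; the network name, image tag, and mount path are assumptions):

docker run --rm -it --network minikube \
    -v "$PWD":/usr/src/app -w /usr/src/app \
    rust:1.57 cargo run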

Regarding the logs issue - try adding log and pretty_env_logger to your Cargo.toml, then make sure you have pretty_env_logger::init() at the top of your main function.
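
A minimal sketch (the crate versions are assumptions; adjust as needed):

// Cargo.toml:
//   log = "0.4"
//   pretty_env_logger = "0.4"

#[tokio::main]
async fn main() {
    // initialize the logger before creating the client so RUST_LOG=fred=trace is picked up
    pretty_env_logger::init();
    // ... build the config and connect as in your snippet above
}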

I'll edit my comment to reflect the fact that you're using docker, since things are a bit different in that case.

In the end, though, the general idea of using a central load balancer with Redis tends not to work very well. Redis will tell clients to connect to specific IP addresses, and when you're using any form of reverse proxy that tends to break. Elasticache (the AWS managed Redis service) does quite a bit of work to make this smooth for callers, but from what I understand they had to customize both Redis and their load balancer to do what you're looking to do.
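
For example, asking a cluster node for a key it doesn't own returns a redirection to the owning node's internal address (a hypothetical exchange through the load balancer; the slot and IP are illustrative, based on the CLUSTER NODES output above):

10.102.101.213:6379> GET foo
(error) MOVED 12182 172.17.0.5:6379

A cluster-aware client is then expected to connect to 172.17.0.5:6379 directly, which isn't routable from outside minikube.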

aembke (Owner) commented Dec 29, 2021

@ConstBur I wrote two docs that will probably help answer some of your questions. They're currently on the feat/resp3-support branch, but should be in main within a week or so.

https://github.com/aembke/fred.rs/blob/feat/resp3-support/FAQ.md
https://github.com/aembke/fred.rs/blob/feat/resp3-support/tests/minikube.md

Also, I just went ahead and included the streams interface in this next release, so that should be up on crates.io in the near future.

ConstBur (Author) commented

Holy crap, you explained everything like I'm 5 and in great detail at the same time! Alright, I'm definitely switching to a centralized server then, since I can.

Logs now work, but not much interesting appears in them:

DEBUG fred::multiplexer::commands > fred-QI3tx0wN23: Initializing connections...
DEBUG fred::protocol::connection  > fred-QI3tx0wN23: Attempting to read cluster state from 10.102.232.81:6379
TRACE fred::protocol::types       > fred-QI3tx0wN23: Using 10.102.232.81 among 1 possible socket addresses for 10.102.232.81:6379
DEBUG fred::protocol::connection  > fred-QI3tx0wN23: Error creating connection to 10.102.232.81:6379 => Redis Error - kind: IO, details: Os { code: 110, kind: TimedOut, message: "Connection timed out" }
DEBUG fred::multiplexer::utils    > fred-QI3tx0wN23: Emitting connect error: Redis Error - kind: Unknown, details: Could not read cluster state from any known node in the cluster.
DEBUG fred::multiplexer::utils    > fred-QI3tx0wN23: Emitting connect error: Redis Error - kind: Unknown, details: Could not read cluster state from any known node in the cluster.

I'll close this issue then, everything's clear now. Can't wait for the next release!
Thanks very much again for the docs and streams! :)

ConstBur mentioned this issue Feb 23, 2022