
Improvement: ReadOnlySlave #830

Open

allanwax opened this issue Dec 22, 2014 · 17 comments

Comments
@allanwax

https://www.dropbox.com/s/nnei6kz9dudoz3l/ReadOnlySlave.java?dl=0

Here's a first cut at providing functionality to send reads to the slaves while keeping the masters available for writes. It falls back to reading from the cluster normally if there is a problem, and it refreshes the master/slave configuration periodically to handle failovers and similar events. It is possible that values will not be fetched if keys have moved during key migration or configuration updates, because the commands are going to the slaves. Users need to be aware that if keys or configuration are in flux, results are not guaranteed.

This is a first cut, and probably not all read operations have been implemented. While a smoke test has been done, this has by no means undergone rigorous testing.

The key to making all this work is issuing a READONLY command before the operation. It is being done with Lua since Jedis does not currently support READONLY. None of this works without READONLY, and READONLY is apparently only effective in the session that invoked it.

@allanwax
Author

Minor fix to the code above:

// Converts a Lua script's flat reply (member, score, member, score, ...)
// into an ordered set of Jedis Tuple objects.
private Set<Tuple> setOfTupleResult(Jedis jedis, String script) {
    @SuppressWarnings("unchecked")
    final List<String> members = (List<String>) jedis.eval(script);
    Set<Tuple> result = new LinkedHashSet<>();
    Iterator<String> iterator = members.iterator();
    while (iterator.hasNext()) {
        // Consecutive elements come in pairs: a member followed by its score.
        result.add(new Tuple(iterator.next(), Double.valueOf(iterator.next())));
    }
    return result;
}
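
For context, a hedged usage sketch (not from the original file): the helper expects a script whose reply alternates member and score, as ZRANGE ... WITHSCORES produces. The connection details and key name below are illustrative only.

// Illustrative only; assumes a Jedis connection to a node that can serve the key.
Jedis jedis = new Jedis("127.0.0.1", 7000);
String script = "return redis.call('zrange', 'myzset', 0, -1, 'WITHSCORES')";
Set<Tuple> tuples = setOfTupleResult(jedis, script); // pairs up member/score entries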

@allanwax
Author

For the future:

The above body of code uses Lua to accomplish its goals. When READONLY is implemented in Jedis, the actual execution part (after figuring out where to send the command) becomes a pipelined request.

// ...
// Sketch of the future API: jedis is a connection to the slave for the key's slot.
Pipeline pipeline = jedis.pipelined();
pipeline.readonly();          // hypothetical, once Jedis supports READONLY
pipeline.get(key);            // the actual read command(s)
pipeline.sync();              // with appropriate fetching of results
// ...

@HeartSaVioR
Contributor

Please follow up on https://groups.google.com/forum/#!topic/jedis_redis/u6j8slokO3E and https://groups.google.com/d/msg/redis-db/4I0ELYnf3bk/Lrctk0ULm6AJ.

tl;dr: we can't use pipelining with Redis Cluster. Instead we can use transactions, but they should not be pipelined; they should be batched.

Redis Cluster forces the client to be smarter in order to keep Redis Cluster itself simpler, and there are many limitations compared to standalone Redis that you should check out.

@HeartSaVioR
Contributor

So I'd rather not adopt READONLY while redis/redis#2216 is still open and antirez hasn't come up with a solution.

@allanwax
Author

Pipelining is a secondary issue here. The main issue is using the slaves more efficiently.

@HeartSaVioR
Contributor

You can run a benchmark to compare the throughput of plain request/response against pipelining; when I tested it, the pipelined benchmark was at least 10x faster.
If you really want throughput, making JedisCluster support batching is the first issue.
But if you really want to run OLAP-like commands (I mean long-running commands), using slaves is still a valid point.

I'm still afraid that Redis Sentinel and Redis Cluster don't take care of slaves as well as masters, so connection failures could come up more easily than with masters.
What action is Jedis supposed to take then?

@allanwax
Author

The original reference to a pipeline above was in reference to the code linked in the first entry. This issue is not about pipelining; please treat that mention as merely a future improvement for the code, offered as an alternative to the Lua script that currently exists in what I posted. This issue is about using the slaves more efficiently.

@HeartSaVioR
Contributor

I just said that we may not need to use slaves for queries, because with batch mode we can query the masters much faster.
But it's OK; maybe I should stop being annoying.

Btw, I'm still not interested in using 'unstable' slaves (which applies to Sentinel as well), so I'd rather not adopt READONLY unless antirez comes up with a solution, same as before.

@taer
Contributor

taer commented Feb 18, 2015

We also have a need to query a slave for load splitting, and our use case is tolerant of stale data. I added a READONLY command to BinaryClient, which solved our problem, and pushed that up to the Jedis class. I have code similar to @allanwax's to get master/slave/slot information. Our use case is actually connecting directly to a redis-cluster slave node via loopback and a plain JedisPool, then ensuring we only ask for data the slave knows about, but the idea is the same.

The problem is that READONLY only applies to cluster-mode Redis servers, and is only necessary if you're using the Jedis class directly instead of JedisCluster. I think a clean option for readonly() would be to return a boolean instead of raising an error. This way, the JedisCluster implementation of readonly() could simply return true, and non-cluster Redis clients would get false if they called it incorrectly.

@allanwax Your Lua script doesn't seem to work against Redis rc3. It seems Lua doesn't see that flag when it executes.

127.0.0.1:7000> hvals data:794
(error) MOVED 13491 10.37.110.68:7000
127.0.0.1:7000> eval "redis.call('readonly'); return redis.call('hvals', KEYS[1])" 1 data:794
(error) MOVED 13491 10.37.110.68:7000
127.0.0.1:7000> readonly 
OK
127.0.0.1:7000> hvals data:794
1) "hi"
127.0.0.1:7000> 
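
A minimal sketch of the boolean-returning readonly() idea above, written as a caller-side wrapper; it assumes a Jedis connection that already exposes a readonly() call (which is exactly what is being added here), and the class and method names are illustrative only.

// Hypothetical sketch only; not an actual Jedis API change.
// Turns the READONLY error from a non-cluster server into a boolean result.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisDataException;

class ReadOnlyProbe {
    /** Returns true if the node accepted READONLY, false if it rejected it. */
    static boolean tryReadonly(Jedis jedis) {
        try {
            jedis.readonly();   // only meaningful against a cluster-mode node
            return true;
        } catch (JedisDataException e) {
            // A non-cluster server replies with an error such as
            // "ERR This instance has cluster support disabled".
            return false;
        }
    }
}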

@taer
Contributor

taer commented Feb 18, 2015

Here is the initial pass at adding readonly: 6020bdb6af57848c1985f1aeb0c0494bef9d3810.
I could submit a pull request for this.

@Yanson

Yanson commented Jul 6, 2015

@allanwax What's the status of this issue? The original Dropbox link is not found.

@allanwax
Author

allanwax commented Jul 6, 2015

I will put a new link up. It may have been deleted in housecleaning since it's been a while.

@allanwax
Author

allanwax commented Jul 6, 2015

@Yanson https://www.dropbox.com/s/nnei6kz9dudoz3l/ReadOnlySlave.java?dl=0

Notes: This needs to be improved and armoured against failure.

@DanielYWoo

We have the same need for read load splitting. We have two slaves for each master but we cannot read from the slaves, even though stale data would be acceptable.
The only problem is that when a master/slave failover happens, or nodes are added or removed, the JedisCluster client will receive MOVED responses frequently; eventually JedisCluster will catch up with the new cluster nodes/slots information. I think that's acceptable, right?

@allanwax
Author

In the code I posted a while back (if I remember all this right), two situations are handled. If reading from a slave fails, the fallback is to read from the master. The second task is to periodically update the master/slave relationships and then re-attempt reading from the slaves.

Reading from the slaves helps with speed, but the task also needs to complete; that is the reason the fallback goes back to the master.
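
As a minimal sketch of that shape (not the actual posted code): it assumes a JedisCluster handle for the normal path and a plain Jedis connection to a slave that has already been sent READONLY; the class and field names are illustrative.

// Hypothetical sketch of the fallback described above.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisCluster;

class SlaveFirstReader {
    private final JedisCluster cluster;   // normal cluster client (masters)
    private volatile Jedis slaveForSlot;  // refreshed periodically from the cluster topology

    SlaveFirstReader(JedisCluster cluster, Jedis initialSlave) {
        this.cluster = cluster;
        this.slaveForSlot = initialSlave;
    }

    String get(String key) {
        try {
            // The slave connection must already have issued READONLY.
            return slaveForSlot.get(key);
        } catch (Exception e) {
            // Any slave-side failure (MOVED, connection loss, ...) falls back to
            // the normal cluster path so the task still completes.
            return cluster.get(key);
        }
    }
}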


This issue is marked stale. It will be closed in 30 days if it is not updated.

@github-actions github-actions bot added the stale label Feb 15, 2024
@sazzad16 sazzad16 removed the stale label Feb 15, 2024