Can redis key space notifications be pushed to the redis stream instead of pub/sub channel? #5766
Hello @jharishabh7 Only modules can do this for the time being.
Thank you @itamarhaber for the confirmation.
@jharishabh7 Can you describe the use case? Which keyspace notifications would you like to be pushed to streams?
@gkorland - Well, the use case is that we want to get notified when the state of a data structure in Redis changes (in our case, when something has been pushed to a sorted set). We thought we could use keyspace notifications for this purpose, but keyspace notifications internally use Redis pub/sub channels, which are fire-and-forget: if our client disconnects, we will lose some notifications. We also want to use something like a consumer group, where only one of the clients gets an event when we run multiple instances for scalability. So we thought it would be nice if keyspace notifications pushed events to a Redis stream instead of a pub/sub channel, since we want the channel to be reliable and scalable. That is why I was wondering if this is possible in Redis, and @itamarhaber said it is not possible without building a module for it. We are now exploring blocking pops to solve our use case, but I think this would be a nice add-on to Redis functionality.
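The consumer-group semantics asked for here (each notification handled by exactly one worker) can be sketched in a few lines of plain Python. This is a toy model, not a Redis API: `ToyStream` stands in for a stream, and the group-level cursor mimics what `XREADGROUP` with `>` does.

```python
# Toy model of the desired semantics: a durable log plus a consumer group
# in which each entry is handed to exactly one consumer. All names here
# are illustrative, nothing is a real Redis API.
class ToyStream:
    def __init__(self):
        self.entries = []                     # entries survive disconnects

    def add(self, payload):
        self.entries.append((len(self.entries) + 1, payload))  # (id, payload)

class ToyConsumerGroup:
    def __init__(self, stream):
        self.stream = stream
        self.cursor = 0                       # shared, group-level cursor

    def read(self, consumer):
        # Each entry is delivered to exactly one consumer in the group.
        if self.cursor >= len(self.stream.entries):
            return None
        entry = self.stream.entries[self.cursor]
        self.cursor += 1
        return consumer, entry

s = ToyStream()
for key in ("zset:a", "zset:b", "zset:c"):
    s.add({"event": "zadd", "key": key})

g = ToyConsumerGroup(s)
deliveries = [g.read(c) for c in ("worker-1", "worker-2", "worker-1")]
```

With real streams, pending entries and acknowledgements (`XACK`) would be layered on top of this; the sketch only shows the single-delivery property.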
I think we'll end up having such support soon, because it is trivial and it solves a huge problem with lost messages in keyspace notifications...
Thank you so much @antirez. That will be awesome!
Thanks for validating the usefulness of this. I've been thinking about it for quite some time. Another "internal" use case I have for streams is to publish (not by default, only if enabled) server metrics every second to some stream: memory usage, clients, iops-per-sec, and so on. Of course the stream will be capped, but it's simple to retain a few days' worth without much memory usage.
Yes, that would be awesome too. Let me know if I can contribute in any way and if you are open to receiving some PRs.
I'm working on a serverless stream consumer dispatcher. It would be very helpful if we could listen to the keyspace in a reliable way. In my case, that would allow users to create arbitrary streams and get consumer groups attached to them as soon as possible.
My use case here would be to watch a stream for key expiration events, then use RedisGears to listen to that stream and remove the corresponding document from a RediSearch index and/or RedisGraph.
Just to make you aware, right now the blocker on this is: what to do with Cluster? Users cannot rely on a single key name in the cluster, since the local instance may not serve the slot that key maps to. Also, do we want unified or non-unified events in that case (per single instance, or a single key on some instance)? So far I'm thinking about two very different solutions:
It's a big choice. There is this other possibility:
WRT 1 - that would potentially create a very hot master shard - smells off. Another, perhaps half-baked, possibility: 3.5 Have the cluster connect to another, external, Redis database (single or cluster, proxy or not, HA or not, whatevs) for that purpose. In the case of a single instance, it can use 'local' and perhaps some configurable DB number.
Is this still on the table? We would like to get |
Hello @andrascz
I think it is still on the table, but no action has been taken in this direction so far. That said, it is possible to achieve this with modules, for example RedisGears. Here's a function that acts as an expiry trigger in a cluster (or standalone) and adds the key names to a stream: EDIT: this is just a quick PoC snippet to show the principle - the real solution should be slightly more complex (TBD)
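A sketch of the kind of RedisGears (v1, Python API) function described: it registers a `KeysReader` on `expired` events and `XADD`s the key name to a stream. `GB` and `execute` are provided by the Gears runtime; the stubs below exist only so the sketch is self-contained outside Redis, and the stream name `expired-keys` is illustrative.

```python
# Sketch only: a RedisGears v1 function turning key-expiry events into
# stream entries. Inside Redis (via RG.PYEXECUTE), GB and execute come
# from the Gears runtime; the stubs below just make the sketch runnable
# on its own.
try:
    GB, execute
except NameError:
    issued = []                              # commands the sketch would run
    def execute(*args):
        issued.append(args)
    class GB:
        def __init__(self, reader): pass
        def foreach(self, f): self._f = f; return self
        def register(self, **kwargs): pass

def expired_to_stream(record):
    # record['key'] holds the name of the key that just expired
    execute('XADD', 'expired-keys', '*', 'key', record['key'])

# Fire on every 'expired' event, on the shard where the key lived.
GB('KeysReader').foreach(expired_to_stream).register(eventTypes=['expired'],
                                                     mode='sync',
                                                     readValue=False)
```

A consumer can then `XREADGROUP` from `expired-keys` on each shard to process expirations reliably.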
Thank you for the pointer and example. This is even more powerful than the notification: in our use case we would generate a new element in a Redis list for every expiry notification whose key matches a prefix. I can accomplish this with RedisGears alone by adding some extra logic to the executed Python code, so there is no need to read a stream from our services.
@itamarhaber That's a great solution, thanks for the post. I tried it but something is going on that I can't understand. |
Here is a draft design:
The above is easy to implement, but there are still some problems I haven't figured out (they concern replication and persistence; the key question is whether the keys in the meta database count as real data or not, since notifying a pub/sub channel doesn't involve real data):
To share one concern: one alternative is maybe to put this in the hands of the user in some way, i.e. the user explicitly calls some command. Anyway, just sharing one more concern; maybe someone will be able to come up with a winning idea.
This has been the subject of long sessions involving @oranagra @YaacovHazan @MeirShpilraien @guybe7 @itamarhaber @inbaryuval and others. I'm posting a brief summary of these discussions here for reference. The bottom line is that this gets more complex when we consider cluster environments and various scenarios, so very strong arguments about the value of this feature are needed to drive any further work.

Rationale - Why do it?

Redis already supports keyspace notifications to make it possible to track changes and operations on the server side. There are many use cases that can benefit from this mechanism, including:
The main limitation of the existing keyspace notifications mechanism is that it offers very few guarantees. Notifications are sent once to connected clients and are not stored anywhere, so a client that drops a connection may lose any arbitrary number of notifications.

Reliable KSN (RKSN)

The goal is to define a keyspace notifications mechanism which is reliable. The properties of this solution are:
RKSN storage and access

The RKSN properties are identical to what Streams provide, so we considered the option of simply using stream keys, which can be accessed by clients using existing stream commands. However, using regular stream keys creates a conflict with Redis Cluster, as the key name must match the locally assigned hash slots. Redis could override that in the case of a notifications stream key, but this would also impact cluster-aware clients, which would need to access the notifications stream key on all nodes regardless of hash slot topology.

Another alternative we considered was using a dedicated pseudo-database and

The final option was to store reliable notifications as special metadata that lives outside the keyspace. The interface still resembles streams, but there is a dedicated (e.g.,

Replication and persistence

RKSN data needs to be persisted into RDB, for several reasons:
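The hash-slot conflict mentioned above comes from how Redis Cluster maps keys to nodes: every key hashes to one of 16384 slots via CRC16 (XMODEM variant), honoring `{hash tags}`, and each node serves only its assigned slots. A minimal sketch of that mapping:

```python
# Redis Cluster key-to-slot mapping: CRC16/XMODEM of the key, modulo 16384.
# If the key contains a non-empty {hash tag}, only the tag is hashed, which
# is how multiple keys can be forced onto the same slot.
def crc16(data: bytes) -> int:
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    s = key.find('{')
    if s != -1:
        e = key.find('}', s + 1)
        if e != -1 and e != s + 1:           # non-empty hash tag found
            key = key[s + 1:e]
    return crc16(key.encode()) % 16384
```

This is why a fixed stream key name (say, a single notifications stream) cannot live on every node: it hashes to exactly one slot, owned by exactly one master.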
The stream of notifications includes additional metadata, such as a timestamp or stream entry ID, that provides more information about the operation and supports a client-side cursor mechanism. Because of this, notifications must also be propagated locally to AOF and explicitly replicated to replicas. On the replica side, RKSN should be inhibited when processing commands on the replication link.

Redis Cluster

Slot-level ordering

In a cluster environment there is no concept of total order between operations that take place on different nodes. Because hash slots may also migrate between nodes, RKSN can effectively guarantee total ordering of notifications only for a specific hash slot. Because of this, notifications should include an explicit hash slot identifier so clients can easily distinguish between ordered and unordered events.

RKSN aggregation

Cluster-aware clients need to read RKSN from all cluster nodes, as even for a single key there is no way to guarantee that the full RKSN history is available on a single node. Clients will be able to aggregate, order, and if necessary de-duplicate notifications received from different nodes.

Migration

The cluster key migration mechanism needs to be enhanced to also support migration of RKSN entries. This operation does not need to be atomic, as notifications may live on both source and target nodes during migration, but once a slot has been fully migrated we must guarantee the importing node has the full notification history.

Configuration

RKSN has a more significant impact on resources (memory in particular) and should have several configuration options to control that:
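The client-side aggregation described above could look roughly like this: collect per-node batches, de-duplicate by entry ID, and order only within each slot (ordering across slots is not guaranteed). The notification shape `(slot, entry_id, event)` is hypothetical.

```python
from collections import defaultdict

# Hypothetical notification shape: (slot, entry_id, event). Ordering is only
# meaningful within a single slot, so group by slot before sorting. Keying
# by entry_id de-duplicates notifications seen on more than one node, e.g.
# during a slot migration.
def aggregate(per_node_batches):
    by_slot = defaultdict(dict)
    for batch in per_node_batches:
        for slot, entry_id, event in batch:
            by_slot[slot][entry_id] = event
    return {slot: sorted(entries.items())    # ordered per slot by entry id
            for slot, entries in by_slot.items()}

node_a = [(12, 1, "set foo"), (12, 2, "del foo")]
node_b = [(12, 2, "del foo"),                # duplicate seen during migration
          (700, 1, "expire bar")]
merged = aggregate([node_a, node_b])
```

Real entry IDs would presumably be stream-style `ms-seq` pairs rather than small integers, but the merge logic is the same.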
Open issues

FLUSHALL

Should the

Writable replicas

We currently assume that replicas propagate received notifications and never generate local notifications for commands received on the replication link. Should writable replicas generate and propagate notifications for commands received from local users?
@yossigo Thanks for the detailed explanation. I would like to work on this. I see a few alternatives mentioned above. Is there a recommended approach already, or should I put up a proposal?
What's the current status of this? I would like to enable CDC-like use cases for my current Redis cluster.
The current status is summed up in the last big comment.
@YourTechBud Could you explain your use case a bit more? Currently, the notifications you would receive on a SET operation are the two below. They don't include the complete data (the value part) needed to build a CDC use case. How do you plan to tackle it?
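For reference, per the Redis keyspace-notifications documentation, a `SET` of key `foo` in database 0 produces two pub/sub messages: one on a `__keyspace@...__` channel (payload: the event name) and one on a `__keyevent@...__` channel (payload: the key name). Neither carries the value, which is the CDC limitation mentioned above:

```python
# How the two notification channels for a SET are named (db 0, key "foo").
db, key, event = 0, "foo", "set"

# Key-space channel: named after the key, payload is the event name ("set").
keyspace_channel = f"__keyspace@{db}__:{key}"
# Key-event channel: named after the event, payload is the key name ("foo").
keyevent_channel = f"__keyevent@{db}__:{event}"
```

Note that both require `notify-keyspace-events` to be configured, as notifications are disabled by default.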
Any news on this?
We have a requirement to get a notification on changes to a Redis data structure. Based on my research, I found that I can use Redis keyspace notifications for this. However, Redis keyspace notifications send events to a Redis pub/sub channel, which is fire-and-forget, i.e. once a client loses its connection, all events until the connection is up again are lost.
Redis Streams solve this problem. Also, I want to use the consumer group feature of Redis Streams. So is there any way that Redis keyspace notifications can be pushed to Redis Streams instead of a Redis pub/sub channel?
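The difference at stake here (fire-and-forget pub/sub vs. a replayable log) can be sketched in plain Python; nothing below is a Redis API, just an illustration of the two delivery models.

```python
# Pub/sub model: fire-and-forget. A disconnected client simply misses
# the message; there is nothing to replay.
def publish(clients, message):
    for c in clients:
        if c["connected"]:
            c["inbox"].append(message)

# Stream-like model: messages are stored with ids, so a client can keep a
# cursor (its last-seen id) and replay everything it missed on reconnect.
def append(log, message):
    log.append((len(log) + 1, message))      # (id, payload)

def read_after(log, last_id):
    return [(i, m) for i, m in log if i > last_id]

client = {"connected": False, "inbox": []}
publish([client], "zadd myset")              # lost: client was disconnected

log = []
append(log, "zadd myset")
append(log, "zadd mymap")
missed = read_after(log, last_id=0)          # replayed after reconnect
```

This replay-from-cursor behavior is exactly what `XREAD`/`XREADGROUP` provide for real streams, and what pub/sub based keyspace notifications lack.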
I would also like to contribute if this feature does not already exist.