"Redis query EventsByTag must not see new events after complete" test is not implemented properly #6
Hi, with the current implementation I would say that the tested behavior is the implemented one. The semantics of `currentEventsByTag` here allow events persisted after the stream is materialized to still be delivered before it completes. I agree this gives fewer guarantees to the user and may lead to apparently strange behaviors. It should be possible to implement the same semantics as the LevelDB backend, and I will investigate how this can be achieved. In any case, the documentation should be more explicit about the behavior. Thanks for reporting.
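For reference, a minimal sketch of the LevelDB-style semantics under discussion, from the user's point of view. The plugin id `"akka-persistence-redis.read-journal"` and the `Offset`-based signature are assumptions for this sketch, not confirmed plugin API:

```scala
import akka.actor.ActorSystem
import akka.persistence.query.{ Offset, PersistenceQuery }
import akka.persistence.query.scaladsl.CurrentEventsByTagQuery
import akka.stream.ActorMaterializer
import akka.stream.testkit.scaladsl.TestSink

implicit val system: ActorSystem = ActorSystem("example")
implicit val mat: ActorMaterializer = ActorMaterializer()

// The plugin id is an assumption for this sketch.
val journal = PersistenceQuery(system)
  .readJournalFor[CurrentEventsByTagQuery]("akka-persistence-redis.read-journal")

// With LevelDB-style semantics, a "current" query is a snapshot:
// events persisted after materialization must never be delivered.
val probe = journal
  .currentEventsByTag("fruit", Offset.noOffset)
  .runWith(TestSink.probe)

probe.request(10)
// ...expectNext(...) for the events already in the journal...
// persist a new "fruit"-tagged event here, after materialization
probe.expectComplete() // the late event must not show up
```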
All known implementations of `currentEventsByTag` capture the whole set of current events at request time.
Not necessarily: you can retrieve the data in batches up to a certain point in time (say, the request time). You can't always (or even want to) load all events into memory at once, e.g. you might want to keep the memory footprint as low as possible.
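A sketch of what that could look like, assuming hypothetical `readHighestSeqNr` and `readBatch` journal accessors (not the plugin's actual API): the upper bound is captured once at request time, so later events stay invisible, while only one batch is held in memory at a time.

```scala
import akka.NotUsed
import akka.stream.scaladsl.Source
import scala.concurrent.{ ExecutionContext, Future }

// Hypothetical journal accessors, standing in for Redis reads.
def readHighestSeqNr(tag: String): Future[Long] = ???
def readBatch(tag: String, from: Long, upTo: Long, max: Int): Future[Seq[Any]] = ???

def currentEventsByTagSketch(tag: String, batchSize: Int)(
    implicit ec: ExecutionContext): Source[Any, NotUsed] =
  Source
    .fromFuture(readHighestSeqNr(tag)) // snapshot the bound once, at request time
    .flatMapConcat { upTo =>
      Source.unfoldAsync(0L) { from =>
        if (from > upTo) Future.successful(None) // bound reached: complete
        else
          readBatch(tag, from, upTo, batchSize).map {
            case Seq() => None // journal exhausted before the bound
            case batch => Some((from + batch.size) -> batch)
          }
      }.mapConcat(_.toList) // only one batch is buffered at a time
    }
```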
I investigated, and there is an easy way to implement the same semantics as for LevelDB; I will work on it ASAP.
I've found the same problem for `currentEventsByPersistenceId`.
Yes, I implemented all the `current*` queries with the same approach, so they all share this behavior.
I see a similar issue in the `return remaining values after partial journal cleanup` test. You have requested two items but should have requested only one; the current stream should be completed right after delivering the whole buffer. Maybe you should change this line, https://github.com/safety-data/akka-persistence-redis/blob/master/src/main/scala/akka/persistence/query/journal/redis/EventsByPersistenceIdSource.scala#L213, so that the stage completes as soon as the buffer is empty.
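A minimal, self-contained sketch of the missing completion condition (not the plugin's actual code; `journalDrained` stands in for "nothing left to fetch from Redis"):

```scala
import akka.stream.{ Attributes, Outlet, SourceShape }
import akka.stream.stage.{ GraphStage, GraphStageLogic, OutHandler }
import scala.collection.mutable

// `events` stands in for whatever the stage has buffered from Redis.
final class CurrentEventsSketch(events: List[String]) extends GraphStage[SourceShape[String]] {
  val out: Outlet[String] = Outlet("CurrentEventsSketch.out")
  override val shape: SourceShape[String] = SourceShape(out)

  override def createLogic(attrs: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) with OutHandler {
      private val buffer = mutable.Queue(events: _*)
      private val journalDrained = true // a "current" query with no more data

      override def onPull(): Unit = deliver()

      private def deliver(): Unit = {
        if (buffer.nonEmpty && isAvailable(out))
          push(out, buffer.dequeue())
        // The condition the comment says is missing: complete right after
        // the whole buffer has been delivered, without waiting for another pull.
        if (buffer.isEmpty && journalDrained)
          completeStage()
      }

      setHandler(out, this)
    }
}
```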
Yes, it seems this condition is missing. I will do a rewrite and a complete review of the implementation and tests.
`a green cucumber` should be sent before line 111, not after: https://github.com/safety-data/akka-persistence-redis/blob/master/src/test/scala/akka/persistence/query/journal/redis/EventsByTagSpec.scala#L117. Right now the `currentEventsByTag` query could fetch the data from Redis more than one time.
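To make the consequence concrete, a hedged probe-based sketch (reusing the assumed `journal` setup from the first sketch above) of how a second fetch can leak a late event:

```scala
val probe2 = journal
  .currentEventsByTag("fruit", Offset.noOffset)
  .runWith(TestSink.probe)

probe2.request(2)   // first Redis fetch fills a small buffer
// persist "a green cucumber" tagged "fruit" here, mid-stream
probe2.request(100) // second fetch may now pick up the late event
// With snapshot semantics this must end in expectComplete(); with a
// second fetch, the event persisted after materialization can be
// delivered instead.
```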