Add ZIO#cached #1361
Conversation
Oh, I see what you did. I was more thinking along the lines of (pseudo-code):

```scala
 * duration `d`. Also returns a canceler, which can be used to cancel the
 * recomputation.
 */
final def memoizeTimed(d: Duration): URIO[R with Clock, (IO[E, A], Canceler)] = {
```
The `Canceler` makes the API trickier to use. I'd suggest an alternate approach:

1. Go with `Ref[Option[Promise[...]]]` as the core backing type.
2. When requested, fill the ref with a new promise that will be completed with the result of the effect.
3. After the promise is completed, set a timer, and after the timer elapses, the ref will be cleared to `None`.
4. Every get will try to get the completion of the promise if it exists, but if it doesn't exist, it will go to (2).

This implementation will not require a canceler.

I would also call this method `cached` and not `memoizeTimed`.
Yes, updating the cache on client request if the time to live has expired is a much better approach than creating a background process to update the cache. Thanks for your feedback! This is ready for another review.
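The refresh-on-read idea can be sketched in plain Scala. Everything here is a hypothetical illustration, not the PR's API: the real design uses `RefM[Option[Promise[E, A]]]`, while this `TtlCache` class uses a synchronized cell holding an optional (deadline, value) pair, and the caller supplies the clock so the behavior is deterministic.

```scala
// Hypothetical plain-Scala sketch of refresh-on-read TTL caching (not ZIO).
// None = no cached value; Some((deadline, a)) = a value valid until deadline.
final class TtlCache[A](compute: () => A, ttlMillis: Long) {
  private var cell: Option[(Long, A)] = None

  // The caller passes in the current time, which keeps tests deterministic.
  def get(nowMillis: Long): A = synchronized {
    cell match {
      case Some((deadline, a)) if nowMillis < deadline => a // still fresh
      case _ =>                                             // missing or expired
        val a = compute()
        cell = Some((nowMillis + ttlMillis, a))
        a
    }
  }
}

var evaluations = 0
val cache = new TtlCache[Int](() => { evaluations += 1; evaluations }, 100L)
val a = cache.get(0L)   // computes
val b = cache.get(50L)  // within the TTL: served from the cache
val c = cache.get(150L) // TTL expired: recomputes on this request
```

Passing the clock explicitly mirrors why a `MockClock` makes the real test deterministic: expiry is decided by comparing times, not by sleeping.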
Looks super useful! I just have two suggestions to make the code shorter.
```scala
final def cached(timeToLive: Duration): ZIO[R with Clock, Nothing, IO[E, A]] = {

  def get(cache: RefM[Option[Promise[E, A]]]): ZIO[R with Clock, E, A] =
    cache.update {
```
You can use `updateSome` and only handle the `None` case.
Great suggestion!
```scala
@@ -305,6 +307,21 @@ class IOSpec(implicit ee: org.specs2.concurrent.ExecutionEnv) extends TestRuntim
      )
    }

    def testCached = {
      def incrementAndGet(ref: Ref[Int]): UIO[Int] = ref.modify(n => (n + 1, n + 1))
```
Using `update` instead of `modify`, you won't have to repeat `n + 1`.
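The difference can be illustrated with a minimal Ref-like cell. This `Cell` class is a hypothetical stand-in, not ZIO's `Ref` (and note that in later ZIO versions the value-returning variant is called `updateAndGet`):

```scala
// Hypothetical Ref-like cell illustrating modify vs. update (not ZIO's Ref).
final class Cell[A](private var a: A) {
  // modify returns an arbitrary result B, so returning the new value
  // forces the caller to write the update expression twice.
  def modify[B](f: A => (B, A)): B = synchronized {
    val (b, next) = f(a); a = next; b
  }
  // update applies f and hands back the new value directly.
  def update(f: A => A): A = synchronized { a = f(a); a }
}

val ref = new Cell[Int](0)
val viaModify = ref.modify(n => (n + 1, n + 1)) // n + 1 written twice
val viaUpdate = ref.update(_ + 1)               // no repetition
```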
Yes.
```scala
cache <- incrementAndGet(ref).cached(100.milliseconds)
a     <- cache
b     <- cache
_     <- clock.sleep(100.milliseconds)
```
This will be flaky due to non-determinism. We could use `MockClock` or add a `flaky` combinator to the test (it's defined in `RTSSpec`).
Yes, trying to test this is actually what led me to identify the issues with the `MockClock`. I used the `flaky` combinator for now. If we get the `MockClock` improvements in first we can switch it to use that, or otherwise we can do it subsequently.
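The `flaky` idea can be sketched as a simple retry wrapper. This helper and its signature are my illustration, not the actual `RTSSpec` definition: rerun a non-deterministic test up to a bounded number of times, passing if any single attempt passes.

```scala
// Hypothetical sketch of a "flaky" retry combinator (not the RTSSpec one):
// run the test up to `attempts` times, succeeding if any run succeeds.
def flaky(attempts: Int)(test: () => Boolean): Boolean =
  (1 to attempts).exists(_ => test())

// A test that fails on its first run but passes on the second:
var runs = 0
val passed = flaky(3)(() => { runs += 1; runs >= 2 })
```

Because `exists` short-circuits, the test stops rerunning as soon as one attempt passes.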
```scala
      _ <- self.to(p)
      _ <- p.await.delay(timeToLive).flatMap(_ => cache.set(None)).fork
    } yield Some(p)
  }.flatMap(_.get.await)
```
Looks great! There are only two ways to improve this:

1. Make the ref a tri-state (empty, pending, or full), so two fibers don't try to fill it at the same time.
2. Deal with interruption, i.e. ensure interruption of a `get` does not prevent the cache from ever being filled.

The first would necessitate the second, I think.

In any case, looks good to me, 👍 to ship!
I updated it to deal with interruption by using `uninterruptibleMask` so only the waiting on the promise is interruptible. Regarding your first point, I thought I had addressed this by using `RefM`. My thinking was that if two fibers called `get` at the same time, the first would create the promise, fork the fiber to fill it, and take a reference to that promise, and then the second would wait for the promise creation / forking and take a copy of the same promise. So the empty / pending / full states would correspond to none, some pending promise, and some completed promise. Does this work or am I messing something up here?
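The "pending promise" state described above can be sketched with the standard library. This is a hypothetical `scala.concurrent` analogue, not the ZIO implementation: the first caller installs a `Promise` and starts the computation; every later caller awaits the same promise, so the effect runs exactly once even under concurrent gets.

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.concurrent.{Await, ExecutionContext, Future, Promise}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Hypothetical stdlib analogue of the RefM[Option[Promise[...]]] design.
// None = empty; Some(pending promise) = in flight; Some(completed) = full.
val cell = new AtomicReference[Option[Promise[Int]]](None)

def get(compute: () => Int): Future[Int] = {
  val fresh = Promise[Int]()
  if (cell.compareAndSet(None, Some(fresh))) {
    // First caller wins the CAS and starts the computation exactly once.
    fresh.completeWith(Future(compute()))
    fresh.future
  } else
    // Later callers share whichever promise is already in the cell,
    // whether it is still pending or already completed.
    cell.get().get.future
}

var evaluations = 0
val first  = Await.result(get(() => { evaluations += 1; 42 }), 5.seconds)
val second = Await.result(get(() => { evaluations += 1; 42 }), 5.seconds)
```

The compare-and-set plays the role that `RefM`'s serialized updates play in the PR: only one fiber can move the cell from empty to pending.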
Ah, yes, you did. I forgot you were using `RefM`. Effects will be synchronized appropriately.
Awesome work!
Thanks!
@NeQuissimus Is this what you had in mind? There is a risk of a resource leak if the memoized value continues being updated when it is no longer needed, so I returned a canceler as well. I think the test takes too long, but I was getting flakiness if I used a shorter duration. We can refactor to use `MockClock` when we get #1360 in.