resolves #899 Flaky ZManaged tests #905
Conversation
private def zipParFailReservationRelease =
  unsafeRun {
    for {
      reserveLatch <- Promise.make[Nothing, Unit]
There is a withLatch combinator courtesy of @regiskuckaertz that we may want to take a look at.
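For reference, a minimal sketch of what such a latch-based helper could look like, assuming a ZIO 1.x-style API (this is an illustration, not the actual `withLatch` implementation):

```scala
import zio._

// Hypothetical sketch, not the real `withLatch`: the effect under test
// receives a `release` action it can invoke to signal readiness, and the
// helper blocks until that signal arrives instead of sleeping for a
// fixed duration.
def withLatch[R, E, A](f: UIO[Unit] => ZIO[R, E, A]): ZIO[R, E, A] =
  for {
    latch  <- Promise.make[Nothing, Unit]
    result <- f(latch.succeed(()).unit) // hand the effect a way to open the latch
    _      <- latch.await               // proceed only once the latch is open
  } yield result
```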
Just took a look at it.
Looks awesome and we should definitely use it 👍
The question is whether I should update this PR or make another one where everything is migrated over to withLatch. I would prefer the latter option, as we would immediately profit from reduced flakiness 😁
Then let's do the latter!
Cool 👍
What still needs to be changed in this PR?
unsafeRun {
  managed.timeout(1.millisecond).use(res => ZIO.succeed(res must be_===(None)))
  for {
    latch <- Promise.make[Nothing, Unit]
Great to see all the sleeps disappear! 🎉
)
unsafeRun {
  managed.timeout(100.millisecond).use(res => ZIO.succeed((res must be_===(None)) && (release must be_===(1))))
  managed.timeout(50.millisecond).use(res => ZIO.succeed(res must be_===(None)))
These sleeps will be finicky. Even though 50 < 60 theoretically, in reality that won't always be the case. Is there another way we could structure this test?
This one is structured the same way, in that it expects the execution to take longer than 50 milliseconds.
But after giving this a second thought, this test is always going to be nondeterministic. I'll remove it for now.
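For illustration, one way to make such a timeout test deterministic (a sketch assuming a ZIO 1.x-style API; the names are not from this PR) is to block acquisition on a promise that is never completed, so the timeout must fire regardless of scheduling:

```scala
import zio._
import zio.duration._

// Sketch: the acquire step blocks forever, so `timeout` deterministically
// yields None instead of racing a 50 ms timeout against a 60 ms sleep.
val deterministicTimeout =
  for {
    never   <- Promise.make[Nothing, Unit]
    managed  = ZManaged.make(never.await)(_ => ZIO.unit) // acquire never finishes
    result  <- managed.timeout(10.milliseconds).use(res => ZIO.succeed(res.isEmpty))
  } yield result
```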
)
unsafeRun {
  managed.timed.use { case (duration, _) => ZIO.succeed((duration >= 200.milliseconds) must beTrue) }
  managed.timed.use { case (duration, _) => ZIO.succeed(duration must be_>=(40.milliseconds)) }
This one looks more reasonable since sleeping for 20 ms + 20 ms should always involve the passage of >= 40 ms of time.
Should, of course. 😊
Hopefully :D. You never know with clocks.
This one will probably be prone to quite a lot of false negatives, though. Should we just remove it?
I have never seen a false positive here, because scheduling + overhead will never come up short. You could be conservative and do >= 39, but I don't think you have to worry about this one.
Think. 😄
I meant that if the actual code is buggy, the test would have a good chance of still passing, even if only reserve or acquire was measured, etc.
Ah, in that case the test has marginal value; maybe best to either delete it, or run it a bunch of times like a few other tests in RTSSpec.
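For illustration, repeating the assertion could look roughly like this (a sketch; `managed` is the resource from the test above, and the repeat count is arbitrary):

```scala
import zio._
import zio.duration._

// Run the timing-sensitive assertion many times; an implementation that
// only measured part of the acquisition would be far more likely to
// produce at least one duration below the 40 ms lower bound.
val repeated =
  ZIO
    .foreach(1 to 100) { _ =>
      managed.timed.use { case (duration, _) => ZIO.succeed(duration >= 40.milliseconds) }
    }
    .map(_.forall(identity))
```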
A few minor comments, but this looks fantastic overall. A huge number of sources of nondeterminism are now gone. 🎉
@@ -481,6 +482,8 @@ final case class ZManaged[-R, +E, +A](reserve: ZIO[R, E, Reservation[R, E, A]])
  timeoutReservation(reserve, d).map {
    case Some((spentTime, Reservation(acquire, release))) if spentTime < d =>
      Reservation(acquire.timeout(Duration.fromNanos(d.toNanos - spentTime.toNanos)), release)
    case Some((_, Reservation(_, release))) =>
Fixed a possible leak here
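The body of the new case is cut off in the diff above; the gist (an illustrative reconstruction, not necessarily the verbatim PR code) is that a reservation which only became available after the deadline must still carry its release action forward:

```scala
// Sketch: acquire nothing, but keep the original release so the
// already-reserved resource is not leaked once the time budget is spent.
case Some((_, Reservation(_, release))) =>
  Reservation(ZIO.succeed(None), release)
```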
Nice. 👍
@@ -465,14 +465,15 @@ final case class ZManaged[-R, +E, +A](reserve: ZIO[R, E, Reservation[R, E, A]])
    )
  }, {
    case (_, leftFiber) =>
      leftFiber.interrupt
I am not sure about this one.
The way it is right now, this would interrupt the reservation step of Managed, which is inconsistent with the rest of the API. Furthermore, awaiting the result of the interruption could lead to unintuitive behavior when the reservation is uninterruptible.
I believe the updated implementation is a nicer way of handling this. @jdegoes what is your opinion?
Reservation should never be interrupted, and by design, is generally not interruptible anyway. So yes, we should await. 👍
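So the cleanup branch shown in the diff above awaits instead of interrupting, along these lines (a simplified sketch, not the exact PR code):

```scala
// On failure of the other side, wait for the in-flight reservation to
// complete rather than interrupting it: reservations are uninterruptible
// by design, so awaiting makes the intended semantics explicit and lets
// the finished reservation be released properly.
case (_, leftFiber) =>
  leftFiber.await
```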
@mschuwalow Let me know when this is ready for a final pass!
Thanks for your work here! Merging now so we can help remove spurious test failures sooner.