Retry status action #32
My stance is that most users will not need the count in their action. Keeping the same API means that users will be able to upgrade to 0.7 without any code change, which seems valuable. For the minority of users that *do* need count, they can expect to need to add code anyways.
Anticipating another likely request, getting more stats into the handler and the monadic action, we make a retry policy take a RetryStatus, which contains the retry number and the cumulative delay so far. We cannot include retries remaining or delay remaining due to the fact that RetryPolicy is a monad. I note that this changes many Int types to RetryStatus, so the effort to prevent breakage is for naught: the only people who could upgrade without changes are those who used `const` with no Int type annotations anywhere.
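A minimal sketch of the shape being discussed (field names follow this thread; the exact definition is whatever lands in `Control.Retry`, so treat this as illustrative, not the released API):

```haskell
-- Sketch only: the status record threaded through policies and handlers.
data RetryStatus = RetryStatus
  { rsIterNumber      :: Int        -- ^ Zero-based count of retries so far
  , rsCumulativeDelay :: Int        -- ^ Total delay slept so far, in microseconds
  , rsPreviousDelay   :: Maybe Int  -- ^ Delay before this attempt, Nothing on the first try
  }

-- A policy then inspects the whole status rather than a bare Int,
-- returning Nothing to give up or Just the next delay in microseconds.
newtype RetryPolicy = RetryPolicy { getRetryPolicy :: RetryStatus -> Maybe Int }
```

Because the policy only ever sees what has *already* happened, forward-looking fields like "retries remaining" cannot be computed without evaluating the rest of the policy, which is why they are excluded.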
This should be ready for review now. We decided not to export the RetryStatus constructor and instead export the fields and generic lenses. The lenses don't incur a dependency on lens and should be compatible with the common lens tooling out there. Because of the type change required by RetryStatus we had already lost the battle on breaking changes, so I opted not to add specialized versions of retrying, recovering, and recoverAll and to just make the change requested in #17.
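With the constructor hidden, client code reads the status through the exported field accessors (or the generic lenses). A hypothetical handler, assuming the field names used elsewhere in this thread:

```haskell
-- Sketch: logging handler reading RetryStatus via plain field accessors.
-- rsIterNumber / rsCumulativeDelay are the accessor names from this PR's
-- discussion; no lens dependency is needed for this style.
logRetry :: RetryStatus -> IO ()
logRetry s = putStrLn $
     "attempt "  ++ show (rsIterNumber s)
  ++ ", slept "  ++ show (rsCumulativeDelay s)
  ++ "us so far"
```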
```haskell
liftIO (threadDelay delay)
go $! RetryStatus { rsIterNumber      = rsIterNumber s + 1
                  , rsCumulativeDelay = rsCumulativeDelay s + delay
                  , rsPreviousDelay   = Just (maybe 0 (const delay) (rsPreviousDelay s)) }
```
I think these record updates accumulate thunks if the client code `f` doesn't force them. It would be better to make those fields strict.
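The mechanism, reduced to a self-contained example (this is an illustration of the general lazy-vs-strict field behaviour, not code from the PR):

```haskell
-- With a lazy field, each record update allocates an unevaluated
-- (n + 1) thunk pointing at the previous one, so a long retry loop
-- builds a chain of thunks that is only collapsed when someone
-- finally demands the value.
data Lazy = Lazy { lCount :: Int }

stepLazy :: Lazy -> Lazy
stepLazy s = Lazy { lCount = lCount s + 1 }    -- thunk chain grows

-- With a strict field (bang annotation), the addition is forced
-- as the record is constructed, so memory use stays flat.
data Strict = Strict { sCount :: !Int }

stepStrict :: Strict -> Strict
stepStrict s = Strict { sCount = sCount s + 1 } -- evaluated eagerly
```

Note that `$!` at the call site only forces the record to WHNF, i.e. the constructor itself; it does not force lazy fields inside it, which is why the bangs on the fields are what actually matter here.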
@maoe Just updated the PR to make all the fields on RetryStatus strict. Thanks!
Could we come up with some testing code to confirm no leaks occur? Maybe we retry something up to, say, 100K times with no delay to see if memory consumption grows?
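A test along those lines might look like the following sketch. It assumes the post-PR signatures discussed above (`retrying` taking a policy, a `RetryStatus`-aware predicate, and an action), plus `constantDelay` and `limitRetries` combinators; run it with `+RTS -s` and compare maximum residency with and without the strict fields:

```haskell
-- Sketch of a leak check: retry a no-op 100K times with zero delay.
-- Names and signatures are assumed from this PR's API discussion.
import Control.Retry (retrying, constantDelay, limitRetries)
import Data.Monoid ((<>))

main :: IO ()
main = do
  _ <- retrying (constantDelay 0 <> limitRetries 100000)
                (\_ _ -> return True)  -- always request another retry
                (\_ -> return ())      -- action ignores the status,
                                       -- so lazy fields would thunk up
  return ()
```

Because the action never touches the status, lazy fields would accumulate 100K nested thunks, which should show up clearly in the RTS residency figures.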
Just ran a little 100K-iteration test with and without the strictness annotations, and the difference was clear.

With the strictness annotations, 9MB total memory in use:
```
$ cabal exec -- ghc --make -rtsopts -prof -fprof-auto Main.hs && ./Main +RTS -sstderr
 132,045,960 bytes allocated in the heap
  23,428,040 bytes copied during GC
   4,272,224 bytes maximum residency (5 sample(s))
     192,416 bytes maximum slop
           9 MB total memory in use (0 MB lost due to fragmentation)
```
Without the strictness annotations, 38MB total memory in use:
```
$ cabal exec -- ghc --make -rtsopts -prof -fprof-auto Main.hs && ./Main +RTS -sstderr
[1 of 1] Compiling Main ( Main.hs, Main.o ) [Control.Retry changed]
Linking Main ...
 136,845,960 bytes allocated in the heap
  77,852,048 bytes copied during GC
  18,744,504 bytes maximum residency (7 sample(s))
     536,560 bytes maximum slop
          38 MB total memory in use (0 MB lost due to fragmentation)
```
Curious, why is Travis failing?
Thanks for taking care of all this @MichaelXavier!
For review and commentary.