Intermittent test failure: Test_Client_InsertTriggersImmediateWork
#213
Comments
…test (#214) Some very, very small logging improvements that log when the notifier has started and stopped, like we already have for many other services. This is an attempt to help diagnose a future CI failure of #213, since I'm completely unable to reproduce this locally regardless of iteration count. Also, augment a log line in the client so it's prefixed with its service name and contains a `client_id`, which is useful here because it helps indicate which client ID was elected leader from logs alone.
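As a rough illustration (not River's actual code; the type and field names below are hypothetical), the change described amounts to logging the notifier's start/stop and attaching the service name and `client_id` to client log lines:

```go
package main

import (
	"context"
	"log/slog"
	"os"
)

type notifier struct {
	logger *slog.Logger
}

func (n *notifier) Run(ctx context.Context) {
	// Log at startup and shutdown so CI logs show whether the notifier was
	// actually running when the test's insert happened.
	n.logger.Info("notifier: run loop started")
	defer n.logger.Info("notifier: run loop stopped")

	<-ctx.Done() // placeholder for the real listen/notify loop
}

type client struct {
	id     string
	logger *slog.Logger
}

func (c *client) logLeadershipChange(isLeader bool) {
	// Prefix with the service name and attach client_id so logs alone can
	// show which client ID was elected leader.
	c.logger.Info("client: leadership change", "client_id", c.id, "is_leader", isLeader)
}

func main() {
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))

	c := &client{id: "client-1", logger: logger}
	c.logLeadershipChange(true)

	ctx, cancel := context.WithCancel(context.Background())
	cancel() // stop immediately so the sketch exits
	n := &notifier{logger: logger}
	n.Run(ctx)
}
```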
I was able to get one failure of this locally. There are some odd things about the ordering of operations in this test that I'm looking into.
I’ve got some more info on this flaky test. I added some more logging in the test callback function, as well as in the
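A minimal sketch of the kind of logging added in the test callback, assuming a hypothetical test shape (the real test triggers the callback by inserting a job rather than calling it directly):

```go
package example

import (
	"testing"
	"time"
)

// TestCallbackLoggingSketch is a hypothetical stand-in for the real test: it
// only shows the shape of the extra logging added inside the work callback so
// the ordering of events can be reconstructed from the test output.
func TestCallbackLoggingSketch(t *testing.T) {
	worked := make(chan struct{}, 1)

	callback := func() {
		// A timestamped log line makes it possible to line the callback up
		// against notifier start/stop and leader election log lines.
		t.Logf("work callback invoked at %s", time.Now().Format(time.RFC3339Nano))
		worked <- struct{}{}
	}

	// The real test triggers the callback via an inserted job; here we just
	// invoke it directly.
	callback()

	select {
	case <-worked:
	case <-time.After(5 * time.Second):
		t.Fatal("timed out waiting for work callback")
	}
}
```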
Nice investigating. Regarding (1):
Another strategy we can try here, if the connect timeout doesn't fix it, is to work our way in from the edges and put more test coverage (including stress tests) on the various components like the notifier, then the producer. There are almost certainly bugs and rough edges in there, and it'll help tease those out and hopefully improve overall resilience.
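A sketch of what such a component stress test could look like; the `Service` interface and the start/stop cycling below are assumptions for illustration, not River's actual API:

```go
package example

import (
	"context"
	"sync"
	"testing"
)

// Service is a stand-in interface for a start/stoppable component such as the
// notifier or producer.
type Service interface {
	Start(ctx context.Context) error
	Stop()
}

// stressStartStop rapidly cycles a service from many goroutines to surface
// races in startup and shutdown paths; intended to be run with `go test -race`.
func stressStartStop(t *testing.T, newService func() Service) {
	t.Helper()

	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()

			for j := 0; j < 100; j++ {
				svc := newService()
				ctx, cancel := context.WithCancel(context.Background())
				if err := svc.Start(ctx); err != nil {
					t.Errorf("start failed: %v", err)
					cancel()
					return
				}
				cancel()
				svc.Stop()
			}
		}()
	}
	wg.Wait()
}
```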
Haven't seen this one in days. Definitively fixed by #253.
Noticed this one in #212, but since it's not reproducible, and I've definitely seen it from time to time before, I don't believe it's related to my change. Even doing local runs of 5000 iterations with `-race`, I'm unable to repro.

Sample failure:

https://github.com/riverqueue/river/actions/runs/7955682330/job/21715052284?pr=212
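For reference, repeated local runs like those described above can be done with standard `go test` flags (the `./...` package pattern is an assumption about the repository layout):

```sh
go test ./... -run Test_Client_InsertTriggersImmediateWork -count=5000 -race
```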