Fix CI #3064
Conversation
I think I know at least a big part of the cause. Consider these flaky tests:

```python
def test_flaky_timeout(self):
    if random() < 0.5:
        time.sleep(20000)

def test_flaky_error(self):
    if random() < 0.5:
        raise Exception()
```

If you run with …

Now, the thing is: before #3059, we had arbitrary timeouts that threw an error after roughly 30 seconds, effectively turning timeouts into errors. So a quick and easy solution would be to revert #3059. IIRC the tests were flaky and failing sometimes before already; plus, in an ideal world, it wouldn't be necessary to re-run them at all. But we should probably try to get re-running to work first.

As for why tests are failing in the first place: from the log, it seems like there are a lot of different causes:
Seems like the mail is found on the server and then moved to the DeltaChat folder; this happens because … @link2xt, do you know if maybe the server doesn't send an …
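The error-vs-hang distinction above can be sketched with a small, self-contained rerun loop. This is a hypothetical helper for illustration only, not the pytest-rerunfailures API: a test that *raises* can simply be retried, while a test that *hangs* never returns control to the runner, so nothing can rerun it unless a timeout first converts the hang into an error.

```python
def run_with_reruns(test, reruns=2):
    """Retry `test` up to `reruns` extra times if it raises.

    Illustrative sketch only (hypothetical helper, not a real plugin API).
    Note that a test stuck in time.sleep(20000) never raises on its own,
    so this loop would block forever on it; a timeout mechanism is needed
    to turn the hang into an exception first.
    """
    for attempt in range(reruns + 1):
        try:
            test()
            return attempt  # how many reruns were needed before a pass
        except Exception:
            if attempt == reruns:
                raise

# Deterministic stand-in for `if random() < 0.5: raise`: fail once, then pass.
calls = {"n": 0}

def flaky_error():
    calls["n"] += 1
    if calls["n"] == 1:
        raise Exception("flaky")

print(run_with_reruns(flaky_error))  # prints 1: one rerun was enough
```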
There is a related issue, pytest-dev/pytest-rerunfailures#99, suggesting … Another maybe related issue: pytest-dev/pytest-rerunfailures#157
CopyUid is sent on the Inbox connection, but the Movebox connection should still be interrupted with Exists, because it could even come from a client on another device.
I just tried this out (my code from above with …)
Works for me with … in …
Nice, thanks, works for me too!
I pushed it as 6c6d47c with some description in the commit message.
Hopefully this is even more robust than before, because it will rerun all timeout-related failures rather than only our own manual timeouts.
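A minimal sketch of how the two pieces could fit together (illustrative configuration only, assuming pytest-timeout and pytest-rerunfailures are installed; the values below are made up for the example, not taken from 6c6d47c):

```ini
[pytest]
# pytest-timeout: convert a hanging test into a failure after 120 s
timeout = 120
# pytest-rerunfailures: rerun any failure, including timeout failures
addopts = --reruns 2
```

With this kind of setup, a hang first becomes a timeout failure and then falls under the same rerun policy as every other failure, which matches the "rerun all timeout-related failures" goal described above.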
We are still affected by pytest-dev/pytest-rerunfailures#157, though. When a worker crashes, it is replaced but forgets to do reruns.
#skip-changelog
The tests are flaky (sometimes they just run through, though; I repeatedly hit re-run: the first two runs succeeded, and in the third run six tests failed at once):
https://pipelines.actions.githubusercontent.com/0Cs9EsF2mwtr7icbiRvpCNrfoVF185ozSjOeo0Ale4nDoOrShu/_apis/pipelines/1/runs/8174/signedlogcontent/9?urlExpires=2022-02-07T20%3A40%3A06.1888390Z&urlSigningMethod=HMACV1&urlSignature=%2FO5JxNmTDQpKnBGVfL8wDKg7TVaN1ypDfmhQdepPyw4%3D