Tests: random order can lead to test failures. #2825
Comments
The problem here is not the random test order. You can see a run where the tests passed with that seed. If you want to run the tests in the same order, use the same seed for every run; you can set it via minitest's `--seed` option. Also ensure you have given your build environment sufficient resources to run the tests, otherwise they may fail in mysterious ways. If you think you are running the tests with sufficient resources and you still have consistent failures, you can report those specific failures in issues here.
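For anyone following along, pinning the seed looks roughly like this (a sketch, assuming puma's `rake test` task is a standard `Rake::TestTask`, which forwards `TESTOPTS` to minitest; the seed value 1234 is just an example):

```sh
# Run the suite with a fixed minitest seed so test order is reproducible.
bundle exec rake test TESTOPTS="--seed=1234"

# Minitest also falls back to a SEED environment variable when --seed
# isn't passed, so this should be equivalent:
SEED=1234 bundle exec rake test
```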
Hey @dentarg, thanks for the information. For now I set a fixed seed and increased the soft limit for file descriptors (via `ulimit`). Unfortunately I still get occasional test failures with "Bad file descriptor". Does that mean I should increase the ulimit even more? Best regards
Please try raising it; honestly, I don't know exactly what resources are needed to run the tests. You could look at the specs of the hardware GitHub Actions uses to get a feeling for it.
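For context, raising the soft limit for open files looks like this (4096 is an arbitrary value; the change only applies to the current shell and processes started from it):

```sh
# Show the current soft limit for open file descriptors.
ulimit -Sn

# Raise the soft limit (it can go up to the hard limit, shown by
# `ulimit -Hn`), then run the tests from the same shell.
ulimit -Sn 4096
bundle exec rake test
```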
Looking at the GitHub Actions workflow ( https://github.com/puma/puma/blob/master/.github/workflows/non_mri.yml#L71 ) I don't see any resource changes.
@Segaja Locally, I'm seeing the same error and more in the same test file. All are `IOError: closed stream`.
@MSP-Greg I can try to run the tests against the changes in that PR later this week. I'm doing this because I'm packaging puma for Arch Linux ( https://archlinux.org/packages/community/x86_64/ruby-puma/ ). Arch Linux currently works with
@MSP-Greg I finally had time again to look into this. I can't apply the changes from #2830 directly, given how the package is built. However, when I recreate the patch myself locally, I can run the build/test process, and so far it feels more stable than without the change. In 5 runs with the change, I hit a failure only once.
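For other packagers, one way to test a PR against a non-git source tree is to fetch the patch GitHub generates for any pull request and apply it manually (a sketch; whether the hunks still apply cleanly depends on how far the packaged source has diverged, which is exactly the difficulty described above):

```sh
# GitHub serves a patch for any pull request at <PR URL>.patch.
curl -L -o 2830.patch https://github.com/puma/puma/pull/2830.patch

# From the root of the unpacked source tree, dry-run first to check
# that the hunks still match, then apply for real.
patch -p1 --dry-run < 2830.patch
patch -p1 < 2830.patch
```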
Original issue description
Minitest's default test order is random, which means the order in which the tests run is not deterministic between runs.
This can lead to failures in some runs.
If I rerun the tests a few times without changing any code, at some point they succeed.
This makes packaging puma for Arch Linux very difficult.
One solution could be to make minitest run the tests in a fixed order and make sure they pass in that order.
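For what it's worth, minitest has a built-in escape hatch for this (a minimal, self-contained sketch, not puma's code; the class and test names are invented for illustration):

```ruby
require "minitest/autorun"

class TestOrdering < Minitest::Test
  # Minitest's own (deliberately discouraging) opt-out of random ordering:
  # tests in this class run in alphabetical order instead of a shuffled one.
  # Equivalently, one can define: def self.test_order; :alpha; end
  i_suck_and_my_tests_are_order_dependent!

  def test_a_runs_first
    assert true
  end

  def test_b_runs_second
    assert true
  end
end
```

The method name reflects the maintainers' view that order-dependent tests are a design smell, which is presumably why random order is the default; pinning the seed, as suggested above, avoids touching the test code at all.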