Fix exceptions encountered during tests with python 3.11 #199
Conversation
Hmm, this fixes the other error, it seems, but creates a new error with DynamoDB that I did not see locally. I need to investigate.
Thanks for looking into this. Since this is new behavior as of 3.11.5, another option would be to update the GitHub Action to use 3.11.4 and hope this gets fixed in a future patch release.
That would be an option. Something that bothers me is the test execution in general. I see from the execution on GitHub that 2 processes are spawned:
that seems to come from
The tests succeed in general. However, if I run the same command on my local machine, some of the tests fail, e.g. when run with --numprocesses=2 to simulate the GitHub Actions environment. The tests only succeed locally when run in a single process.
It's most likely because the tests run in parallel but access the same test resources, so the assumptions made in the tests no longer hold. However, I don't understand why it succeeds on GitHub.
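If shared resources are indeed the problem, one common pattern is to namespace each resource per pytest-xdist worker. This is only a sketch: `worker_scoped_name` and the `"table"` base name are hypothetical, but `PYTEST_XDIST_WORKER` is the environment variable pytest-xdist sets on each worker (e.g. `gw0`, `gw1`):

```python
import os


def worker_scoped_name(base: str) -> str:
    """Suffix a resource name with the pytest-xdist worker id.

    PYTEST_XDIST_WORKER is set by pytest-xdist on each worker process;
    it is absent in a single-process run, so we fall back to "main".
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER", "main")
    return f"{base}-{worker}"


# e.g. use as a pytest fixture so each worker creates its own table:
# @pytest.fixture
# def table_name():
#     return worker_scoped_name("test-table")
```

With per-worker names, parallel workers no longer touch the same resource, so the in-test assumptions hold regardless of the worker count.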
What errors are you seeing locally? The
Oh, that's the trick I was missing, ty.
The main difference between local and CI environments is the resources available. GitHub Actions runners only have 2 cores, so at most 2 tests will run at a time, while your local machine can probably run 8+ at a time. So if there's a race condition, it's less likely to show up in GitHub Actions. It's possible that this
OK, pinned the 3.11 patch version for now; test execution looks very clean now.
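For reference, pinning an exact patch release in a GitHub Actions workflow looks roughly like this (a sketch; the actual workflow file, job name, and action versions in this repo may differ):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          # Exact patch version avoids picking up the 3.11.5 behavior change.
          python-version: "3.11.4"
```

Without the quotes and full patch number, `setup-python` resolves a spec like `3.11` to the latest available patch release, which is how 3.11.5 was picked up in the first place.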
There are various exceptions during test execution when run on python 3.11:
These seem to be related to this issue:
python/cpython#109538
This PR adds the workaround as described in the ticket.