Random failure in src/sage/rings/qqbar.py #28296
Comments
comment:1
This doctest is not supposed to take more than a second
Should I increase it to 5 seconds?
comment:2
Did you try running it on a Raspberry Pi, for example? I understand why you would want to test for speed regressions, but this isn't a good way of doing it. The doctest framework already has an overall speed factor for the machine, collected from previous runs; it is used for the "slow doctest" warning. At the very least this should be taken into account. For failed tests it should also display the actual time taken, not just crash with an AlarmInterrupt.
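The idea of a machine speed factor can be sketched as follows: calibrate once against a fixed busy loop, then scale every per-test time limit by the result. Everything here (the reference constant, the helper names) is illustrative, not Sage's actual code:

```python
import time

def machine_speed_factor(reference_seconds=0.05, n=200_000):
    """Estimate how much slower this machine is than a reference one by
    timing a fixed busy loop.  `reference_seconds` is the assumed time
    the loop takes on the reference machine -- an illustrative constant."""
    t0 = time.process_time()
    s = 0
    for i in range(n):
        s += i * i
    elapsed = time.process_time() - t0
    return max(elapsed / reference_seconds, 1e-6)

def scaled_timeout(base_timeout, factor):
    """Give a slow machine (factor > 1) proportionally more time."""
    return base_timeout * factor
```

A timeout scaled this way would let the same doctest pass on a fast workstation and a Raspberry Pi alike.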
comment:3
Replying to @vbraun:
Of course I did not check on a Raspberry Pi. How should I test for speed regressions then? The only framework available is doctests. The only purpose of #17895 was to speed up execution; nothing changed from an input/output point of view.
comment:4
Sure. The point that I'm trying to make is that, at least for now, you have to be very conservative with the upper time limit in a doctest.
Commit:
Branch: public/28296
Author: Vincent Delecroix
New commits:
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
Reviewer: Volker Braun
comment:8
Replying to @videlec:
It's not just problematic when testing on a Raspberry Pi. For example, our (NixOS) build servers rebuild and retest Sage regularly, and they are sometimes under heavy load. A test can then take an excessive amount of time, but it should not fail (which would fail the whole Sage package). Performance regressions are worth measuring, but they are hard to measure and should be tested entirely separately from the functionality tests.
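The heavy-load and suspend scenarios above hinge on the difference between wall-clock time and CPU time: external delays inflate the former but barely touch the latter. A small sketch (the `timed` helper is hypothetical) makes the distinction concrete:

```python
import time

def timed(func):
    """Return (result, wall_seconds, cpu_seconds) for a call to func()."""
    wall0, cpu0 = time.monotonic(), time.process_time()
    result = func()
    return result, time.monotonic() - wall0, time.process_time() - cpu0

# Sleeping stands in for a load- or suspend-induced delay: it consumes
# wall-clock time but almost no CPU time, which is why a wall-clock alarm
# can fail a functionality test that did essentially no work.
_, wall, cpu = timed(lambda: time.sleep(0.2))
```

An alarm based on `process_time` rather than `monotonic` would be immune to suspend, though it still would not catch regressions on a loaded machine reliably.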
comment:9
So as a summary, I propose to remove the alarm completely. If I put my laptop to sleep in the middle of running the test suite and wake it up an hour later, the test suite should still pass. If we want to keep performance tests in the main test suite, they should at least be behind an optional flag so they can be disabled.
comment:10
Replying to @timokau:
Could you open a ticket? This is not the only test concerned. And this should be documented in the developer guide:
comment:11
Hiding it behind a # optional - benchmark tag (or similar) sounds like a good solution. If you make a ticket I'll review it ;-)
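For illustration, a doctest hidden behind such a tag might look like the sketch below, following Sage's existing "# optional - <tag>" doctest convention. The tag name "benchmark" is the suggestion from this comment, not an existing tag, and "some_expensive_computation" is a placeholder:

```
sage: alarm(4)                      # optional - benchmark
sage: some_expensive_computation()  # optional - benchmark
```

Untagged runs would then skip these lines entirely, so ordinary functionality testing never depends on timing.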
Changed branch from public/28296 to |
I'm seeing this with a rather high frequency:
I don't understand how it can run into an AlarmInterrupt.
Component: number theory
Keywords: random_fail
Author: Vincent Delecroix
Branch/Commit: 12c0b20
Reviewer: Volker Braun
Issue created by migration from https://trac.sagemath.org/ticket/28296